Archive for the 'Research' Category

Highlights from CCS 2017

Saturday, November 18th, 2017

The 24th ACM Conference on Computer and Communications Security was held in Dallas, 30 October – 3 November. Being Program Committee co-chair for a conference like this is a major commitment, with work continuing throughout much of the year preceding the conference. The conference had over 1000 registered attendees, a record for any academic security research conference.

Here are a few highlights from the conference week.



PC Chairs’ Welcome (opening session)



Giving the PC Chairs’ Welcome Talk



Audience at Opening Session



ACM CCS 2017 Paper Awards Finalists



CCS 2017 Awards Banquet




At the Awards Banquet, I got to present a Best Paper award to SRG alum Jack Doerner (I was, of course, recused by conflict of interest from any decisions on his paper).




UVA Lunch (around the table starting at front left): Suman Jana (honorary Wahoo by marriage), Darion Cassel (SRG BSCS 2017, now at CMU), Will Hawkins, Jason Hiser, Samee Zahur (SRG PhD 2016, now at Google), Jack Doerner (SRG BACS 2016, now at Northeastern), Joe Calandrino (now at FTC); Back right to front: Ben Kreuter (now at Google), Anh Nguyen-Tuong, Jack Davidson, Yuan Tian, Yuchen Zhou (SRG PhD 2015, now at Palo Alto Networks), David Evans.

First Workshop for Women in Cybersecurity

Friday, November 17th, 2017

I gave a talk at the First ACM Workshop for Women in Cybersecurity (affiliated with ACM CCS 2017) on Truth, Social Justice (and the American Way?):




There’s also a short paper, loosely related to the talk: [PDF]





Alumna-Turned-Internet Security Expert Listed Among Nation’s Top Young Innovators

Friday, September 22nd, 2017

Adrienne Porter Felt (SRG BSCS 2008) was selected as one of Technology Review’s 35 Innovators Under 35.

UVA Today has an article: Alumna-Turned-Internet Security Expert Listed Among Nation’s Top Young Innovators, UVA Today, 21 September 2017.

Felt started working in security when she was a second-year engineering student, responding to a request from computer science professor David Evans, who taught the “Program and Data Representation” course. Evans said Felt stood out amongst her peers because of her “well-thought-out answers and meticulous diagrams.”

“For the summer after her second year, she joined a project one of my Ph.D. students was working on to use the disk drive controller to detect malware based on the reads and writes it makes that are visible to the disk,” Evans said. “She did great work on that project, and by the end of the summer was envisioning her own research ideas.

“She came up with the idea of looking at privacy issues in Facebook applications, which, back in 2007, was just emerging, and no one else was yet looking into privacy issues like this.”

Taking Evans’ offer for a research project was a turning point in Felt’s life, showing her something she liked that she could do well.

“It turned out that I really loved it,” she said. “I like working in privacy and security because I enjoy helping people control their digital experiences. I think of it as, ‘I’m professionally paranoid, so that other people don’t need to be.’”

In her final semester as an undergraduate student at UVA, Felt taught a student-led class on web browsers.

“Her work at Google has dramatically changed the way web browsers convey security information to users, making the web safer for everyone,” Evans said. “Her team at Google has been studying deployment of HTTPS, the protocol that allows web clients to securely communicate with servers, and has had fantastic success in improving security of websites worldwide, as well as a carefully designed plan to use browser interfaces to further encourage adoption of secure web protocols.”

SRG at USENIX Security 2017

Saturday, August 12th, 2017

Several SRG students presented posters at USENIX Security Symposium in Vancouver, BC.


Approaches to Evading Windows PE Malware Classifiers
Anant Kharkar, Helen Simecek, Weilin Xu, David Evans, and Hyrum S. Anderson (Endgame)

JSPolicy: Policied Sandboxes for Untrusted Third-Party JavaScript
Ethan Lowman and David Evans

EvadeML-Zoo: A Benchmarking and Visualization Tool for Adversarial Machine Learning
Weilin Xu, Andrew Norton, Noah Kim, Yanjun Qi, and David Evans

Decentralized Certificate Authorities
Hannah Li, Bargav Jayaraman, and David Evans

In the Red Corner…

Monday, August 7th, 2017

The Register has a story on the work Anant Kharkar and collaborators at Endgame, Inc. are doing on using reinforcement learning to find evasive malware: In the red corner: Malware-breeding AI. And in the blue corner: The AI trying to stop it, by Katyanna Quach, The Register, 2 August 2017.



Antivirus makers want you to believe they are adding artificial intelligence to their products: software that has learned how to catch malware on a device. There are two potential problems with that. Either it’s marketing hype and not really AI – or it’s true, in which case don’t forget that such systems can still be hoodwinked.

It’s relatively easy to trick machine-learning models – especially in image recognition. Change a few pixels here and there, and an image of a bus can be warped so that the machine thinks it’s an ostrich. Now take that thought and extend it to so-called next-gen antivirus.

The researchers from Endgame and the University of Virginia are hoping that by integrating the malware-generating system into OpenAI’s Gym platform, more developers will help sniff out more adversarial examples to improve machine-learning virus classifiers.

Although Evans believes that Endgame’s research is important, using such a method to beef up security “reflects the immaturity” of AI and infosec. “It’s mostly experimental and the effectiveness of defenses is mostly judged against particular known attacks, but doesn’t say much about whether it can work against newly discovered attacks,” he said.

“Moving forward, we need more work on testing machine learning systems, reasoning about their robustness, and developing general methods for hardening classifiers that are not limited to defending against particular attacks. More broadly, we need ways to measure and build trustworthiness in AI systems.”

The research has been summarized in a paper if you want to check it out in more detail, or see the upstart’s code on Github.

CISPA Distinguished Lecture

Wednesday, July 12th, 2017

I gave a talk at CISPA in Saarbrücken, Germany, on our work with Weilin Xu and Yanjun Qi on Adversarial Machine Learning: Are We Playing the Wrong Game?.




Abstract

Machine learning classifiers are increasingly popular for security applications, and often achieve outstanding performance in testing. When deployed, however, classifiers can be thwarted by motivated adversaries who adaptively construct adversarial examples that exploit flaws in the classifier’s model. Much work on adversarial examples has focused on finding small distortions to inputs that fool a classifier. Previous defenses have been both ineffective and very expensive in practice. In this talk, I’ll describe a new very simple strategy, feature squeezing, that can be used to harden classifiers by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different inputs in the original space into a single sample. Adversarial examples can be detected by comparing the model’s predictions on the original and squeezed sample. In practice, of course, adversaries are not limited to small distortions in a particular metric space. Indeed, in security applications like malware detection it may be possible to make large changes to an input without disrupting its intended malicious behavior. I’ll report on an evolutionary framework we have developed to search for such adversarial examples that can automatically find evasive variants against state-of-the-art classifiers. This suggests that work on adversarial machine learning needs a better definition of adversarial examples, and to make progress towards understanding how classifiers and oracles perceive samples differently.
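The detection step described in the abstract can be illustrated with a minimal sketch. Everything below is a toy stand-in, not the actual implementation from the paper: `squeeze_bit_depth` is one example squeezer (color-depth reduction), and `toy_predict` is an invented classifier that is sensitive to fine-grained pixel noise, so that squeezing shifts its prediction on perturbed inputs.

```python
import numpy as np

def squeeze_bit_depth(x, bits=4):
    """Reduce color depth: coalesce many nearby inputs onto one sample."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def detect_adversarial(predict, x, threshold=0.5, bits=4):
    """Flag x as adversarial if the model's predictions on the original
    and squeezed versions disagree by more than threshold (L1 distance)."""
    p_orig = predict(x)
    p_squeezed = predict(squeeze_bit_depth(x, bits))
    return np.abs(p_orig - p_squeezed).sum() > threshold

# Toy classifier standing in for a real model: it scores the amount of
# high-frequency content, so it reacts to small pixel-level perturbations.
def toy_predict(x):
    p1 = min(1.0, 10 * np.abs(np.diff(x)).mean())
    return np.array([1 - p1, p1])

clean = np.zeros(64)          # smooth toy "image" (a 1-D pixel row)
adv = clean.copy()
adv[::2] = 0.03               # small alternating perturbation

print(detect_adversarial(toy_predict, clean))  # False
print(detect_adversarial(toy_predict, adv))    # True
```

The squeezed version of the perturbed input collapses back to the clean one (the 0.03 perturbation is below the 4-bit quantization step), so the model's predictions on the two versions diverge and the input is flagged.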

Horcrux Is a Password Manager Designed for Security and Paranoid Users

Friday, July 7th, 2017

Bleeping Computer has an article about our work on a more secure password manager: Horcrux Is a Password Manager Designed for Security and Paranoid Users, 4 July 2017.


Two researchers from the University of Virginia have developed a new password manager prototype that works quite differently from existing password manager clients.

The research team describes their password manager — which they named Horcrux — as “a password manager for paranoids,” due to its security and privacy-focused features and a unique design used for handling user passwords, both while in transit and at rest.

There are two main differences between Horcrux and currently available password manager clients.

The first is how Horcrux inserts user credentials inside web pages. Regular password managers do this by filling in the login form with the user’s data.

The second feature that makes Horcrux stand out compared to other password manager clients is how it stores user credentials.

Compared to classic solutions, Horcrux doesn’t trust one single password store but spreads user credentials across multiple servers. This means that if an attacker manages to gain access to one of the servers, he won’t gain access to all of the user’s passwords, limiting the damage of any security incident.

More details about the Horcrux design and implementation are available in the research team’s paper, entitled “Horcrux: A Password Manager for Paranoids”.
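The paper spells out Horcrux’s actual protocol; as a rough illustration of why spreading credentials across multiple stores limits the damage from a single compromised server, here is a minimal n-of-n XOR secret-sharing sketch (a standard construction, not Horcrux’s actual scheme — the function names and parameters are mine):

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int) -> list:
    """n-of-n XOR secret sharing: each share alone is indistinguishable
    from random bytes, so any n-1 shares reveal nothing about the secret."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))  # last share = secret XOR all randoms
    return shares

def recover_secret(shares: list) -> bytes:
    """XOR all shares back together to reconstruct the secret."""
    return reduce(xor_bytes, shares)

# Each share would be stored on a different, mutually distrusting server.
shares = split_secret(b"hunter2", 4)
assert recover_secret(shares) == b"hunter2"
```

An attacker who compromises one server holds only uniformly random bytes; reconstruction requires cooperation of all the stores.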

Secure Multi-Party Computation: Promises, Protocols, and Practicalities

Tuesday, June 27th, 2017

I gave a talk at ECRYPT NET: Workshop on Crypto for the Cloud & Implementation (which was combined with Paris Crypto Day) on our group’s work on secure multi-party computation, using Bargav Jayaraman and Hannah Li‘s recent work on decentralizing certificate authorities as a motivating application.



Adversarial Machine Learning: Are We Playing the Wrong Game?

Saturday, June 10th, 2017

I gave a talk at Berkeley’s International Computer Science Institute on Adversarial Machine Learning: Are We Playing the Wrong Game? (8 June 2017), focusing on the work Weilin Xu has been doing (in collaboration with myself and Yanjun Qi) on adversarial machine learning.



Abstract

Machine learning classifiers are increasingly popular for security applications, and often achieve outstanding performance in testing. When deployed, however, classifiers can be thwarted by motivated adversaries who adaptively construct adversarial examples that exploit flaws in the classifier’s model. Much work on adversarial examples, including Carlini and Wagner’s attacks which are the best results to date, has focused on finding small distortions to inputs that fool a classifier. Previous defenses have been both ineffective and very expensive in practice. In this talk, I’ll describe a new very simple strategy, feature squeezing, that can be used to harden classifiers by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different inputs in the original space into a single sample. Adversarial examples can be detected by comparing the model’s predictions on the original and squeezed sample. In practice, of course, adversaries are not limited to small distortions in a particular metric space. Indeed, it may be possible to make large changes to an input without losing its intended malicious behavior. We have developed an evolutionary framework to search for such adversarial examples, and demonstrated that it can automatically find evasive variants against state-of-the-art classifiers. This suggests that work on adversarial machine learning needs a better definition of adversarial examples, and to make progress towards understanding how classifiers and oracles perceive samples differently.
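The evolutionary search mentioned at the end of the abstract can be sketched in a few lines. This is a toy hill-climbing loop under invented assumptions, not EvadeML’s actual genetic-programming implementation (which mutates real files and checks behavior with a sandboxed oracle): here a “sample” is just a list of byte values, the classifier scores their mean, and the oracle only requires the first byte (the “payload”) to stay intact.

```python
import random

def evolve_evasive(seed, mutate, classifier, oracle,
                   pop_size=20, generations=100, rng=None):
    """Toy evolutionary search for an evasive variant: keep mutants that
    preserve malicious behavior (per the oracle) and lower the classifier's
    maliciousness score; stop when a variant scores as benign (< 0.5)."""
    rng = rng or random.Random(0)
    population = [seed]
    for _ in range(generations):
        scored = []
        for parent in population:
            child = mutate(parent, rng)
            if oracle(child):                 # malicious behavior preserved?
                scored.append((classifier(child), child))
        scored.sort(key=lambda t: t[0])
        if scored and scored[0][0] < 0.5:     # evasive variant found
            return scored[0][1]
        population = [v for _, v in scored[:pop_size]] or population
    return None

seed = [200] * 32
mutate = lambda s, rng: [s[0]] + [max(0, b - rng.randrange(30)) for b in s[1:]]
classifier = lambda s: sum(s) / (255 * len(s))   # flags high mean byte value
oracle = lambda s: s[0] == 200                    # "payload" byte intact

variant = evolve_evasive(seed, mutate, classifier, oracle)
assert variant is not None and oracle(variant) and classifier(variant) < 0.5
```

The search only accepts mutants the oracle approves, which is the point of the framework: large, behavior-preserving changes are in play, not just small perturbations in a fixed metric space.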

Modest Proposals for Google

Friday, June 9th, 2017

Great to meet up with Wahooglers Adrienne Porter Felt, Ben Kreuter, Jonathan McCune, Samee Zahur (Google’s latest addition from my group), and (honorary UVAer interning at Google this summer) Riley Spahn at Google’s Research Summit on Security and Privacy this week in Mountain View.

As part of the meeting, the academic attendees were given a chance to give a 3-minute pitch to tell Google what we want them to do. The slides I used are below, but probably don’t make much sense by themselves.

The main modest proposal I tried to make is that Google should take it on as their responsibility to make sure nothing bad ever happens to anyone anywhere. They can start with nothing bad ever happening on the Internet, but with the Internet pretty much everywhere, should expand the scope to cover everywhere soon.

To start, consider an analogy from the days when Microsoft ruled computing. There was a time when Windows bluescreens were a frequent experience for most Windows users (and at the time, this meant pretty much all computer users). Microsoft analyzed the crashes and concluded that nearly all were caused by bugs in device drivers, so it wasn’t their fault and it was horribly unfair for them to be blamed for the crashes. Of course, to people losing their work because of a crash, it doesn’t really matter whose code was to blame. By the end of the 90s, though, Microsoft took on the mission of reducing the problems with device drivers, and a lot of great work came out of this (e.g., the Static Driver Verifier), with dramatic improvements in the typical end user’s computing experience.

Today, Google rules a large chunk of computing. Lots of bad things happen on the Internet that are not Google’s fault. As the latest example in the news, the leaked NSA report of Russian attacks on election officials describes a phishing attack that exploits vulnerabilities in Microsoft Word. It’s easy to put the blame on overworked election officials who didn’t pay enough attention to the books on universal computation they read when they were children, or to put it on Microsoft for allowing Word to be exploited.

But, Google’s name is also all over this report – the emails went through gmail accounts, the attacks phished for Google credentials, and the attackers used plausibly-named gmail accounts. Even if Google isn’t to blame for the problems that enable such an attack, they are uniquely positioned to solve it, not only because of their engineering capabilities and resources, but also because of the comprehensive view they have of what happens on the Internet and their powerful ability to influence it.

Google is a big company, with lots of decentralized teams, some of which definitely seem to get this already. (I’d point to the work the Chrome Security Team has done, MOAR TLS, and RAPPOR as just a few of many examples of things that involve a mix of technical and engineering depth and a broad mission to make computing better for everyone, not obviously connected to direct business interests.) But, there are also lots of places where Google doesn’t seem to be putting serious effort into solving problems it could, instead viewing them as outside its scope because it’s really someone else’s fault (my particular motivating example was PDF malware). As a company, Google is too capable, important, and ubiquitous to view problems as out-of-scope just because they are obviously undecidable or obviously really someone else’s fault.



[Also on Google +]