Our research seeks to empower individuals and organizations to control how their data is used. We use techniques from cryptography, programming languages, machine learning, operating systems, and other areas to both understand and improve the security of computing as practiced today, and as envisioned in the future.

Security Research Group Lunch (12 December 2017)
Haina Li, Felix Park, Mainuddin Jonas, Anant Kharkar, Faysal Hossain Shezan, Fnu Suya, David Evans, Yuan Tian, Riley Spahn, Weilin Xu, Guy "Jack" Verrier

Everyone is welcome at our research group meetings. To get announcements, join our Slack Group (any @virginia.edu email address can join themselves, or email me to request an invitation).

Projects

Secure Multi-Party Computation
Obliv-C · MightBeEvil
Web and Mobile Security
ScriptInspector · SSOScan
Adversarial Machine Learning
EvadeML
Past Projects
Side-Channel Analysis · Perracotta · Splint
N-Variant Systems · Physicrypt · Social Networking APIs

News

Secure Multi-Party Computation: Promises, Protocols, and Practicalities

27 June 2017

I gave a talk at ECRYPT NET: Workshop on Crypto for the Cloud & Implementation (which was combined with Paris Crypto Day) on our group’s work on secure multi-party computation, using Bargav Jayaraman and Hannah Li’s recent work on decentralizing certificate authorities as a motivating application.




Adversarial Machine Learning: Are We Playing the Wrong Game?

10 June 2017

I gave a talk at Berkeley’s International Computer Science Institute on Adversarial Machine Learning: Are We Playing the Wrong Game? (8 June 2017), focusing on the work Weilin Xu has been doing (in collaboration with Yanjun Qi and me) on adversarial machine learning.



Abstract

Machine learning classifiers are increasingly popular for security applications, and often achieve outstanding performance in testing. When deployed, however, classifiers can be thwarted by motivated adversaries who adaptively construct adversarial examples that exploit flaws in the classifier’s model. Much work on adversarial examples, including Carlini and Wagner’s attacks which are the best results to date, has focused on finding small distortions to inputs that fool a classifier. Previous defenses have been both ineffective and very expensive in practice. In this talk, I’ll describe a new very simple strategy, feature squeezing, that can be used to harden classifiers by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different inputs in the original space into a single sample. Adversarial examples can be detected by comparing the model’s predictions on the original and squeezed sample. In practice, of course, adversaries are not limited to small distortions in a particular metric space. Indeed, it may be possible to make large changes to an input without losing its intended malicious behavior. We have developed an evolutionary framework to search for such adversarial examples, and demonstrated that it can automatically find evasive variants against state-of-the-art classifiers. This suggests that work on adversarial machine learning needs a better definition of adversarial examples, and to make progress towards understanding how classifiers and oracles perceive samples differently.


Modest Proposals for Google

9 June 2017

Great to meet up with Wahooglers Adrienne Porter Felt, Ben Kreuter, Jonathan McCune, Samee Zahur (Google’s latest addition from my group), and (honorary UVAer interning at Google this summer) Riley Spahn at Google’s Research Summit on Security and Privacy this week in Mountain View.

As part of the meeting, the academic attendees were given a chance to give a 3-minute pitch to tell Google what we want them to do. The slides I used are below, but probably don’t make much sense by themselves.

The main modest proposal I tried to make is that Google should take it on as their responsibility to make sure nothing bad ever happens to anyone anywhere. They can start with nothing bad ever happening on the Internet, but with the Internet pretty much everywhere, should expand the scope to cover everywhere soon.

To start with an analogy from the days when Microsoft ruled computing: there was a time when Windows bluescreens were a frequent experience for most Windows users (and at the time, this pretty much meant all computer users). Microsoft analyzed the crashes and concluded that nearly all were because of bugs in device drivers, so it wasn’t their fault and it was horribly unfair for them to be blamed for the crashes. Of course, to people losing their work because of a crash, it doesn’t really matter whose code was to blame. By the end of the 90s, though, Microsoft took on the mission of reducing the problems with device drivers, and a lot of great work came out of this (e.g., the Static Driver Verifier), with dramatic improvements in the typical end user’s computing experience.

Today, Google rules a large chunk of computing. Lots of bad things happen on the Internet that are not Google’s fault. As the latest example in the news, the leaked NSA report of Russian attacks on election officials describes a phishing attack that exploits vulnerabilities in Microsoft Word. It’s easy to put the blame on overworked election officials who didn’t pay enough attention to books on universal computation they read when they were children, or to put it on Microsoft for allowing Word to be exploited.

But Google’s name is also all over this report: the emails went through gmail accounts, the attacks phished for Google credentials, and the attackers used plausibly-named gmail accounts. Even if Google isn’t to blame for the problems that enable such an attack, they are uniquely positioned to solve it, both because of their engineering capabilities and resources, and because of the comprehensive view they have of what happens on the Internet and their powerful ability to influence it.

Google is a big company, with lots of decentralized teams, some of which definitely seem to get this already. (I’d point to the work the Chrome Security Team has done, MOAR TLS, and RAPPOR as just a few of many examples of things that involve a mix of technical and engineering depth and a broad mission to make computing better for everyone, not obviously connected to direct business interests.) But there are also lots of places where Google doesn’t seem to be putting serious effort into solving problems they could solve, instead viewing them as out of scope because it’s really someone else’s fault (my particular motivating example was PDF malware). As a company, Google is too capable, important, and ubiquitous to view problems as out-of-scope just because they are obviously undecidable or obviously really someone else’s fault.



[Also on Google +]


Feature Squeezing: Detecting Adversarial Examples

10 April 2017

Although deep neural networks (DNNs) have achieved great success in many computer vision tasks, recent studies have shown they are vulnerable to adversarial examples. Such examples, typically generated by adding small but purposeful distortions, can frequently fool DNN models. Previous studies to defend against adversarial examples have mostly focused on refining the DNN models, and have either shown limited success or suffered from expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample.

By comparing a DNN model’s prediction on the original input with that on the squeezed input, feature squeezing detects adversarial examples with high accuracy and few false positives. If the original and squeezed examples produce substantially different outputs from the model, the input is likely to be adversarial. By measuring the disagreement among predictions and selecting a threshold value, our system outputs the correct prediction for legitimate examples and rejects adversarial inputs.
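As a rough sketch of that detection rule (the model interface, squeezing function, and threshold here are illustrative placeholders, not the exact configuration from the paper):

```python
import numpy as np

def detect_adversarial(model, x, squeeze, threshold):
    """Flag an input as adversarial if the model's predictions on the
    original and squeezed versions disagree by more than a threshold."""
    p_original = model.predict(x)           # softmax probability vector on the raw input
    p_squeezed = model.predict(squeeze(x))  # prediction on the squeezed input
    score = np.abs(p_original - p_squeezed).sum()  # L1 distance between the two predictions
    return score > threshold
```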

So far, we have explored two instances of feature squeezing: reducing the color bit depth of each pixel and smoothing using a spatial filter. These strategies are straightforward, inexpensive, and complementary to defensive methods that operate on the underlying model, such as adversarial training.
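A minimal sketch of these two squeezers (parameter values are illustrative; the paper evaluates several bit depths and filter sizes):

```python
import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x, bits=1):
    """Quantize pixel values in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def smooth_spatially(x, size=2):
    """Apply a median filter to a single grayscale image (local spatial smoothing)."""
    return median_filter(x, size=size)
```

Either squeezer can be plugged in as the squeeze argument of the comparison sketched above.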

The figure shows the histogram of the L1 scores between the original and squeezed samples on the MNIST dataset, for 1000 non-adversarial examples as well as 1000 adversarial examples generated using both the Fast Gradient Sign Method and the Jacobian-based Saliency Map Approach. Over the full MNIST testing set, the detection accuracy is 99.74% (only 22 false positives out of 5000).

For more information, see the paper:

Weilin Xu, David Evans, Yanjun Qi. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. arXiv preprint, 4 April 2017. [PDF]

Project Site: EvadeML


Enigma 2017 Talk: Classifiers under Attack

6 March 2017

The video for my Enigma 2017 talk, “Classifiers under Attack” is now posted:



The talk focuses on work with Weilin Xu and Yanjun Qi on automatically evading malware classifiers using techniques from genetic programming. (See EvadeML.org for more details and links to code and papers, although some of the work I talked about at Enigma has not yet been published.)
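As a hedged sketch of the kind of evolutionary search involved (the mutate, classifier, and oracle functions are placeholders I am assuming for illustration; the actual EvadeML system operates on PDF file structures and uses a sandbox as its oracle):

```python
import random

def search_for_evasive_variant(seed, mutate, classifier, oracle,
                               population_size=50, generations=100):
    """Evolutionary search for a variant that the classifier scores as
    benign while the oracle confirms it retains malicious behavior."""
    population = [seed]
    for _ in range(generations):
        # Create candidate variants by mutating members of the population.
        candidates = [mutate(random.choice(population))
                      for _ in range(population_size)]
        # Keep only variants that still exhibit the malicious behavior.
        candidates = [c for c in candidates if oracle(c)]
        if not candidates:
            continue
        # Retain the variants the classifier scores as least malicious.
        population = sorted(candidates, key=classifier)[:population_size]
        if classifier(population[0]) < 0.5:  # assumed decision threshold
            return population[0]             # evasive, and still malicious
    return None
```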

Enigma was an amazing conference – one of the most worthwhile, and definitely the most diverse security/privacy conference I’ve been to in my career, both in terms of where people were coming from (nearly exactly 50% from industry and 50% from academic/government/non-profits), intellectual variety (range of talks from systems and crypto to neuroscience, law, and journalism), and the demographics of the attendees and speakers (not to mention a way-cool stage setup).

The model of having speakers do on-line practice talks with their session was also very valuable (Enigma requires speakers to agree to do three on-line practice talk sessions before the conference, and from what I hear most speakers and sessions did cooperate with this, and it showed in the quality of the sessions), and something I hope other conferences will be able to adopt. You actually end up with talks that fit together, build on things others present, and avoid unnecessary duplication, as well as improving all the talks themselves.


CCS 2017

18 January 2017

I’m program co-chair, with Tal Malkin and Dongyan Xu, for ACM CCS 2017.

The conference will be in Dallas, 30 Oct – 3 Nov 2017. Paper submissions are due May 19 (8:29PM PDT). It’ll be a while before the CFP is ready, but the conference website is now up!




Growth of MPC Research

13 January 2017

I led a discussion breakout at the NSF SaTC PIs meeting on Secure Computation: Progress, Methods, Challenges, and Open Questions. To set up the session, I looked at the number of papers in Google Scholar that match "secure computation" OR "multi-party computation" (which seems like a fairly good measure of research activity in the area):

There were 1800 MPC papers published in 2015! This means that in one year, there were more papers published on MPC than there were from the beginning of time through the end of 2004. Gotta get reading…


Aggregating Private Sparse Learning Models Using Multi-Party Computation

9 December 2016

Bargav Jayaraman presented on privacy-preserving sparse learning at the Private Multi‑Party Machine Learning workshop attached to NIPS 2016 in Barcelona.



A short paper summarizing the work is: Lu Tian, Bargav Jayaraman, Quanquan Gu, and David Evans. Aggregating Private Sparse Learning Models Using Multi-Party Computation [PDF, 6 pages].

At the workshop, Jack Doerner also presented a talk on An Introduction to Practical Multiparty Computation.


O’Reilly Security 2016: Classifiers Under Attack

4 November 2016

I gave a talk on Weilin Xu’s work (in collaboration with Yanjun Qi) on evading machine learning classifiers at the O’Reilly Security Conference in New York: Classifiers Under Attack, 2 November 2016.

Machine-learning models are popular in security tasks such as malware detection, network intrusion detection, and spam detection. These models can achieve extremely high accuracy on test datasets and are widely used in practice.

However, these results are for particular test datasets. Unlike other fields, security tasks involve adversaries responding to the classifier. For example, attackers may try to generate new malware deliberately designed to evade existing classifiers. This breaks the assumption of machine-learning models that the training data and the operational data share the same data distribution. As a result, it is important to consider attackers’ efforts to disrupt or evade the generated models.

David Evans provides an introduction to the techniques adversaries use to circumvent machine-learning classifiers and presents case studies of machine classifiers under attack. David then outlines methods for automatically predicting the robustness of a classifier when used in an adversarial context and techniques that may be used to harden a classifier to decrease its vulnerability to attackers.




Demystifying the Blockchain Hype

26 October 2016

I gave a talk introducing the blockchain at a meetup hosted by Willow Tree Apps:
Demystifying the Blockchain Hype, 25 October 2016.

Over the past few years, explosive growth in cryptocurrencies (especially Bitcoin) has led to tremendous excitement about blockchains as a powerful tool for just about everything. Without assuming any previous background in cryptography, I’ll explain the cryptographic and networking technologies that make blockchains possible, explain why people are so excited about blockchains, and why you shouldn’t believe everything you hear about them.

The slides are below (I believe a recording will also be available soon).
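As a toy illustration of the hash-chaining and proof-of-work ideas mentioned above (a simplified sketch I am adding here, not material from the talk; the block fields and difficulty are arbitrary):

```python
import hashlib
import json
import time

def block_hash(block):
    """SHA-256 hash of the block's canonical JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(transactions, previous_hash, difficulty=4):
    """Find a nonce so the block's hash has the required number of
    leading zeros (a toy proof-of-work)."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
        "nonce": 0,
    }
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

# Each block commits to its predecessor's hash, so tampering with any
# earlier block invalidates every later block in the chain.
genesis = mine_block(["genesis"], previous_hash="0" * 64)
next_block = mine_block(["alice pays bob 1 coin"], previous_hash=block_hash(genesis))
```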