Archive for the 'Research' Category

Mutually Assured Destruction and the Impending AI Apocalypse

Monday, August 13th, 2018

I gave a keynote talk at the USENIX Workshop on Offensive Technologies (WOOT), Baltimore, Maryland, 13 August 2018.

The title and abstract below are what I provided for the WOOT program, but unfortunately (or maybe fortunately for humanity!) I wasn’t able to put together a talk that actually matched them.

The history of security includes a long series of arms races, where a new technology emerges and is subsequently developed and exploited by both defenders and attackers. Over the past few years, “Artificial Intelligence” has re-emerged as a potentially transformative technology, and deep learning in particular has produced a barrage of amazing results. We are in the very early stages of understanding the potential of this technology in security, but more worryingly, seeing how it may be exploited by malicious individuals and powerful organizations. In this talk, I’ll look at what lessons might be learned from previous security arms races, consider how asymmetries in AI may be exploited by attackers and defenders, touch on some recent work in adversarial machine learning, and hopefully help progress-loving Luddites figure out how to survive in a world overrun by AI doppelgängers, GAN gangs, and gibbon-impersonating pandas.

Dependable and Secure Machine Learning

Saturday, July 7th, 2018

I co-organized, with Homa Alemzadeh and Karthik Pattabiraman, a workshop on trustworthy machine learning attached to DSN 2018, in Luxembourg: DSML: Dependable and Secure Machine Learning.

SRG at IEEE S&P 2018

Tuesday, May 29th, 2018

Group Dinner


Including our newest faculty member, Yonghwi Kwon, joining UVA in Fall 2018!

Yuan Tian, Fnu Suya, Mainuddin Jonas, Yonghwi Kwon, David Evans, Weihang Wang, Aihua Chen, Weilin Xu

Poster Session


Fnu Suya (with Yuan Tian and David Evans), Adversaries Don’t Care About Averages: Batch Attacks on Black-Box Classifiers [PDF]

Mainuddin Jonas (with David Evans), Enhancing Adversarial Example Defenses Using Internal Layers [PDF]

Huawei STW: Lessons from the Last 3000 Years of Adversarial Examples

Wednesday, May 23rd, 2018

I spoke on Lessons from the Last 3000 Years of Adversarial Examples at Huawei’s Strategy and Technology Workshop in Shenzhen, China, 15 May 2018.

We also got to tour Huawei’s new research and development campus, under construction about 40 minutes from Shenzhen. It is pretty close to Disneyland, with its own railroad and villages themed after different European cities (Paris, Bologna, etc.).



Huawei’s New Research and Development Campus [More Pictures]

Unfortunately, pictures were not allowed on our tour of the production line. It was not so surprising that nearly all of the work is done by machines, but it was surprising to me how much of the remaining human work is completely robotic. The human workers (called “operators”) mostly scan QR codes on parts and follow the directions that light up when they do, or scan bins and follow directions on a screen to collect parts from them, scanning each part as it is put into the bin. This is the kind of system that leads to remarkably high production quality. The parts are mostly delivered on tapes that are fed into the machines, and many of the machines along the line are primarily for testing. A “bottleneck” marker is placed on any point that is holding up the production line.

The “grapey board” (public, at least within the factory) keeps track of the happiness of the workers: each operator puts up a smiley (or frowny) face on the board to show their mood for the day, monitored carefully by the managers. There is also a bunch of grapes to show performance for the month. If an operator does something good, a grape is colored green; if they do something bad, a grape is colored black. There was quite a bit of discussion among the people on the tour (mostly US- and European-based professors) about whether such a management approach would be a good idea for our research groups… (or for department chairs to use with their faculty!)



In front of Huawei’s “White House”, with Battista Biggio [More Pictures]

Feature Squeezing at NDSS

Sunday, February 25th, 2018

Weilin Xu presented Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks at the Network and Distributed System Security Symposium (NDSS) 2018 in San Diego, CA, on 21 February 2018.



Paper: Weilin Xu, David Evans, Yanjun Qi. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. NDSS 2018. [PDF]

Project Site
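
For those curious how the detection works, here is a minimal sketch of the feature squeezing idea (my own illustration under simplifying assumptions, not the authors’ released code): run the classifier on the original input and on a “squeezed” reduced-bit-depth copy, and flag the input as adversarial when the two predictions disagree by more than a threshold. The names predict_fn and threshold are placeholders.

import numpy as np

def reduce_bit_depth(x, bits=4):
    """Quantize pixel values in [0, 1] down to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def looks_adversarial(predict_fn, x, threshold=1.0):
    """predict_fn maps an image array to a vector of class probabilities."""
    p_original = predict_fn(x)
    p_squeezed = predict_fn(reduce_bit_depth(x))
    # A large L1 gap means the prediction depends on fine detail the
    # squeezer removed, which is characteristic of adversarial inputs.
    return np.abs(p_original - p_squeezed).sum() > threshold

The paper combines several squeezers (bit-depth reduction and spatial smoothing) and thresholds the maximum disagreement; this sketch shows only the bit-depth squeezer.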

Highlights from CCS 2017

Saturday, November 18th, 2017

The 24th ACM Conference on Computer and Communications Security was held in Dallas, 30 October – 3 November. Being Program Committee co-chair for a conference like this is nearly a full-year commitment, with the work continuing throughout most of the year preceding the conference. The conference had over 1000 registered attendees, a record for any academic security research conference.

Here are a few highlights from the conference week.



PC Chairs’ Welcome (opening session)



Giving the PC Chairs’ Welcome Talk



Audience at Opening Session



ACM CCS 2017 Paper Awards Finalists



CCS 2017 Awards Banquet




At the Awards Banquet, I got to present a Best Paper award to SRG alum Jack Doerner (I was, of course, recused due to conflict of interest from any decisions on his paper).




UVA Lunch (around the table starting at front left): Suman Jana (honorary Wahoo by marriage), Darion Cassel (SRG BSCS 2017, now at CMU), Will Hawkins, Jason Hiser, Samee Zahur (SRG PhD 2016, now at Google), Jack Doerner (SRG BACS 2016, now at Northeastern), Joe Calandrino (now at FTC); Back right to front: Ben Kreuter (now at Google), Anh Nguyen-Tuong, Jack Davidson, Yuan Tian, Yuchen Zhou (SRG PhD 2015, now at Palo Alto Networks), David Evans.

First Workshop for Women in Cybersecurity

Friday, November 17th, 2017

I gave a talk at the First ACM Workshop for Women in Cybersecurity (affiliated with ACM CCS 2017) on Truth, Social Justice (and the American Way?):




There’s also a short paper, loosely related to the talk: [PDF]





Alumna-Turned-Internet Security Expert Listed Among Nation’s Top Young Innovators

Friday, September 22nd, 2017

Adrienne Porter Felt (SRG BSCS 2008) was selected as one of Technology Review’s 35 Innovators Under 35.

UVA Today has an article: Alumna-Turned-Internet Security Expert Listed Among Nation’s Top Young Innovators, UVA Today, 21 September 2017.

Felt started working in security when she was a second-year engineering student, responding to a request from computer science professor David Evans, who taught the “Program and Data Representation” course. Evans said Felt stood out amongst her peers because of her “well-thought-out answers and meticulous diagrams.”

“For the summer after her second year, she joined a project one of my Ph.D. students was working on to use the disk drive controller to detect malware based on the reads and writes it makes that are visible to the disk,” Evans said. “She did great work on that project, and by the end of the summer was envisioning her own research ideas.

“She came up with the idea of looking at privacy issues in Facebook applications, which, back in 2007, was just emerging, and no one else was yet looking into privacy issues like this.”

Taking Evans’ offer of a research project was a turning point in Felt’s life, showing her something she liked and could do well.

“It turned out that I really loved it,” she said. “I like working in privacy and security because I enjoy helping people control their digital experiences. I think of it as, ‘I’m professionally paranoid, so that other people don’t need to be.’”

In her final semester as an undergraduate student at UVA, Felt taught a student-led class on web browsers.

“Her work at Google has dramatically changed the way web browsers convey security information to users, making the web safer for everyone,” Evans said. “Her team at Google has been studying deployment of HTTPS, the protocol that allows web clients to securely communicate with servers, and has had fantastic success in improving security of websites worldwide, as well as a carefully designed plan to use browser interfaces to further encourage adoption of secure web protocols.”

SRG at USENIX Security 2017

Saturday, August 12th, 2017

Several SRG students presented posters at the USENIX Security Symposium in Vancouver, BC.


Approaches to Evading Windows PE Malware Classifiers
Anant Kharkar, Helen Simecek, Weilin Xu, David Evans, and Hyrum S. Anderson (Endgame)

JSPolicy: Policied Sandboxes for Untrusted Third-Party JavaScript
Ethan Lowman and David Evans

EvadeML-Zoo: A Benchmarking and Visualization Tool for Adversarial Machine Learning
Weilin Xu, Andrew Norton, Noah Kim, Yanjun Qi, and David Evans

Decentralized Certificate Authorities
Hannah Li, Bargav Jayaraman, and David Evans

In the Red Corner…

Monday, August 7th, 2017

The Register has a story on the work Anant Kharkar and collaborators at Endgame, Inc. are doing on using reinforcement learning to find evasive malware: In the red corner: Malware-breeding AI. And in the blue corner: The AI trying to stop it, by Katyanna Quach, The Register, 2 August 2017.



Antivirus makers want you to believe they are adding artificial intelligence to their products: software that has learned how to catch malware on a device. There are two potential problems with that. Either it’s marketing hype and not really AI – or it’s true, in which case don’t forget that such systems can still be hoodwinked.

It’s relatively easy to trick machine-learning models – especially in image recognition. Change a few pixels here and there, and an image of a bus can be warped so that the machine thinks it’s an ostrich. Now take that thought and extend it to so-called next-gen antivirus.

The researchers from Endgame and the University of Virginia are hoping that by integrating the malware-generating system into OpenAI’s Gym platform, more developers will help sniff out more adversarial examples to improve machine-learning virus classifiers.

Although Evans believes that Endgame’s research is important, using such a method to beef up security “reflects the immaturity” of AI and infosec. “It’s mostly experimental and the effectiveness of defenses is mostly judged against particular known attacks, but doesn’t say much about whether it can work against newly discovered attacks,” he said.

“Moving forward, we need more work on testing machine learning systems, reasoning about their robustness, and developing general methods for hardening classifiers that are not limited to defending against particular attacks. More broadly, we need ways to measure and build trustworthiness in AI systems.”

The research has been summarized in a paper, here, if you want to check it out in more detail, or see the upstart’s code on GitHub.
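
To make the “change a few pixels here and there” point from the excerpt concrete, here is a minimal sketch of the fast gradient sign method, one standard way such perturbations are crafted. This is my illustration, not code from the Endgame/UVA project; model, x, y, and epsilon are placeholder names, and it assumes a differentiable image classifier rather than a malware detector.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of image batch x with labels y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Shift each pixel by +/- epsilon in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

Attacking a malware classifier is harder, since the modified program still has to run and keep its behavior; that is why the work described above uses reinforcement learning over file modifications intended to preserve functionality, rather than gradient steps.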