Our research seeks to empower individuals and organizations to control how their data is used. We use techniques from cryptography, programming languages, machine learning, operating systems, and other areas to both understand and improve the security of computing as practiced today, and as envisioned in the future.

Security Research Group Lunch (12 December 2017)
Haina Li, Felix Park, Mainuddin Jonas, Anant Kharkar, Faysal Hossain Shezan, Suya,
David Evans, Yuan Tian, Riley Spahn, Weilin Xu, Guy "Jack" Verrier

Everyone is welcome at our research group meetings. To get announcements, join our Slack Group (anyone with a @virginia.edu email address can join themselves, or email me to request an invitation).


Secure Multi-Party Computation
Obliv-C · MightBeEvil
Practical Secure Computation
Web and Mobile Security
ScriptInspector · SSOScan
Adversarial Machine Learning
Past Projects
Side-Channel Analysis · Perracotta · Splint
N-Variant Systems · Physicrypt · Social Networking APIs


Feature Squeezing at NDSS

25 February 2018

Weilin Xu presented Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks at the Network and Distributed System Security Symposium (NDSS) 2018 in San Diego, CA, on 21 February 2018.

Paper: Weilin Xu, David Evans, Yanjun Qi. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. NDSS 2018. [PDF]

Project Site

Why hasn’t Cross-Site Scripting been solved?

31 December 2017

By Haina Li


In 2017, Bugcrowd reported that cross-site scripting (XSS) remains the number one vulnerability found on the web, accounting for 25% of the bugs found and submitted to its bug bounty program. XSS has also remained in the top three of the web’s top vulnerabilities in recent years. In the 17 years since XSS was first recognized by Microsoft in 2000, it has been the focus of intense academic research and of penetration testing tool development, yet we are still finding vulnerabilities even in top websites such as Facebook and Google. In this blog post, we explore some of the reasons why XSS is still a major problem today.

XSS has evolved

XSS has evolved as modern web applications have grown far more complex than the static pages they once were. Reflected and stored XSS have not disappeared, since both server- and client-side logic have become more elaborate, while the pattern of replacing server-side logic with client-side JavaScript gave rise to DOM-based vulnerabilities. Server-side XSS prevention tools that examine deviations between the request and the response (such as XSSDS) do not work for DOM-based vulnerabilities, because the entire flow of malicious data from source to sink is contained within the browser and never goes through the server.

Newer methods that can prevent DOM-based XSS attacks include XSS filters and Content Security Policy (CSP). This myriad of sophisticated tools aims to achieve the seemingly simple goal of escaping user-provided content. As it stands, these tools cannot catch all XSS vulnerabilities, and escaping everything all the time would break a web application altogether. For example, recent work by Lekies et al. [PDF] describes a new attack that was missed by every existing XSS prevention technique: the injected payload is benign-looking HTML, but can be transformed by script gadgets to behave maliciously.
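One reason the "seemingly simple" goal of escaping user-provided content is so hard is that escaping is context-sensitive. A minimal Python sketch (hypothetical code, using only the standard library's `html.escape`) illustrates this: escaping that is correct for HTML body content does nothing for a URL context, because a `javascript:` link contains no characters that HTML escaping touches.

```python
import html

def render_comment(user_input: str) -> str:
    # Escaping <, >, &, and quotes is sufficient in HTML *body* context...
    return "<p>" + html.escape(user_input) + "</p>"

payload = "<script>alert(1)</script>"
safe = render_comment(payload)
assert "<script>" not in safe  # the tag can no longer execute

# ...but the same escaping does NOT make an attribute/URL context safe:
# a javascript: URL has no characters for html.escape to neutralize.
url_payload = "javascript:alert(1)"
link = '<a href="' + html.escape(url_payload) + '">profile</a>'
assert "javascript:alert(1)" in link  # still a live XSS vector in href
```

This is why tools that apply one escaping rule everywhere either miss sinks (false negatives) or over-escape and break the application.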

The effectiveness of web penetration tools is limited

In a study of automated black-box web application vulnerability testing by Bau et al. [PDF], researchers tested commercial scanners from vendors such as McAfee and IBM and found that average XSS detection rates were 62.5%, 15%, and 11.25%, respectively, for reflected XSS, stored XSS, and advanced XSS using non-standard tags and keywords. The study found that the scanners were effective at finding straightforward, textbook XSS vulnerabilities, but lacked sufficient modeling of more complex XSS specific to the web application under test. Web application scanners are designed reactively, converting new vulnerabilities into test vectors only after they have become a problem. For stored XSS, scanners also struggle to link an injection to a later observation. They are often difficult to configure, and take too long when set to fuzz every possible location in a large, complicated web application.


As with most web vulnerabilities, XSS is not going away anytime soon, because of the constantly evolving technologies of the web and the challenges of developing penetration tools with high true-positive rates. However, we may be able to eliminate most client-side security issues by replacing JavaScript with a new language that exhibits better control-flow integrity, such as WebAssembly.

Muzzammil Zaveri on Forbes 30 under 30

6 December 2017

Muzzammil Zaveri (BACS 2011) has been recognized by Forbes Magazine as one of the top 30 venture capitalists under 30. As an undergraduate researcher, Muzzammil worked on GuardRails (a secure web application framework).

Forbes Recognition

UVa Today Article: Meet the 5 Alumni on Forbes’ new ‘30 under 30’ Lists, 15 November 2017.

Cavalier Daily Article: Forbes 30 under 30 recognizes five U.Va alumni, 4 December 2017.

Zaveri stressed the importance of pursuing passion and making positive use of free time while studying as an undergraduate.

“There’s nothing like being in a setting where you can make mistakes and explore interests,” he said. “Doing something that you’re strictly passionate about may not be the most productive — you can explore interests and areas that you might be passionate about, and that can be a great springboard into your own career, or whatever you decide to pursue in life after school.”

Zaveri believes he was very lucky with the connections he made at the University, especially with meeting his co-founder, Ethan Fast. He credits Evans, his advisor, with empowering him with knowledge and encouraging him to learn more about tech startups.

“[Evans] really encouraged and spent time diving into startups and exploring some of my interests in building side projects,” he said. “And through that I met my co-founder [Ethan Fast] and ultimately, we ended up starting Proxino together.”

Letter to DHS

18 November 2017

I was one of 54 signatories on a letter from technology experts, organized by Alvaro Bedoya (Georgetown University Law Center), to DHS Acting Secretary Elaine Duke in opposition to the proposed plans to use algorithms to identify undesirable individuals as part of the Extreme Vetting Initiative: [PDF]. The Brennan Center’s web page provides many resources supporting the letter.


Highlights from CCS 2017

18 November 2017

The 24th ACM Conference on Computer and Communications Security was held in Dallas, 30 October – 3 November. Being Program Committee co-chair for a conference like this is a full-year commitment, with the work continuing throughout much of the year preceding the conference. The conference had over 1000 registered attendees, a record for any academic security research conference.

Here are a few highlights from the conference week.

PC Chairs’ Welcome (opening session)

Giving the PC Chairs’ Welcome Talk

Audience at Opening Session

ACM CCS 2017 Paper Awards Finalists

CCS 2017 Awards Banquet

At the Awards Banquet, I got to present a Best Paper award to SRG alum Jack Doerner (I was, of course, recused by conflict from any decisions on his paper).

UVA Lunch (around the table starting at front left): Suman Jana (honorary Wahoo by marriage), Darion Cassel (SRG BSCS 2017, now at CMU), Will Hawkins, Jason Hiser, Samee Zahur (SRG PhD 2016, now at Google), Jack Doerner (SRG BACS 2016, now at Northeastern), Joe Calandrino (now at FTC); Back right to front: Ben Kreuter (now at Google), Anh Nguyen-Tuong, Jack Davidson, Yuan Tian, Yuchen Zhou (SRG PhD 2015, now at Palo Alto Networks), David Evans.

First Workshop for Women in Cybersecurity

17 November 2017

I gave a talk at the First ACM Workshop for Women in Cybersecurity (affiliated with ACM CCS 2017) on Truth, Social Justice (and the American Way?):

There’s also a short paper, loosely related to the talk: [PDF]

Alumna-Turned-Internet Security Expert Listed Among Nation’s Top Young Innovators

22 September 2017

Adrienne Porter Felt (SRG BSCS 2008) was selected as one of Technology Review’s 35 Innovators Under 35.

UVA Today has an article: Alumna-Turned-Internet Security Expert Listed Among Nation’s Top Young Innovators, UVA Today, 21 September 2017.

Felt started working in security when she was a second-year engineering student, responding to a request from computer science professor David Evans, who taught the “Program and Data Representation” course. Evans said Felt stood out amongst her peers because of her “well-thought-out answers and meticulous diagrams.”

“For the summer after her second year, she joined a project one of my Ph.D. students was working on to use the disk drive controller to detect malware based on the reads and writes it makes that are visible to the disk,” Evans said. “She did great work on that project, and by the end of the summer was envisioning her own research ideas.

“She came up with the idea of looking at privacy issues in Facebook applications, which, back in 2007, was just emerging, and no one else was yet looking into privacy issues like this.”

Taking Evans’ offer for a research project was a turning point in Felt’s life, showing her something she liked that she could do well.

“It turned out that I really loved it,” she said. “I like working in privacy and security because I enjoy helping people control their digital experiences. I think of it as, ‘I’m professionally paranoid, so that other people don’t need to be.’”

In her final semester as an undergraduate student at UVA, Felt taught a student-led class on web browsers.

“Her work at Google has dramatically changed the way web browsers convey security information to users, making the web safer for everyone,” Evans said. “Her team at Google has been studying deployment of HTTPS, the protocol that allows web clients to securely communicate with servers, and has had fantastic success in improving security of websites worldwide, as well as a carefully designed plan to use browser interfaces to further encourage adoption of secure web protocols.”

SRG at USENIX Security 2017

12 August 2017

Several SRG students presented posters at the USENIX Security Symposium in Vancouver, BC.

Approaches to Evading Windows PE Malware Classifiers
Anant Kharkar, Helen Simecek, Weilin Xu, David Evans, and Hyrum S. Anderson (Endgame)

JSPolicy: Policied Sandboxes for Untrusted Third-Party JavaScript
Ethan Lowman and David Evans

EvadeML-Zoo: A Benchmarking and Visualization Tool for Adversarial Machine Learning
Weilin Xu, Andrew Norton, Noah Kim, Yanjun Qi, and David Evans

Decentralized Certificate Authorities
Hannah Li, Bargav Jayaraman, and David Evans

In the Red Corner…

7 August 2017

The Register has a story on the work Anant Kharkar and collaborators at Endgame, Inc. are doing on using reinforcement learning to find evasive malware: In the red corner: Malware-breeding AI. And in the blue corner: The AI trying to stop it, by Katyanna Quach, The Register, 2 August 2017.

Antivirus makers want you to believe they are adding artificial intelligence to their products: software that has learned how to catch malware on a device. There are two potential problems with that. Either it’s marketing hype and not really AI – or it’s true, in which case don’t forget that such systems can still be hoodwinked.

It’s relatively easy to trick machine-learning models – especially in image recognition. Change a few pixels here and there, and an image of a bus can be warped so that the machine thinks it’s an ostrich. Now take that thought and extend it to so-called next-gen antivirus.

The researchers from Endgame and the University of Virginia are hoping that by integrating the malware-generating system into OpenAI’s Gym platform, more developers will help sniff out more adversarial examples to improve machine-learning virus classifiers.

Although Evans believes that Endgame’s research is important, using such a method to beef up security “reflects the immaturity” of AI and infosec. “It’s mostly experimental and the effectiveness of defenses is mostly judged against particular known attacks, but doesn’t say much about whether it can work against newly discovered attacks,” he said.

“Moving forward, we need more work on testing machine learning systems, reasoning about their robustness, and developing general methods for hardening classifiers that are not limited to defending against particular attacks. More broadly, we need ways to measure and build trustworthiness in AI systems.”

The research has been summarized as a paper, here if you want to check it out in more detail, or see the upstart’s code on GitHub.

CISPA Distinguished Lecture

12 July 2017

I gave a talk at CISPA in Saarbrücken, Germany, on our work with Weilin Xu and Yanjun Qi on Adversarial Machine Learning: Are We Playing the Wrong Game?


Machine learning classifiers are increasingly popular for security applications, and often achieve outstanding performance in testing. When deployed, however, classifiers can be thwarted by motivated adversaries who adaptively construct adversarial examples that exploit flaws in the classifier’s model. Much work on adversarial examples has focused on finding small distortions to inputs that fool a classifier. Previous defenses have been both ineffective and very expensive in practice. In this talk, I’ll describe a new very simple strategy, feature squeezing, that can be used to harden classifiers by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different inputs in the original space into a single sample. Adversarial examples can be detected by comparing the model’s predictions on the original and squeezed sample. In practice, of course, adversaries are not limited to small distortions in a particular metric space. Indeed, in security applications like malware detection it may be possible to make large changes to an input without disrupting its intended malicious behavior. I’ll report on an evolutionary framework we have developed to search for such adversarial examples that can automatically find evasive variants against state-of-the-art classifiers. This suggests that work on adversarial machine learning needs a better definition of adversarial examples, and to make progress towards understanding how classifiers and oracles perceive samples differently.
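The detection scheme the abstract describes can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the classifier below is a hypothetical stand-in (the real system compares softmax outputs of trained DNNs), the squeezer shown is bit-depth reduction (one of the squeezers the paper evaluates), and the threshold is arbitrary.

```python
import numpy as np

def squeeze_bit_depth(x: np.ndarray, bits: int = 3) -> np.ndarray:
    """Reduce bit depth of features in [0,1]: coalesces many nearby
    inputs in the original space into a single squeezed sample."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def detect_adversarial(model, x: np.ndarray, threshold: float = 0.1) -> bool:
    """Flag x when the model's predictions on the original and the
    squeezed input diverge by more than threshold in L1 distance."""
    diff = np.abs(model(x) - model(squeeze_bit_depth(x))).sum()
    return bool(diff > threshold)

def toy_model(x: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in classifier that is overly sensitive to tiny
    high-frequency perturbations (the flaw adversarial examples exploit)."""
    sensitivity = 50.0 * float((x - squeeze_bit_depth(x, bits=6)).mean())
    score = float(x.mean()) + sensitivity
    p = 1.0 / (1.0 + np.exp(-10.0 * (score - 0.5)))
    return np.array([p, 1.0 - p])  # two-class prediction vector

benign = np.full((8, 8), 44 / 63)   # smooth input lying on the 6-bit grid
adversarial = benign + 0.003        # imperceptibly small perturbation

print(detect_adversarial(toy_model, benign))       # False
print(detect_adversarial(toy_model, adversarial))  # True
```

The intuition is that legitimate inputs change little under squeezing, while the perturbation an adversarial example depends on is destroyed by it, so the L1 distance between the two prediction vectors separates the two cases.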