Congratulations Samee Zahur!

April 30th, 2015 by David Evans

Samee Zahur won the Computer Science Outstanding Graduate Research Award, our department’s annual award for the most outstanding graduate student. Congratulations to Samee on the much-deserved award! (Unfortunately, Samee wasn’t present to receive the award since he was in Sofia presenting his half-gates work at Eurocrypt 2015.)


Congratulations Dr. Zhou!

April 14th, 2015 by David Evans

Yuchen Zhou successfully defended his PhD thesis on Improving Security and Privacy of Integrated Web Applications! The dissertation was approved by his committee: Shuo Chen (Microsoft), Joanne Dugan, Kevin Sullivan, Westley Weimer, and Ronald Williams.

Congratulations to Yuchen, now Dr. Zhou!

Libra Link: Improving Security and Privacy of Integrated Web Applications.


Understanding and Monitoring Embedded Web Scripts

March 26th, 2015 by David Evans

Modern web applications make frequent use of third-party scripts, often in ways that allow scripts loaded from external servers to make unrestricted changes to the embedding page and access critical resources including private user information. Our paper describing tools to assist site administrators in understanding, monitoring, and restricting the behavior of third-party scripts embedded in their site, and what we’ve learned by using them, is now available: Yuchen Zhou and David Evans, Understanding and Monitoring Embedded Web Scripts, IEEE Symposium on Security and Privacy 2015.

Yuchen will present the paper at the Oakland conference (in San Jose) this May.



Project Site: ScriptInspector.org


Who Does the Autopsy?

March 25th, 2015 by David Evans

This (perhaps somewhat oversensationalized) article in Slate draws from Nate Paul’s research on medical device security: If You Die After Someone Hacks Your Glucose Monitor, Who Does the Autopsy? (Slate, 13 March 2015).

According to researchers at the Oak Ridge National Laboratory, in 2003 and 2009 respectively, the “Slammer” and “Conficker” worms had each successfully infected networked hospital systems responsible for monitoring heart patients. Since the days of Slammer and Conficker, malware has become even more sophisticated, and a Trojan with a specifically engineered piece of malicious code could cause harm to numerous patients around the world simultaneously.

While a small community of researchers, and even some government regulators, such as the FDA and FTC, have begun to pose important questions about the privacy and security implications of incorporating computer technology into biological systems, so far law enforcement and criminal justice authorities have been mostly absent from any substantive conversations.


iDash Competition Winner

March 17th, 2015 by David Evans

Congratulations to Samee Zahur for winning the iDash Secure Genomics competition (Hamming Distance challenge task), sponsored by Human Longevity, Inc. A video of the event is available at http://www.humangenomeprivacy.org/.

Samee’s solution was built using Obliv-C, and the code will be posted soon.


Giving Web Developers Tools to Protect Their Sites and Users

February 9th, 2015 by David Evans

UVAToday has an article about Yuchen Zhou’s work on analyzing web scripts: Giving Web Developers Tools to Protect Their Sites and Users, UVAToday, 5 February 2015.

Zhou also devised a tool that developers could use to constrain the activity of the hundreds of scripts they typically embed in a website. In the process of performing analytics or placing ads, these scripts gather user-generated information on a page and send it to their servers. There is no guarantee, however, that these servers are secure or that the companies offering these services are trustworthy. In other cases, scripts ostensibly offering a benign service like analytics could take over the page, replacing the site owner’s ads with their own ads or stealing private user information from the page.

Rather than try to develop rules for every page on a site, Zhou’s tool generates a set of policies that can be applied to the site as a whole. It catalogs the elements of the site based on their content and their relationship to the structure of the page and specifies which elements can be accessed by specific scripts. “We’ve found that using a white list approach – specifying which elements a script can use rather than those it can’t – is more effective because it is easier to automatically identify public elements like an ad placeholder than sensitive information in the page,” Zhou said. The site owner can then review the policies and grant permissions accordingly.
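To make the whitelist idea concrete, here is a minimal sketch of what such a policy might look like, written as a Python structure. The script URLs, element selectors, and policy format are illustrative inventions, not ScriptInspector’s actual policy syntax.

# Hypothetical whitelist policy: every (script, action, element) combination
# must be explicitly allowed; anything not listed is denied. This only
# illustrates the approach described above, not ScriptInspector's format.
POLICY = {
    # An analytics script may read the page URL and title, nothing else.
    "https://analytics.example.com/track.js": {
        "read":  ["document.location", "document.title"],
        "write": [],
    },
    # An ad network script may only touch its designated placeholder element.
    "https://ads.example.net/show_ads.js": {
        "read":  ["#ad-slot-1"],
        "write": ["#ad-slot-1"],
    },
}

def is_allowed(script_url, action, element):
    """Return True only if the whitelist permits `script_url` to perform
    `action` ('read' or 'write') on `element`."""
    rules = POLICY.get(script_url, {})
    return element in rules.get(action, [])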


Gogo’s Fake SSL Certificates

January 15th, 2015 by David Evans

Group alumna Adrienne Felt is in the news for reporting that Gogo in-flight wifi service is generating fake SSL certificates.

From Gogo Inflight Wi-Fi Undermines Encryption, Slate FutureTense, 5 Jan 2015:

Wi-Fi on planes is unreliable and expensive. Even worse: Service from Gogo—one of the primary providers of in-flight Internet—has a lot of strange insecurities. Google engineer Adrienne Porter Felt realized during a flight on Friday that, for Google services and possibly others, Gogo was undermining encryption meant to keep pages secure.


Updates from Karsten’s BadUSB

November 18th, 2014 by David Evans

Karsten Nohl’s research on USB security is covered in The Good News and Bad News About USB Security — Only half have an unpatchable flaw, but we don’t know which half, Wired and Slate Magazine, 12 Nov 2014:

Nohl’s BadUSB attack, which he revealed at the Black Hat security conference in August, takes advantage of the fact that a USB controller chip’s firmware can be reprogrammed. That means a thumb drive’s controller chip itself, rather than the Flash storage on that memory stick, can be infected with malware that invisibly spreads to computers, corrupts files stored on the drive, or quietly begins impersonating a USB keyboard to type commands on the victim’s machine.



Nohl says that means combatting BadUSB will require device-makers to clearly label the chips their products use. “You’d never get away with this in a laptop. People would go crazy if they bought a computer and it wasn’t the chip they saw in the review they read,” he says. “It’s just these USB devices that come as black boxes.”

For the technical details, see https://opensource.srlabs.de/projects/badusb.


Hacker School Talk: What Every Hacker Should Know about Theory of Computation

October 16th, 2014 by David Evans

I had a chance earlier this week to visit and speak at Hacker School in New York. Hacker School is an amazing place, billed as a “retreat for programmers”, where a remarkable group of curious and self-motivated people from a wide range of backgrounds put their lives on hold for 12 weeks to gather with like-minded people to learn about programming and spend their evenings (and 21st birthday parties) hearing talks about computability!

Slides and notes from my talk are here: What Every Hacker Should Know about Theory of Computation.




Twitter Single Sign-On

October 7th, 2014 by David Evans

By Haina Li

Single Sign-On (SSO) services are widely used in modern web applications for authentication and social networking. While implementing an SSO service for a website is moderately straightforward, building it correctly can be difficult, and mistakes can lead to serious security consequences. Previous research on Facebook SSO revealed several major vulnerabilities that could easily allow an attacker to obtain sensitive user information or take full control of a user’s account. These security weaknesses can often lead to misuse of token credentials.

Because of these findings, I was motivated to investigate whether SSO services built on other versions of OAuth exhibit the same security vulnerabilities. I analyzed Twitter SSO, which is implemented using OAuth 1.0a rather than the OAuth 2 used by Facebook, for similar security weaknesses. I searched for vulnerabilities by building a web application that implements Twitter SSO and by examining the traffic between the client and the server for patterns and token leakages.

Twitter SSO Protocol: Tokens

This analysis focuses on the four “tokens” an attacker could use: the Consumer Key, Request Token, Verifier, and Access Token. For the rest of this post, Alice is the user and Mallory is the attacker.

Consumer Key. The Consumer Key and Secret are given to the application when it registers with Twitter. App developers are instructed to keep this key/secret pair secret and, if it is exposed, to generate a new pair.

Request Token. This token also comes as a key/secret pair. This temporary key is issued by Twitter to the application when Alice first requests to sign in with Twitter. While the key is shown publicly in the URL, the secret should be kept private.

Verifier. The Verifier is a publicly shown key (in the URL) that Twitter issues to confirm that Alice has approved the application.

Access Token. This key/secret pair is a more permanent token that allows applications to access protected resources on behalf of Alice.

Twitter SSO Protocol: Overview

The diagram below provides a high-level description of the Twitter SSO procedure, breaking the protocol into three steps, each processing an authentication token.
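As a rough illustration of those three steps, here is a minimal sketch using the requests_oauthlib Python library (my choice, not the study’s code); the Twitter endpoints are the standard OAuth 1.0a ones, while the keys, secrets, and callback URL are placeholders.

# Minimal sketch of the three-step Twitter SSO flow (requests_oauthlib is an
# assumption; keys, secrets, and the callback URL are placeholders).
from requests_oauthlib import OAuth1Session

CONSUMER_KEY = "app-consumer-key"        # issued when the app registers
CONSUMER_SECRET = "app-consumer-secret"  # should never reach the client

# Step 1: obtain a temporary Request Token key/secret pair.
oauth = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
                      callback_uri="https://example.com/callback")
request_token = oauth.fetch_request_token(
    "https://api.twitter.com/oauth/request_token")

# Step 2: send Alice to Twitter to approve the app; Twitter redirects back
# to the callback with the (public) Request Token key and a Verifier.
print("Send Alice to:",
      oauth.authorization_url("https://api.twitter.com/oauth/authenticate"))
verifier = input("oauth_verifier from the callback URL: ")

# Step 3: exchange the Request Token and Verifier for a long-lived
# Access Token key/secret pair.
oauth = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
                      resource_owner_key=request_token["oauth_token"],
                      resource_owner_secret=request_token["oauth_token_secret"],
                      verifier=verifier)
access_token = oauth.fetch_access_token(
    "https://api.twitter.com/oauth/access_token")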

Twitter SSO Protocol: More Details

The diagram below expands each of the three steps. Note that the labels on the arrows are not a comprehensive list of the parameters sent, just a few important ones (see Implementing Sign in with Twitter Overview and OAuth Core 1.0a for detailed documentation).
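The secrets themselves never travel on those arrows; they are only used to sign each request. Below is a simplified sketch of the HMAC-SHA1 signature construction defined in OAuth Core 1.0a (the helper names are mine). The fact that the signing key combines the Consumer Secret with the relevant Token Secret is central to the analysis that follows.

# Sketch of the OAuth 1.0a request signature (per the OAuth Core 1.0a spec,
# simplified to unique parameter names). The signature is an HMAC-SHA1 over a
# base string built from the request, keyed by the Consumer Secret and the
# current Token Secret, so holding a public token key without its secret is
# not enough to forge a request.
import base64
import hashlib
import hmac
import urllib.parse

def percent_encode(value):
    # RFC 3986 encoding as required by OAuth 1.0a ('~' stays unencoded).
    return urllib.parse.quote(str(value), safe="~")

def oauth1_signature(method, url, params, consumer_secret, token_secret=""):
    # Normalize parameters: percent-encode, sort, and join as key=value pairs.
    normalized = "&".join(
        "{}={}".format(percent_encode(k), percent_encode(v))
        for k, v in sorted(params.items()))
    base_string = "&".join(
        [method.upper(), percent_encode(url), percent_encode(normalized)])
    # The signing key combines both secrets; the token secret is empty for
    # the very first request, before any Request Token exists.
    signing_key = percent_encode(consumer_secret) + "&" + percent_encode(token_secret)
    digest = hmac.new(signing_key.encode(), base_string.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()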

Security Implications

To evaluate the security of Twitter’s SSO implementation, I tried swapping all three tokens for other tokens in different combinations. I did not find any way to use these methods to gain illicit access to private user information. Twitter SSO uses several countermeasures, associated with each token, to prevent this type of attack.

Request Token. The Request Token is tied to the Consumer Key and Secret and expires after the first time it is used to request an Access Token. Because it is temporary and bound to the Consumer Key and Secret, Mallory cannot exploit the system by stealing a Request Token without also knowing the Consumer Key and Secret.

Verifier. Twitter issues a different Verifier each time, which prevents a malicious party from reusing a Verifier in an impersonation attack. Even though the Verifier is passed through the URL, like the Request Token, Mallory cannot attack the web application without also knowing the Consumer Key and Secret.

Access Token. Each Access Token is connected to a Consumer Key and Secret, which are known only to the application developer. So even if Mallory were to obtain a leaked token from the traffic, she would not be able to use it to log into Alice’s account. I verified this with HuffingtonPost.com, which leaks its Access Token pair in its last message to Alice’s browser. Without this defense, Mallory could have used the leaked Access Token to issue requests to Twitter on behalf of HuffingtonPost.com and Alice.

The last precaution also proved useful against another serious mistake in the SSO service. My tests found that when an application uses the Access Token to request Alice’s information or post on her behalf, Twitter does not check the Access Token Secret: providing the Access Token alone is enough to give the client permission.

This may not pose a realistic security threat, because the entire Access Token key pair is typically kept secret, so to take advantage of this mistake Mallory would also need to find the Consumer Key and Secret. Still, to make the system more secure, Twitter should verify the Access Token Secret, since keys and secrets are frequently leaked through GitHub and even mobile applications when developers are not careful. With both the Access Token and Consumer Key key/secret pairs, Mallory could attack a web application much more easily.
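One way to check this kind of behavior (a sketch, not my original test setup; requests_oauthlib is an assumption and all credentials are placeholders) is to sign the same API call with the real Access Token Secret and with a bogus one and compare the responses; a server that verifies the secret should reject the second call.

# Sign the same request twice: once with the correct Access Token Secret,
# once with a wrong one. A server that checks the secret should return an
# authorization error for the second attempt.
import requests
from requests_oauthlib import OAuth1

URL = "https://api.twitter.com/1.1/account/verify_credentials.json"

for token_secret in ("real-access-token-secret", "wrong-secret"):
    auth = OAuth1("consumer-key", "consumer-secret",
                  "access-token", token_secret)
    response = requests.get(URL, auth=auth)
    print(token_secret, "->", response.status_code)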

Though checking the Access Token against the Consumer Key makes SSO safer to use, it has weaknesses as well. Mobile applications are easy to reverse engineer, since the client code must be made available. If an application needs the Consumer Key to implement SSO, then there is no way to build a secure client-only application with SSO, since the Consumer Key must be in the code. The secure approach re-routes requests through an application server, which can store the key securely and add it to the requests sent to Twitter. This adds an extra step to app building, and many developers may not realize the importance of keeping the Consumer Key out of the application code.
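A minimal sketch of that server-side approach is below, assuming Flask and requests_oauthlib (the endpoint name and session store are illustrative): the mobile app talks only to this proxy, which holds the Consumer Key and Secret and signs the outgoing Twitter requests.

# Sketch of a signing proxy: the Consumer Key/Secret live only on the server,
# so they never appear in the mobile app's code. Flask, requests_oauthlib,
# and the endpoint layout are assumptions for illustration.
import os
import requests
from flask import Flask, jsonify, request
from requests_oauthlib import OAuth1

app = Flask(__name__)
CONSUMER_KEY = os.environ["CONSUMER_KEY"]        # stored only on the server
CONSUMER_SECRET = os.environ["CONSUMER_SECRET"]  # never shipped to clients

# In practice the server records each user's Access Token pair during the
# SSO flow; a dict keyed by session id stands in for that store here.
TOKEN_STORE = {}

@app.route("/proxy/verify_credentials")
def verify_credentials():
    token, token_secret = TOKEN_STORE[request.args["session_id"]]
    # The server adds the consumer credentials and the OAuth 1.0a signature.
    auth = OAuth1(CONSUMER_KEY, CONSUMER_SECRET, token, token_secret)
    resp = requests.get(
        "https://api.twitter.com/1.1/account/verify_credentials.json",
        auth=auth)
    return jsonify(resp.json()), resp.status_code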

A recent study from Columbia (Nicolas Viennot, Edward Garcia, and Jason Nieh. A Measurement Study of Google Play. ACM SIGMETRICS, June 2014), which scanned thousands of applications in the Google Play Store for token and secret leaks, supports the conclusion that requiring a Consumer Key from the client often leads to faulty implementations. They discovered 1,477 Facebook credentials and 28,235 Twitter credentials among the free applications, even though the Facebook SDK is used twice as much as the Twitter4J library. While their explanation of the significant difference in credentials found makes sense, a more reasonable explanation is that, without using a server, it is impossible to implement Twitter SSO without including the Consumer Key in the code.

Conclusion

While Twitter’s implementation of SSO has security advantages for web applications, it is dangerous for mobile applications. An API designer has two clear options when building an authentication protocol: either require the server to verify the Consumer Key and Secret during each step of the procedure, or don’t, and hope that Mallory never gains access to key material. The latter option leads to many vulnerabilities, as clearly seen with Facebook SSO. The first, seemingly more secure, choice forces mobile applications to sacrifice the security of the OAuth protocol.