Proceedings of the 2011 Symposium on Usable Privacy and Security
Topics of SOUPS include, but are not limited to: innovative security or privacy functionality and design, new applications of existing models or technology, field studies of security or privacy technology, usability evaluations of security or privacy features or security testing of usability features, and lessons learned from deploying and using usable privacy and security features.
The following articles are from "Proceedings of the 2011 Symposium on Usable Privacy and Security":
Raja, Fahimeh, Hawkey, Kirstie, Hsu, Steven, Wang, Kai-Le Clement and Beznosov, Konstantin (2011): A brick wall, a locked door, and a bandit: a physical security metaphor for firewall warnings. In: Proceedings of the 2011 Symposium on Usable Privacy and Security 2011. p. 1. Available online
We used an iterative process to design firewall warnings in which the functionality of a personal firewall is visualized based on a physical security metaphor. We performed a study to determine the degree to which our proposed warnings are understandable for users, and the degree to which they convey the risks and encourage safe behavior as compared to text warnings based on those from a popular personal firewall. The evaluation results show that our warnings facilitate the comprehension of warning information, better communicate the risk, and increase the likelihood of safe behavior. Moreover, they provide participants with a better understanding of both the functionality of a personal firewall and the consequences of their actions.
Wang, Yang, Norcie, Gregory, Komanduri, Saranga, Acquisti, Alessandro, Leon, Pedro Giovanni and Cranor, Lorrie Faith (2011): "I regretted the minute I pressed share": a qualitative study of regrets on Facebook. In: Proceedings of the 2011 Symposium on Usable Privacy and Security 2011. p. 10. Available online
We investigate regrets associated with users' posts on a popular social networking site. Our findings are based on a series of interviews, user diaries, and online surveys involving 569 American Facebook users. Their regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Our research reveals several possible causes of why users make posts that they later regret: (1) they want to be perceived in favorable ways, (2) they do not think about their reason for posting or the consequences of their posts, (3) they misjudge the culture and norms within their social circles, (4) they are in a "hot" state of high emotion when posting, or under the influence of drugs or alcohol, (5) their postings are seen by an unintended audience, (6) they do not foresee how their posts could be perceived by people within their intended audience, and (7) they misunderstand or misuse the Facebook platform. Some reported incidents had serious repercussions, such as breaking up relationships or job losses. We discuss methodological considerations in studying negative experiences associated with social networking posts, as well as ways of helping users of social networking sites avoid such regrets.
Shehab, Mohamed, Marouf, Said and Hudel, Christopher (2011): ROAuth: recommendation based open authorization. In: Proceedings of the 2011 Symposium on Usable Privacy and Security 2011. p. 11. Available online
Many major online platforms, such as Facebook, Google, and Twitter, provide an open Application Programming Interface which allows third party applications to access user resources. The Open Authorization protocol (OAuth) was introduced as a secure and efficient method for authorizing third party applications without releasing a user's access credentials. However, OAuth implementations do not provide the necessary fine-grained access control, nor any recommendations about which access control decisions are most appropriate. We propose an extension to the OAuth 2.0 authorization flow that enables the provisioning of fine-grained authorization recommendations to users when granting permissions to third party applications. We propose a mechanism that computes permission ratings based on a multi-criteria recommendation model which utilizes previous user decisions and application requests to enhance the privacy of the overall site's user population. We implemented our proposed OAuth extension as a browser extension that allows users to easily configure their privacy settings at application installation time, provides recommendations on requested privacy attributes, and collects data regarding user decisions. Experiments on the collected data indicate that the proposed framework effectively enhanced user awareness and privacy related to third party application authorizations.
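The abstract describes a multi-criteria recommendation model that rates each requested permission from previous user decisions. The paper does not give the exact formula; the sketch below is a hypothetical illustration of the general idea, combining the grant rate for the permission on this application with the grant rate for the same permission across similar applications (the weights and threshold are assumptions):

```python
# Hypothetical permission-rating sketch in the spirit of ROAuth: each
# requested permission gets a score in [0, 1] combining how often users
# previously granted it to this app and to similar apps. The weights
# (w_app, w_similar) and the 0.5 threshold are illustrative assumptions.

def permission_rating(grants_app, asked_app, grants_similar, asked_similar,
                      w_app=0.6, w_similar=0.4):
    """Weighted average of grant rates; higher means more users allowed it."""
    rate_app = grants_app / asked_app if asked_app else 0.0
    rate_similar = grants_similar / asked_similar if asked_similar else 0.0
    return w_app * rate_app + w_similar * rate_similar

def recommend(requested_permissions, history, threshold=0.5):
    """Map each requested permission to an allow/deny recommendation."""
    recs = {}
    for perm in requested_permissions:
        stats = history.get(perm, (0, 0, 0, 0))
        score = permission_rating(*stats)
        recs[perm] = ("allow" if score >= threshold else "deny", round(score, 2))
    return recs

# (granted to this app, times asked; granted to similar apps, times asked)
history = {
    "email":       (80, 100, 300, 400),
    "friend_list": (10, 100,  50, 400),
}
print(recommend(["email", "friend_list"], history))
# → {'email': ('allow', 0.78), 'friend_list': ('deny', 0.11)}
```

A real deployment would also weight by how recent the decisions are and by similarity between the deciding users and the current user; this sketch only shows the rating-and-threshold skeleton.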
King, Jennifer, Lampinen, Airi and Smolen, Alex (2011): Privacy: is there an app for that?. In: Proceedings of the 2011 Symposium on Usable Privacy and Security 2011. p. 12. Available online
Users of social networking sites (SNSs) increasingly must learn to negotiate privacy online with multiple service providers. Facebook's third-party applications (apps) add an additional layer of complexity and confusion for users seeking to understand and manage their privacy. We conducted a novel exploratory survey (conducted on Facebook as a Platform app) to measure how Facebook app users interact with apps, what they understand about how apps access and exchange their profile information, and how these factors relate to their privacy concerns. In our analysis, we paid special attention to our most knowledgeable respondents: given their expertise, would they differ in behaviors or attitudes from less knowledgeable respondents? We found that misunderstandings and confusion abound about how apps function and how they manage profile data. Against our expectations, neither knowledge nor behavior was a consistent predictor of privacy concerns with third-party apps or on SNSs in general. Instead, whether or not the respondent had experienced an adverse privacy event on a social networking site was a reliable predictor of privacy attitudes.
Ion, Iulia, Sachdeva, Niharika, Kumaraguru, Ponnurangam and Čapkun, Srdjan (2011): Home is safer than the cloud!: privacy concerns for consumer cloud storage. In: Proceedings of the 2011 Symposium on Usable Privacy and Security 2011. p. 13. Available online
Several studies have ranked security and privacy as major areas of concern and impediments to cloud adoption for companies, but none have looked into end-users' attitudes and practices. Not much is known about consumers' privacy beliefs and expectations for cloud storage, such as web-mail, document and photo sharing platforms, or about users' awareness of contractual terms and conditions. We conducted 36 in-depth interviews in Switzerland and India (two countries with different privacy perceptions and expectations), and followed up with an online survey with 402 participants in both countries. We study users' privacy attitudes and beliefs regarding their use of cloud storage systems. Our results show that privacy requirements for consumer cloud storage differ from those of companies. Users are less concerned about some issues, such as guaranteed deletion of data, country of storage and storage outsourcing, but are uncertain about using cloud storage. Our results further show that end-users consider the Internet intrinsically insecure and prefer local storage for sensitive data over cloud storage. However, users desire better security and are ready to pay for services that provide strong privacy guarantees. Participants had misconceptions about the rights and guarantees their cloud storage providers offer. For example, users believed that their provider is liable in case of data loss, does not have the right to view and modify user data, and cannot disable user accounts. Finally, our results show that cultural differences greatly influence user attitudes and beliefs, such as their willingness to store sensitive data in the cloud and their acceptance that law enforcement agencies monitor user accounts. We believe that these observations can help in improving users' privacy in cloud storage systems.
Schlegel, Roman, Kapadia, Apu and Lee, Adam J. (2011): Eyeing your exposure: quantifying and controlling information sharing for improved privacy. In: Proceedings of the 2011 Symposium on Usable Privacy and Security 2011. p. 14. Available online
A large body of research has focused on disclosure policies for controlling information release in social sharing (e.g., location-based) applications. However, less work has considered how exposed these policies actually leave users; i.e., to what extent are disclosures in compliance with these policies actually being made? For instance, consider a disclosure policy granting Alice's coworkers access to her location during work hours. Alice might feel that this policy appropriately controls her exposure, but may feel differently if she learned that her boss was accessing her location every 5 minutes. In addition to specifying who has access to personal information, users need a way to quantify, interpret, and control the extent to which this data is shared. We propose and evaluate an intuitive mechanism for summarizing and controlling a user's exposure on smartphone-based platforms. Our approach uses the visual metaphor of eyes appearing and growing in size on the home screen; the rate at which these eyes grow depends on the number of accesses granted for a user's location, and the type of person (e.g., family vs. friend) making these accesses. This approach gives users an accurate and ambient sense of their exposure and helps them take actions to limit their exposure, all without explicitly identifying the social contacts making requests. Through two systematic user studies (N = 43 and N = 41) we show that our interface is indeed effective at summarizing complex exposure information and provides comparable information to a more cumbersome interface presenting more detailed information.
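The growth rule the abstract describes (eye size driven by access counts, weighted by the requester's social category) can be sketched minimally. The category weights, base size, and cap below are illustrative assumptions, not the authors' parameters:

```python
# Illustrative sketch (not the authors' implementation) of the "growing eyes"
# exposure metaphor: the eye grows with the number of location accesses,
# weighted by the social category of each requester. Weights, base size,
# growth factor, and cap are assumptions chosen for illustration.

CATEGORY_WEIGHT = {"family": 0.5, "friend": 1.0, "coworker": 2.0}

def eye_size(accesses, base=10, growth=2, max_size=100):
    """accesses: list of requester-category strings since the eye was reset."""
    weighted = sum(CATEGORY_WEIGHT.get(c, 1.0) for c in accesses)
    return min(max_size, base + growth * weighted)

# The same number of accesses inflates the eye faster for coworkers than
# for family, matching the idea that exposure depends on who is looking.
print(eye_size(["family"] * 4))    # → 14.0
print(eye_size(["coworker"] * 4))  # → 26.0
```

Note how the display conveys the *rate* of exposure without naming any individual requester, which is the privacy-preserving property the abstract emphasizes.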
Braunstein, Alex, Granka, Laura and Staddon, Jessica (2011): Indirect content privacy surveys: measuring privacy without asking about it. In: Proceedings of the 2011 Symposium on Usable Privacy and Security 2011. p. 15. Available online
The strong emotional reaction elicited by privacy issues is well documented (e.g., [12, 8]). The emotional aspect of privacy makes it difficult to evaluate privacy concern, and directly asking about a privacy issue may result in an emotional reaction and a biased response. This effect may be partly responsible for the dramatic privacy concern ratings coming from recent surveys, ratings that often seem to be at odds with user behavior. In this paper we propose indirect techniques for measuring content privacy concerns through surveys, thus hopefully diminishing any emotional response. We present a design for indirect surveys and test the design's use as (1) a means to measure relative privacy concerns across content types, (2) a tool for predicting unwillingness to share content (a possible indicator of privacy concern), and (3) a gauge for two underlying dimensions of privacy -- content importance and the willingness to share content. Our evaluation consists of 3 surveys, taken by 200 users each, in which privacy is never asked about directly, but privacy warnings are issued with increasing escalation in the instructions and individual question-wording. We demonstrate that this escalation results in statistically and practically significant differences in responses to individual questions. In addition, we compare results against a direct privacy survey and show that rankings of privacy concerns are increasingly preserved as privacy language increases in the indirect surveys, thus indicating our mapping of the indirect questions to privacy ratings is accurately reflecting privacy concerns.
Maurer, Max-Emanuel, Luca, Alexander De and Kempe, Sylvia (2011): Using data type based security alert dialogs to raise online security awareness. In: Proceedings of the 2011 Symposium on Usable Privacy and Security 2011. p. 2. Available online
When browsing the Internet, users are likely to be exposed to security and privacy threats -- like fraudulent websites. Automatic browser mechanisms can protect them only to some extent. In other situations it is still important to raise the users' security awareness at the right moment. Passive indicators are mostly overlooked, and blocking warnings are quickly dismissed by habituated users. In this work, we present a new concept of warnings that appear in-context, right next to data the user has just entered. Those dialogs are displayed whenever critical data types -- e.g. credit card data -- are entered by the users into online forms. Since they do not immediately interrupt the users' interaction but appear right in the users' focus, it is possible to place important security information in a way that it can be easily seen. We implemented the concept as a Firefox plugin and evaluated it in a series of studies, including two lab studies, one focus group, and one real world study. Results show that the concept is very well accepted by users and that, with the plugin, especially non-expert participants were more likely to identify fraudulent (or phishing) websites than with the standard browser warnings. Beyond this, we were able to gather interesting findings on warning usage.
Sotirakopoulos, Andreas, Hawkey, Kirstie and Beznosov, Konstantin (2011): On the challenges in usable security lab studies: lessons learned from replicating a study on SSL warnings. In: Proceedings of the 2011 Symposium on Usable Privacy and Security 2011. p. 3. Available online
We replicated and extended a 2008 study conducted at CMU that investigated the effectiveness of SSL warnings. We adjusted the experimental design to mitigate some of the limitations of that prior study; adjustments include allowing participants to use their web browser of choice and recruiting a more representative user sample. However, during our study we observed a strong disparity between our participants' actions during the laboratory tasks and their self-reported "would be" actions during similar tasks in everyday computer practices. Our participants attributed this disparity to the laboratory environment and the security it offered. In this paper we discuss our results and how the introduced changes to the initial study design may have affected them. We also discuss the challenges of observing natural behavior in a study environment, as well as the challenges of replicating previous studies given the rapid changes in web technology. Finally, we propose alternatives to traditional laboratory study methodologies that can be considered by the usable security research community when investigating research questions involving sensitive data where trust may influence behavior.
Sun, San-Tsai, Pospisil, Eric, Muslukhov, Ildar, Dindar, Nuray, Hawkey, Kirstie and Beznosov, Konstantin (2011): What makes users refuse web single sign-on?: an empirical investigation of OpenID. In: Proceedings of the 2011 Symposium on Usable Privacy and Security 2011. p. 4. Available online
OpenID is an open and promising Web single sign-on (SSO) solution. This work investigates the challenges and concerns web users face when using OpenID for authentication, and identifies what changes in the login flow could improve the users' experience and adoption incentives. We found our participants had several behaviors, concerns, and misconceptions that hinder the OpenID adoption process: (1) their existing password management strategies reduce the perceived
Perković, Toni, Li, Shujun, Mumtaz, Asma, Khayam, Syed Ali, Javed, Yousra and Čagalj, Mario (2011): Breaking undercover: exploiting design flaws and nonuniform human behavior. In: Proceedings of the 2011 Symposium on Usable Privacy and Security 2011. p. 5. Available online
This paper reports two attacks on Undercover, a human authentication scheme against passive observers proposed at CHI 2008. The first attack exploits nonuniform human behavior in responding to authentication challenges, and the second is based on information leaked from authentication challenges or responses visible to the attacker. The second attack can be generalized to break two alternative Undercover designs presented at Pervasive 2009. All the attacks exploit design flaws of the Undercover implementations. Theoretical and experimental analyses show that both attacks can reveal the user's password with high probability within O(10) observed login sessions. Both attacks were verified using the login data collected in a user study with 28 participants. We also propose some enhancements to make Undercover secure against the attacks reported in this paper. Our research in breaking and improving Undercover leads to two broader implications. First, it reemphasizes the principle that "the devil is in the details" for the design of security-related human-computer interfaces. Second, it reveals a subtle relationship between security and usability: human users may behave in an insecure way that compromises the security of a system. To design a secure human-computer interface, designers should pay special attention to the possible negative influence of every detail of the interface, including how human users interact with the system.
Zakaria, Nur Haryani, Griffiths, David, Brostoff, Sacha and Yan, Jeff (2011): Shoulder surfing defence for recall-based graphical passwords. In: Proceedings of the 2011 Symposium on Usable Privacy and Security 2011. p. 6. Available online
Graphical passwords are often considered prone to shoulder-surfing attacks, where attackers can steal a user's password by peeking over his or her shoulder in the authentication process. In this paper, we explore shoulder surfing defence for recall-based graphical password systems such as Draw-A-Secret and Background Draw-A-Secret, where users doodle their passwords (i.e. secrets) on a drawing grid. We propose three innovative shoulder surfing defence techniques, and conduct two separate controlled laboratory experiments to evaluate both security and usability perspectives of the proposed techniques. One technique was expected to work to some extent theoretically, but it turned out to provide little protection. One technique provided the best overall shoulder surfing defence, but also caused some usability challenges. The other technique achieved reasonable shoulder surfing defence and good usability simultaneously, a good balance which the two other techniques did not achieve. Our results appear to be also relevant to other graphical password systems such as Pass-Go.
The usability of IT security management (ITSM) tools is hard to evaluate by regular methods, making heuristic evaluation attractive. However, standard usability heuristics are hard to apply as IT security management occurs within a complex and collaborative context that involves diverse stakeholders. We propose a set of ITSM usability heuristics that are based on activity theory, are supported by prior research, and consider the complex and cooperative nature of security management. In a between-subjects study, we compared the employment of the ITSM and Nielsen's heuristics for evaluation of a commercial identity management system. Participants who used the ITSM set found more problems categorized as severe than those who used Nielsen's. As evaluators identified different types of problems with the two sets of heuristics, we recommend employing both the ITSM and Nielsen's heuristics during evaluation of ITSM tools.
Liu, Gang, Xiang, Guang, Pendleton, Bryan A., Hong, Jason I. and Liu, Wenyin (2011): Smartening the crowds: computational techniques for improving human verification to fight phishing scams. In: Proceedings of the 2011 Symposium on Usable Privacy and Security 2011. p. 8. Available online
Phishing is an ongoing kind of semantic attack that tricks victims into inadvertently sharing sensitive information. In this paper, we explore novel techniques for combating the phishing problem using computational techniques to improve human effort. Using tasks posted to the Amazon Mechanical Turk human effort market, we measure the accuracy of minimally trained humans in identifying potential phish, and consider methods for best taking advantage of individual contributions. Furthermore, we present our experiments using clustering techniques and vote weighting to improve the results of human effort in fighting phishing. We found that these techniques could increase coverage over, and were significantly faster than, the blacklists in use today.
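Vote weighting over crowd labels, one of the general techniques the abstract mentions, can be sketched briefly. The specific scheme below (weighting each worker's vote by historical accuracy and comparing weighted phish mass against half the total weight) is an assumed illustration, not the paper's exact method:

```python
# Hedged sketch of weighted crowd voting for phish detection: each worker's
# phish/not-phish vote counts in proportion to that worker's historical
# accuracy, so reliable workers outvote unreliable ones even when
# outnumbered. The 0.5 default accuracy for unknown workers is an assumption.

def weighted_verdict(votes, accuracy, default=0.5):
    """votes: {worker_id: True if 'phish'}; accuracy: {worker_id: 0..1}.
    Returns True when the weighted phish mass exceeds half the total weight."""
    total = sum(accuracy.get(w, default) for w in votes)
    phish = sum(accuracy.get(w, default) for w, v in votes.items() if v)
    return phish > total / 2

votes = {"w1": True, "w2": True, "w3": False}
accuracy = {"w1": 0.9, "w2": 0.4, "w3": 0.95}
# Two phish votes (weights 0.9 + 0.4 = 1.3) against one not-phish vote
# (0.95); 1.3 > 2.25 / 2, so the URL is flagged.
print(weighted_verdict(votes, accuracy))  # → True
```

With unweighted majority voting the outcome here would be the same, but flipping w1's accuracy to 0.1 would let the single high-accuracy dissenter win, which is the behavior weighting is meant to buy.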
In mobile and pervasive computing environments, users may easily exchange information via ubiquitously available computers ranging from sensors, embedded processors, wearable and handheld devices, to servers. The unprecedented level of interaction between users and intelligent environments poses unparalleled privacy challenges. We identify a new attack that can be used to acquire users' private information -- using reciprocity norms. By mutually exchanging information with users, an attacker may use a psychological method, the norm of reciprocity, to acquire users' private information. We implemented software to provide a rich shopping experience in a mobile and pervasive computing environment and embedded the reciprocity attack. Our experiments showed that participants were more willing to provide some types of private information under reciprocity attacks. To the best of our knowledge, this is the first attempt to understand the impact of the norm of reciprocity as an attack in mobile and pervasive computing environments. These human factors should be taken into consideration when designing security measures to protect people's privacy.