Proceedings of the 2009 Symposium on Usable Privacy and Security
Topics of SOUPS include, but are not limited to: innovative security or privacy functionality and design, new applications of existing models or technology, field studies of security or privacy technology, usability evaluations of security or privacy features, security testing of usability features, and lessons learned from deploying and using usable privacy and security features.
The following articles are from "Proceedings of the 2009 Symposium on Usable Privacy and Security":
Raja, Fahimeh, Hawkey, Kirstie and Beznosov, Konstantin (2009): Revealing hidden context: improving mental models of personal firewall users. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 1. Available online
The Windows Vista personal firewall provides its diverse users with a basic interface that hides many operational details. However, concealing the impact of network context on the security state of the firewall may result in users developing an incorrect mental model of the protection provided by the firewall. We present a study of participants' mental models of Vista Firewall (VF). We investigated changes to those mental models and their understanding of the firewall's settings after working with both the VF basic interface and our prototype. Our prototype was designed to support development of a more contextually complete mental model through inclusion of network location and connection information. We found that participants produced richer mental models after using the prototype than when working with the VF basic interface; they were also significantly more accurate in their understanding of the configuration of the firewall. Based on our results, we discuss methods of improving user understanding of underlying system states by revealing hidden context, while considering the tension between complexity of the interface and security of the system.
Kobsa, Alfred, Sonawalla, Rahim, Tsudik, Gene, Uzun, Ersin and Wang, Yang (2009): Serial hook-ups: a comparative usability study of secure device pairing methods. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 10. Available online
Secure Device Pairing is the bootstrapping of secure communication between two previously unassociated devices over a wireless channel. The human-imperceptible nature of wireless communication, lack of any prior security context, and absence of a common trust infrastructure open the door for Man-in-the-Middle (aka Evil Twin) attacks. A number of methods have been proposed to mitigate these attacks, each requiring user assistance in authenticating information exchanged over the wireless channel via some human-perceptible auxiliary channels, e.g., visual, acoustic or tactile. In this paper, we present results of the first comprehensive and comparative study of eleven notable secure device pairing methods. Usability measures include: task performance times, ratings on System Usability Scale (SUS), task completion rates, and perceived security. Study subjects were controlled for age, gender and prior experience with device pairing. We present overall results and identify problematic methods for certain classes of users as well as methods best-suited for various device configurations.
Kainda, Ronald, Flechais, Ivan and Roscoe, A. W. (2009): Usability and security of out-of-band channels in secure device pairing protocols. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 11. Available online
Initiating and bootstrapping secure, yet low-cost, ad-hoc transactions is an important challenge that needs to be overcome if the promise of mobile and pervasive computing is to be fulfilled. For example, mobile payment applications would benefit from the ability to pair devices securely without resorting to conventional mechanisms such as shared secrets, a Public Key Infrastructure (PKI), or trusted third parties. A number of methods have been proposed for doing this based on the use of a secondary out-of-band (OOB) channel that either authenticates information passed over the normal communication channel or otherwise establishes an authenticated shared secret which can be used for subsequent secure communication. The success of these methods depends largely on the performance and effectiveness of the OOB channel, which in turn usually depends on people performing certain critical tasks correctly. In this paper, we present the results of a comparative usability study on methods that propose using humans to implement the OOB channel and argue that most of these proposals fail to take into account factors that may seriously harm the security and usability of a protocol. Our work builds on previous research in the usability of pairing methods and the accompanying recommendations for designing user interfaces that minimise human mistakes. Our findings show that the traditional methods of comparing and typing short strings into mobile devices are still preferable despite claims that new methods are more usable and secure, and that user interface design alone is not sufficient to mitigate human mistakes in OOB channels.
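The "compare short strings" approach that this study evaluates can be illustrated with a minimal sketch: both devices derive the same short decimal string from the pairing transcript and display it for the user to compare. The function name and transcript format below are invented for illustration; real SAS-style protocols additionally use commitments so a man-in-the-middle cannot grind the short string.

```python
import hashlib

def short_auth_string(session_data: bytes, digits: int = 6) -> str:
    """Toy sketch: derive a short decimal string from the pairing
    transcript. Both devices display it; the user checks they match.
    Illustrative only -- not a complete pairing protocol."""
    h = hashlib.sha256(session_data).digest()
    value = int.from_bytes(h[:4], "big") % (10 ** digits)
    return str(value).zfill(digits)

# Hypothetical transcript of the in-band exchange (both sides see it).
transcript = b"deviceA-pubkey|deviceB-pubkey|nonces"
sas = short_auth_string(transcript)
print(sas)  # same 6-digit string appears on both devices
```

A mismatch between the two displayed strings signals that the in-band exchange was tampered with, which is exactly the human-performed check whose error rates the paper measures.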
Halprin, Ran and Naor, Moni (2009): Games for extracting randomness. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 12. Available online
Randomness is a necessary ingredient in various computational tasks and especially in cryptography, yet many existing mechanisms for obtaining randomness suffer from numerous problems. We suggest utilizing the behavior of humans while playing competitive games as an entropy source, in order to enhance the quality of the randomness in the system. This idea has two motivations: (i) results in experimental psychology indicate that humans are able to behave quite randomly when engaged in competitive games in which a mixed strategy is optimal, and (ii) people have an affection for games, and this leads to longer play yielding more entropy overall. While the resulting strings are not perfectly random, we show how to integrate such a game into a robust pseudo-random generator that enjoys backward and forward security. We construct a game suitable for randomness extraction, and test users' playing patterns. The results show that in less than two minutes a human can generate 128 bits that are 2^-64-close to random, even on a limited computer such as a PDA that might have no other entropy source. As a proof of concept, we supply complete working software for a robust PRG. It generates random sequences based solely on human game play, and thus does not depend on the operating system or any external factor.
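The general idea of mixing human game play into a generator with forward security can be sketched as follows. This is a rough illustration under my own assumptions, not the authors' construction: game events are hashed into a state, and the state is ratcheted after each output so that a later compromise cannot recover earlier outputs.

```python
import hashlib

class EntropyPool:
    """Toy sketch (not the paper's extractor/PRG): mix low-entropy
    human game events into a hash-based state, then derive outputs."""
    def __init__(self):
        self._state = b"\x00" * 32

    def feed(self, event: bytes) -> None:
        # Mix each game event (e.g. a move plus its timestamp) into the state.
        self._state = hashlib.sha256(self._state + event).digest()

    def next_block(self) -> bytes:
        # Derive an output block, then ratchet the state forward so that
        # learning the new state does not reveal past outputs.
        out = hashlib.sha256(b"out" + self._state).digest()
        self._state = hashlib.sha256(b"ratchet" + self._state).digest()
        return out

pool = EntropyPool()
for move in [b"left:1712", b"up:1745", b"right:1790"]:
    pool.feed(move)
b1 = pool.next_block()
b2 = pool.next_block()
```

The ratcheting step is what a robust PRG uses to get forward security; the paper's construction additionally handles the fact that human play is only partially random, which a bare hash chain like this does not formally guarantee.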
Chow, Richard, Oberst, Ian and Staddon, Jessica (2009): Sanitization's slippery slope: the design and study of a text revision assistant. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 13. Available online
For privacy reasons, sensitive content may be revised before it is released. The revision often consists of redaction, that is, the "blacking out" of sensitive words and phrases. Redaction has the side effect of reducing the utility of the content, often so much that the content is no longer useful. Consequently, government agencies and others are increasingly exploring the revision of sensitive content as an alternative to redaction that preserves more content utility. We call this practice sanitization. In a sanitized document, names might be replaced with pseudonyms and sensitive attributes might be replaced with hypernyms. Sanitization adds to redaction the challenge of determining what words and phrases reduce the sensitivity of content. We have designed and developed a tool to assist users in sanitizing sensitive content. Our tool leverages the Web to automatically identify sensitive words and phrases and quickly evaluates revisions for sensitivity. The tool, however, does not identify all sensitive terms and mistakenly marks some innocuous terms as sensitive. This is unavoidable because of the difficulty of the underlying inference problem and is the main reason we have designed a sanitization assistant as opposed to a fully-automated tool. We have conducted a small study of our tool in which users sanitize biographies of celebrities to hide the celebrity's identity both with and without our tool. The user study suggests that while the tool is very valuable in encouraging users to preserve content utility and can preserve privacy, this usefulness and apparent authoritativeness may lead to a "slippery slope" in which users neglect their own judgment in favor of the tool's.
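The name-to-pseudonym and attribute-to-hypernym replacements described above can be sketched with a hypothetical, hand-written mapping. The paper's actual tool identifies sensitive terms automatically via the Web; the hard-coded dictionary here only illustrates what a sanitized revision looks like compared with redaction.

```python
# Toy sanitization sketch: replace names with pseudonyms and sensitive
# attributes with hypernyms. The mapping is invented for illustration;
# it is not the paper's Web-based sensitivity detector.
REPLACEMENTS = {
    "Alice Smith": "Person A",          # name -> pseudonym
    "pancreatic cancer": "an illness",  # attribute -> hypernym
    "Pittsburgh": "a U.S. city",        # attribute -> hypernym
}

def sanitize(text: str) -> str:
    for sensitive, safer in REPLACEMENTS.items():
        text = text.replace(sensitive, safer)
    return text

doc = "Alice Smith of Pittsburgh was treated for pancreatic cancer."
print(sanitize(doc))
# -> "Person A of a U.S. city was treated for an illness."
```

Unlike redaction, the revised sentence remains readable and partially useful, which is the utility-preservation argument the paper makes; the open problem the tool addresses is deciding *which* terms need replacing.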
Kluever, Kurt Alfred and Zanibbi, Richard (2009): Balancing usability and security in a video CAPTCHA. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 14. Available online
We present a technique for using content-based video labeling as a CAPTCHA task. Our CAPTCHAs are generated from YouTube videos, which contain labels (tags) supplied by the person that uploaded the video. They are graded using a video's tags, as well as tags from related videos. In a user study involving 184 participants, we were able to increase the human success rate on our video
Smetters, D. K. and Good, Nathan (2009): How users use access control. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 15. Available online
Existing technologies for file sharing differ widely in the granularity of control they give users over who can access their data; achieving finer-grained control generally requires more user effort. We want to understand what level of control users need over their data, by examining what sorts of access policies users actually create in practice. We used automated data mining techniques to examine the real-world use of access control features present in standard document sharing systems in a corporate environment as used over a long (> 10 year) time span. We find that while users rarely need to change access policies, the policies they do express are actually quite complex. We also find that users participate in larger numbers of access control and email sharing groups than measured by self-report in previous studies. We hypothesize that much of this complexity might be reduced by considering these policies as examples of simpler access control patterns. From our analysis of what access control features are used and where errors are made, we propose a set of design guidelines for access control systems themselves and the tools used to manage them, intended to increase usability and decrease error.
Sasse, M. Angela, Karat, Clare-Marie and Maxion, Roy (2009): Designing and evaluating usable security and privacy technology. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 16. Available online
Weaver, Nicholas (2009): Think Evil (tm). In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 17. Available online
Peeters, Roel, Kohlweiss, Markulf, Preneel, Bart and Sulmon, Nicky (2009): Threshold things that think: usable authorization for resharing. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 18. Available online
Karp, Alan, Stiegler, Marc and Close, Tyler (2009): Not one click for security?. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 19. Available online
Besmer, Andrew, Lipford, Heather Richter, Shehab, Mohamed and Cheek, Gorrell (2009): Social applications: exploring a more secure framework. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 2. Available online
Online social network sites, such as MySpace, Facebook and others have grown rapidly, with hundreds of millions of active users. A new feature on many sites is social applications -- applications and services written by third party developers that provide additional functionality linked to a user's profile. However, current application platforms put users at risk by permitting the disclosure of large amounts of personal information to these applications and their developers. This paper formally abstracts and defines the current access control model applied to these applications, and builds on it to create a more secure framework. We do so in the interest of preserving as much of the current architecture as possible, while seeking to provide a practical balance between security and privacy needs of the users, and the needs of the applications to access users' information. We present a user study of our interface design for setting a user-to-application policy. Our results indicate that the model and interface work for users who are more concerned with their privacy, but we still need to explore alternate means of creating policies for those who are less concerned.
Church, Luke, Anderson, Jonathan, Bonneau, Joseph and Stajano, Frank (2009): Privacy stories: confidence in privacy behaviors through end user programming. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 20. Available online
Gao, Haichang and Liu, Xiyang (2009): A new graphical password scheme against spyware by using CAPTCHA. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 21. Available online
Benisch, Michael, Kelley, Patrick Gage, Sadeh, Norman, Sandholm, Tuomas, Tsai, Janice, Cranor, Lorrie Faith and Drielsma, Paul Hankes (2009): The impact of expressiveness on the effectiveness of privacy mechanisms for location-sharing. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 22. Available online
Motahari, Sara, Ziavras, Sotirios and Jones, Quentin (2009): Designing for different levels of social inference risk. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 23. Available online
Zenebe, Azene, Turner, Claude, Feng, Jinjuan, Lazar, Jonathan and O'Leary, Mike (2009): Integrating usability and accessibility in information assurance education. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 24. Available online
Hayashi, Eiji, Hong, Jason and Christin, Nicolas (2009): Educated guess on graphical authentication schemes: vulnerabilities and countermeasures. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 25. Available online
Likarish, Peter, Dunbar, Don, Hourcade, Juan Pablo and Jung, Eunjin (2009): BayeShield: conversational anti-phishing user interface. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 26. Available online
Maetz, Yves, Onno, Stéphane and Heen, Olivier (2009): Recall-a-story, a story-telling graphical password system. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 27. Available online
Fisler, Kathi and Krishnamurthi, Shriram (2009): Escape from the matrix: lessons from a case-study in access-control requirements. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 28. Available online
Tsai, Janice, Egelman, Serge, Cranor, Lorrie and Acquisti, Alessandro (2009): The impact of privacy indicators on search engine browsing patterns. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 29. Available online
PhishGuru is an embedded training system that teaches users to avoid falling for phishing attacks by delivering a training message when the user clicks on the URL in a simulated phishing email. In previous lab and real-world experiments, we validated the effectiveness of this approach. Here, we extend our previous work with a 515-participant, real-world study in which we focus on long-term retention and the effect of two training messages. We also investigate demographic factors that influence training and general phishing susceptibility. Results of this study show that (1) users trained with PhishGuru retain knowledge even after 28 days; (2) adding a second training message to reinforce the original training decreases the likelihood of people giving information to phishing websites; and (3) training does not decrease users' willingness to click on links in legitimate messages. We found no significant difference between males and females in the tendency to fall for phishing emails both before and after the training. We found that participants in the 18-25 age group were consistently more vulnerable to phishing attacks on all days of the study than older participants. Finally, our exit survey results indicate that most participants enjoyed receiving training during their normal use of email.
Bonneau, Joseph, Anderson, Jonathan and Church, Luke (2009): Privacy suites: shared privacy for social networks. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 30. Available online
McQuaid, Michael, Zheng, Kai, Melville, Nigel and Green, Lee (2009): Usable deidentification of sensitive patient care data. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 31. Available online
Saxena, Nitesh, Uddin, Md. Borhan and Voris, Jonathan (2009): Treat 'em like other devices: user authentication of multiple personal RFID tags. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 34. Available online
Kay, Matthew and Terry, Michael (2009): Textured agreements: re-envisioning electronic consent. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 35. Available online
Jaferian, Pooya, Botta, David, Hawkey, Kirstie and Beznosov, Konstantin (2009): A multi-method approach for user-centered design of identity management systems. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 36. Available online
Lucas, Matthew and Borisov, Nikita (2009): flyByNight: mitigating the privacy risks of social networking. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 37. Available online
Karlof, Chris, Tygar, J. D. and Wagner, David (2009): Conditioned-safe ceremonies and a user study of an application to web authentication. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 38. Available online
Bicakci, Kemal, Yuceel, Mustafa, Erdeniz, Burak, Gurbaslar, Hakan and Atalay, Nart Bedin (2009): Graphical passwords as browser extension: implementation and usability study. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 39. Available online
Kelley, Patrick Gage, Bresee, Joanna, Cranor, Lorrie Faith and Reeder, Robert W. (2009): A "nutrition label" for privacy. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 4. Available online
We used an iterative design process to develop a privacy label that presents to consumers the ways organizations collect, use, and share personal information. Many surveys have shown that consumers are concerned about online privacy, yet current mechanisms to present website privacy policies have not been successful. This research addresses the present gap in the communication and understanding of privacy policies, by creating an information design that improves the visual presentation and comprehensibility of privacy policies. Drawing from nutrition, warning, and energy labeling, as well as from the effort towards creating a standardized banking privacy notification, we present our process for constructing and refining a label tuned to privacy. This paper describes our design methodology; findings from two focus groups; and accuracy, timing, and likeability results from a laboratory study with 24 participants. Our study results demonstrate that compared to existing natural language privacy policies, the proposed privacy label allows participants to find information more quickly and accurately, and provides a more enjoyable information seeking experience.
Schechter, Stuart, Brush, A. J. Bernheim and Egelman, Serge (2009): It's no secret: measuring the security and reliability of authentication via 'secret' questions. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 40. Available online
Schechter, Stuart, Egelman, Serge and Reeder, Robert W. (2009): It's not what you know, but who you know: a social approach to last-resort authentication. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 41. Available online
Tsai, Janice, Kelley, Patrick, Drielsma, Paul Hankes, Cranor, Lorrie, Hong, Jason and Sadeh, Norman (2009): Who's viewed you?: the impact of feedback in a mobile location-sharing application. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 43. Available online
Hasegawa, Madoka, Christin, Nicolas and Hayashi, Eiji (2009): New directions in multisensory authentication. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 44. Available online
Golle, Philippe (2009): Machine learning attacks against the Asirra CAPTCHA. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 45. Available online
McDonald, Aleecia M., Reeder, Robert W., Kelley, Patrick Gage and Cranor, Lorrie Faith (2009): A comparative study of online privacy policies and formats. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 46. Available online
Ravichandran, Ramprasad, Benisch, Michael, Kelley, Patrick Gage and Sadeh, Norman (2009): Capturing social networking privacy preferences: can default policies help alleviate tradeoffs between expressiveness and user burden?. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 47. Available online
Sachs, Eric (2009): Redirects to login pages are bad, or are they?. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 48. Available online
Gillis, Nancy (2009): Short and long term research suggestions for NSF and NIST. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 49. Available online
Goecks, Jeremy, Edwards, W. Keith and Mynatt, Elizabeth D. (2009): Challenges in supporting end-user privacy and security management with social navigation. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 5. Available online
Social navigation is a promising approach for supporting privacy and security management. By aggregating and presenting the choices made by others, social navigation systems can provide users with easily understandable guidance on security and privacy decisions, rather than requiring that they understand low-level technical details in order to make informed decisions. We have developed two prototype systems to explore how social navigation can help users manage their privacy and security. The Acumen system employs social navigation to address a common privacy activity, managing Internet cookies, and the Bonfire system uses social navigation to help users manage their personal firewall. Our experiences with Acumen and Bonfire suggest that, despite the promise of social navigation, there are significant challenges in applying these techniques to the domains of end-user privacy and security management. Due to features of these domains, individuals may misuse community data when making decisions, leading to incorrect individual decisions, inaccurate community data, and "herding" behavior that is an example of what economists term an informational cascade. By understanding this phenomenon in these terms, we develop and present two general approaches for mitigating herding in social navigation systems that support end-user security and privacy management, mitigation via algorithms and mitigation via user interaction. Mitigation via user interaction is a novel and promising approach to mitigating cascades in social navigation systems.
Patrick, Andrew (2009): Ecological validity in studies of security and human behaviour. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 50. Available online
Garfinkel, Simson (2009): Invisible HCI-SEC: ways of re-architecting the operating system to increase usability and security. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 51. Available online
Zurko, Mary Ellen (2009): Technology transfer of successful usable security research into product. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 52. Available online
Little, Linda (2009): The family and communication technologies. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 53. Available online
Karvonen, Kristiina (2009): How does the emergence of reputation mechanisms affect the overall trust formation mechanisms, implicit and explicit, in the online environment?. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 54. Available online
Little, Linda, Sillence, Elizabeth and Briggs, Pam (2009): Ubiquitous systems and the family: thoughts about the networked home. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 6. Available online
Developments in ubiquitous and pervasive computing herald a future in which computation is embedded into our daily lives. Such a vision raises important questions about how people, especially families, will be able to engage with and trust such systems whilst maintaining privacy and individual boundaries. To begin to address such issues, we have recently conducted a wide-reaching study eliciting trust, privacy and identity concerns about pervasive computing. Over three hundred UK citizens participated in 38 focus groups. The groups were shown Videotaped Activity Scenarios depicting pervasive or ubiquitous computing applications in a number of contexts including shopping. The data raises a number of important issues from a family perspective in terms of access, control, responsibility, benefit and complexity. The findings also highlight the conflict between increased functionality and the subtle social interactions that sustain family bonds. We present a Pre-Concept Evaluation Tool (PRECET) for use in design and implementation of ubicomp systems.
Luca, Alexander De, Denzel, Martin and Hussmann, Heinrich (2009): Look into my eyes!: can you guess my password?. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 7. Available online
Authentication systems for public terminals and thus public spaces have to be fast, easy and secure. Security is of utmost importance since the public setting allows manifold attacks from simple shoulder surfing to advanced manipulations of the terminals. In this work, we present EyePassShapes, an eye tracking authentication method that has been designed to meet these requirements. Instead of using standard eye tracking input methods that require precise and expensive eye trackers, EyePassShapes uses eye gestures. This input method works well with data about the relative eye movement, which is much easier to detect than the precise position of the user's gaze and works with cheaper hardware. Different evaluations on technical aspects, usability, security and memorability show that EyePassShapes can significantly increase security while being easy to use and fast at the same time.
Just, Mike and Aspinall, David (2009): Personal choice and challenge questions: a security and usability assessment. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 8. Available online
Challenge questions are an increasingly important part of mainstream authentication solutions, yet there are few published studies concerning their usability or security. This paper reports on an experimental investigation into user-chosen questions. We collected questions from a large cohort of students, in a way that encouraged participants to give realistic data. The questions allow us to consider possible modes of attack and to judge the relative effort needed to crack a question, according to an innovative model of the knowledge of the attacker. Using this model, we found that many participants were likely to have chosen questions with low entropy answers, yet they believed that their challenge questions would resist attacks from a stranger. However, by asking multiple questions we are able to show a marked improvement in security for most users. In a second stage of our experiment, we applied existing metrics to measure the usability of the questions and answers. Despite having youthful memories and choosing their own questions, users made errors more frequently than desirable.
Schechter, Stuart and Reeder, Robert W. (2009): 1 + 1 = you: measuring the comprehensibility of metaphors for configuring backup authentication. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 9. Available online
Backup authentication systems verify the identity of users who are unable to perform primary authentication usually as a result of forgetting passwords. The two most common authentication mechanisms used for backup authentication by webmail services, personal authentication questions and email-based authentication, are insufficient. Many webmail users cannot benefit from email-based authentication because their webmail account is their primary email account. Personal authentication questions are frequently forgotten and prone to security failures, as illustrated by the increased scrutiny they received following their implication in the compromise of Republican vice presidential candidate Sarah Palin's Yahoo! account. One way to address the limitations of existing backup authentication mechanisms is to add new ones. Since no mechanism is completely secure, system designers must support configurations that require multiple authentication tasks be completed to authenticate. Can users comprehend such a rich set of new options? We designed two metaphors to help users comprehend which combinations of authentication tasks would be sufficient to authenticate. We performed a usability study to measure users' comprehension of these metaphors. We find that the vast majority of users comprehend screenshots that represent authentication as an exam, in which points are awarded for the completion of individual authentication tasks and authentication succeeds when an authenticatee has accumulated enough points to achieve a passing score.
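The "exam" metaphor that participants found most comprehensible can be sketched as a points policy: each completed backup-authentication task awards points, and authentication succeeds at a passing score. The task names and point values below are invented for illustration; the paper studies the metaphor's comprehensibility, not any specific point assignment.

```python
# Toy sketch of the "exam" metaphor for configuring backup authentication.
# Task names and point values are hypothetical, not from the paper.
TASK_POINTS = {
    "security_question": 25,
    "email_code": 50,
    "trustee_voucher": 50,
}
PASSING_SCORE = 75

def authenticated(completed_tasks) -> bool:
    """Succeed when the completed tasks' points reach the passing score."""
    return sum(TASK_POINTS[t] for t in completed_tasks) >= PASSING_SCORE

assert not authenticated(["security_question"])            # 25 points: fail
assert authenticated(["security_question", "email_code"])  # 75 points: pass
```

Because no single mechanism is fully trusted, a passing score higher than any single task's points forces an attacker (or a legitimate user) to complete multiple independent tasks, which is the design rationale the abstract describes.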