David A. Siegel


Personal Homepage
Dray & Associates, Inc. (http://www.dray.com)

David is a well-known user experience researcher and consultant, and co-owner of Dray & Associates, Inc. He specializes in using field user studies, contextual research, and naturalistic usability evaluation to help guide product concept, strategy, and interaction design. He has published and taught on a variety of user-centered design topics, including many workshops and tutorials at professional conferences in the U.S., Europe, and Africa. He is the author of a tutorial on analysis of qualitative data, which has been offered at many international conferences, and has also taught courses on Qualitative Field Research Methods, International Usability Research, and International User Research. Together with Dr. Susan Dray, he edited the Business Column in ACM's magazine interactions for many years, and currently edits its Evaluation and Usability Forum. He has taught a graduate seminar on Qualitative Field Research for User-Centered Design, offered through a program of the Natural Sciences and Engineering Research Council of Canada. He obtained his Bachelor's degree from Princeton University and his Ph.D. in Psychology from UCLA.

Publications



Siegel, David A., Dray, Susan M. (2011): A Professional Empiricist Manifesto. In Interactions, 18 (4) pp. 82-87.

Siegel, David A. (2010): The mystique of numbers: belief in quantitative approaches to segmentation and persona development. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems, 2010. pp. 4721-4732. http://doi.acm.org/10.1145/1753846.1754221

Siegel, David A., Dray, Susan M. (2007): Contextual User Research for International Software Design. In: Aykin, Nuray M. (eds.) UI-HCII 2007 - Second International Conference on Usability and Internationalization - Part II July 22-27, 2007, Beijing, China. pp. 266-273. http://dx.doi.org/10.1007/978-3-540-73289-1_32

Dray, Susan M., Siegel, David A. (2007): Understanding Users In Context: An In-Depth Introduction to Fieldwork for User Centered Design. In: Baranauskas, Maria Cecília Calani, Palanque, Philippe A., Abascal, Julio, Barbosa, Simone Diniz Junqueira (eds.) DEGAS 2007 - Proceedings of the 1st International Workshop on Design and Evaluation of e-Government Applications and Services September 11th, 2007, Rio de Janeiro, Brazil. pp. 712-713. http://dx.doi.org/10.1007/978-3-540-74800-7_96

Dray, Susan M., Siegel, David A. (2007): Dealing with the Challenges of Interpreting International User Research. In: Stephanidis, Constantine (eds.) UAHCI 2007 - 4th International Conference on Universal Access in Human-Computer Interaction - Part 1 July 22-27, 2007, Beijing, China. pp. 75-81. http://dx.doi.org/10.1007/978-3-540-73279-2_9

Siegel, David A., Reid, Bill, Dray, Susan M. (2006): IT security: protecting organizations in spite of themselves. In Interactions, 13 (3) pp. 20-27.

Dray, Susan M., Karat, Clare-Marie, Rosenberg, Daniel, Siegel, David A., Wixon, Dennis (2005): Is ROI an effective approach for persuading decision-makers of the value of user-centered design. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems, 2005. pp. 1168-1169. http://doi.acm.org/10.1145/1056808.1056865

Siegel, David A., Dray, Susan M. (2005): Avoiding the next schism: ethnography and usability. In Interactions, 12 (2) pp. 58-61.

Dray, Susan M., Siegel, David A. (2004): Remote possibilities?: international usability testing at a distance. In Interactions, 11 (2) pp. 10-17. http://doi.acm.org/10.1145/971258.971264

Siegel, David A., Dray, Susan M. (2003): Living on the edges: user-centered design and the dynamics of specialization in organizations. In Interactions, 10 (5) pp. 18-27.

Siegel, David A. (2003): The business case for user-centered design: increasing your power of persuasion. In Interactions, 10 (3) pp. 30-36.

Dray, Susan M., Siegel, David A. (2003): Addressing the Digital Divide through User-Centered Design. In: Evers, Vanessa, Rose, Kerstin, Honold, Pia, Coronado, José, Day, Donald L. (eds.) Designing for Global Markets 5 - IWIPS 2003 - Fifth International Workshop on Internationalisation of Products and Systems 17-19 July, 2003, Berlin, Germany. pp. 177-188.

Dray, Susan M., Siegel, David A., Feldman, E., Potenza, M. (2002): Why do version 1.0 and not release it?: Conducting field trials of the tablet PC. In Interactions, 9 (2) pp. 11-16.

Siegel, David A., Rouchka, Tracy (2002): Demo-driven design or design-driven demos: vaporware, demos, and prototypes. In Interactions, 9 (4) pp. 25-30.

Siegel, David A. (2001): Business: New kid on the block: marketing organizations and interaction design. In Interactions, 8 (2) pp. 19-23. http://doi.acm.org/10.1145/361897.361908

Dray, Susan M., Siegel, David A. (2001): The past recaptured: in search of lost paradigms. In Behaviour and Information Technology, 20 (5) pp. 315-321.

Dray, Susan M., Siegel, David A. (1999): Business: penny-wise, pound-wise: making smart trade-offs in planning usability studies. In Interactions, 6 (3) pp. 25-30. http://www.acm.org/pubs/articles/journals/interactions/1999-6-3/p25-dray/p25-dray.pdf

Dray, Susan M., Siegel, David A. (1998): Business: User-Centered Design and the "Vision Thing". In Interactions, 5 (2) pp. 16-20. http://www.acm.org/pubs/articles/journals/interactions/1998-5-2/p16-dray/p16-dray.pdf

Siegel, David A., Dray, Susan M. (2011): International Contextual Field Research. In: "Global Usability". Springer.

Siegel, David A., Dray, Susan M. (2005): Making the Business Case for International User Centered Design. In: "Cost-Justifying Usability: An Update for the Internet Age". Morgan Kaufmann.


15.10 Commentary by David A. Siegel

I appreciate the opportunity to comment on Gilbert Cockton’s chapter on usability. My comments come from the perspective of someone who has practiced user experience (UX) research of many types as a consultant. Although I have done my share of usability evaluations, almost all of my work currently consists of in vivo contextual research, with a focus on finding ways to increase value to the user. The product teams I work with often include internal usability specialists, and I well understand their roles within their teams and the challenges they face. Finally, my prior career as a psychologist has given me a very healthy respect for the difficulties of measuring and understanding human behavior in a meaningful way, and impatience with people who gloss over these challenges. 

To begin with points of agreement, I applaud Gilbert’s emphasis on the need to consider usability in the context of all other factors that influence the value people obtain from interactive products.  I also agree with his critique of the methodological limitations of laboratory usability evaluation. I could not agree more that contextual research is usually much more powerful than laboratory usability evaluation as an approach to understanding the user experience holistically and to gaining insights that will drive UX design towards greater overall value. I also agree with Gilbert’s call to usability professionals to focus on the larger issues.

With this said, however, I have a number of concerns about the chapter's portrayal and critique of usability as an inherently limited, marginal contributor to development of great products. In regard to practice, there are many gradations of skill and wisdom, and some unknown proportion of usability practitioners may deserve to be confronted with the criticisms Gilbert raises. However, I question the idea that these criticisms are true of usability practice in principle. I believe that most mature usability practitioners are aware of the issues he raises, would agree with many of his points, and work hard to address them in various ways.  In the discussion that follows, I will present an alternate view of usability’s role as a fundamental contributor to product value. This requires considering usability at two levels: as an abstract concept and as a field of practice.

First, one note on terminology: throughout this commentary I use the word “product” to refer to anything that is being designed for interactive use, be it software, website, system, or device, or any new features of these.

15.10.1 Usability and Value as Abstract Constructs

It has become commonplace to emphasize a distinction between usability and value, and also to claim that “experience” has superseded usability. This treats usability as though it is distinct from both of these other concepts. Even though usability is generally acknowledged to be important, it is portrayed as quite subordinate. In Gilbert’s chapter, this is reflected in the idea that usability is merely a “hygiene factor,” the absence of which can block the delivery of value or reduce it by adding to costs, but one which can never go beyond neutral as a contributor to value. In my view, this greatly understates the contribution of usability to value. The two concepts are far more intertwined than this. Attempts to abstract value from usability are just as flawed as the reverse.

The notion that ease of use is a separate issue from value, although one that affects it, has much face validity. It seems to make sense to think of value as a function of benefit weighed against costs, with usability problems counted in the costs column. Unfortunately, this is consistent with the notion of usability as “a feature,” something that makes usability professionals cringe, just as the idea of design as the “lipstick” applied to a product in the last stage makes designers cringe. In my view, usability divorced from value is as undefined as the sound of one hand clapping. Usability can only be defined in the context of benefit. By this I do not mean benefit in principle, but rather the benefit anticipated or experienced by the user. At one level, this is because usability and experienced benefit interact in complex ways. But beyond this, there are many products where usability is itself the primary value proposition. In fact, the central value proposition of most technological tools is that they make something of value easier to achieve than it used to be. A mobile phone has value because its portability enables communication on the move, and its portability matters precisely because it makes the phone more usable in those situations.

In another example, a large medical organization I am familiar with recently adopted a new, integrated digital medical record system. Initially, there was a great deal of grumbling about how complex and confusing it was. I saw the classic evidence of problems in the form of notes stuck on computer monitors warning people not to do seemingly intuitive things and reminding them of the convoluted workarounds. However, more recently, I have heard nurses make comments about the benefit of the system. Doctors’ orders are entered electronically and made automatically available to the appropriate departments. As a result, patients now can come to the clinic for a follow up laboratory test without having to remember to bring a written copy of the lab order.  “Usability” is not simply the issue of whether doctors can figure out how to enter the order in the system and direct it to the lab rather than the ophthalmology department, although that is part of it. The benefit has to do with its overall success in reducing the usability problems of an earlier process that used to be difficult to coordinate and error prone, and this increase in usability only matters because it is delivering a real benefit.


Sometimes, usability seems detached from value when the goal is fulfilled at the end of a sequence of steps, but the steps along the way are confusing. However, it can be the separation from the experience of value that creates the usability problem. For example, if people trying to book an online hotel reservation get lost in preliminary screens where they first have to create an account, we might see usability as only relevant to the cognitive aspects of the sign-up process, and as a mere hygiene factor. But when users become disoriented because they do not understand what a preliminary process has to do with their goal, it can be precisely because they cannot see the value of the preliminary steps. That is, they can’t see how the steps contribute to something they care about and lead them towards their goal. If they did, the subparts of the process would both be more understandable and would acquire value of their own, just as a well-designed hammer gains value not simply in its own right, but because it is understood as a more effective tool for driving nails (which are valued because of the value of the carpentry tasks they enable, and so on). This is simultaneously a usability problem and an “experience of value” problem. For this reason, a common challenge of usability is to convey to users that they are making progress towards an outcome that they value.

For example, in one product that I worked on, users were offered the opportunity to enroll for health insurance benefits that claimed to be highly personalized. In addition to setting different benefit levels for different members of their families, users could compose their own preferred networks of medical specialists, for which they would receive the highest reimbursement levels. Unfortunately, the actual user experience did not appear to live up to this. As soon as the user entered identifying information, the system applied defaults to all the decisions that the user was supposedly able to personalize. It only fulfilled its value proposition of personalization by allowing the user to “edit” the final configuration—13 screens into the process. Along the way, the user experienced the sense that decisions were being imposed. There was not even an indication to the user that the opportunity to make personal choices was coming eventually. Unfortunately, the system did not start by asking the user which choices mattered to them and what their preferences were, so that it could factor these things in before presenting a result to the user.

How should we construe this? As a usability problem? As a problem in delivery of value? As a failure in the design of a user experience? It is all of these at the same time. The discrepancy from the expected perception of value is a primary cause of the confusion users felt. None of these constructs (usability, value, experience) can be defined without incorporating the others. If we parse and remove the meaning that we can attribute to any of them, we drain the meaning from the others. Disputes about which is the legitimate language to describe them are at best just ways to emphasize different faces of the same phenomenon, and at worst semantic quibbling. This means that usability is something more than just another item to add into the costs column when we weigh them against benefits to arrive at value. It also means we can’t answer the question of whether something is usable without also answering the question, “What matters?”

15.10.2 Usability Practice in Product Development

While Gilbert and I may agree on the need for a more holistic focus on user experience, we may disagree about whether usability in practice actually takes this holistic view. Reducing the profession to a particular type of laboratory evaluation makes it seem limited and can raise questions about its relevance. While, as I said, I agree with Gilbert’s critique of the methodological limitations of this approach, the profession is far broader and more diverse than this. Furthermore, despite its limitations, traditional usability evaluation often contributes significant value in the product development context, at least when practiced by reflective professionals. Below, I comment on some of the major issues Gilbert raises with regard to usability practice.

Is 'Ease of Use' still relevant?

Although some interaction design patterns have become established, and an increasing number of users have gained generalizable skills in learning a variety of new interaction patterns, this does not mean that ease of use as an issue has gone away or even declined in importance. For several reasons, it makes more sense to see the spectrum of usability issues to be addressed as having evolved. First, the spectrum of users remains very large and is constantly expanding, and there are always some at an entry level. Second, although with experience users may gain knowledge that is transferrable from one family of products to another, this can be both an asset and a source of confusion, because the analogies among product designs are never perfect. Third, as innovation continues to create new products with new capabilities, the leading edge of UX keeps moving forward. On that leading edge, there are always new sets of design challenges, approaches, and tradeoffs to consider. Fourth, the world does not consist only of products intended to create experiences for their own sake as opposed to those that support tasks (a distinction that is not necessarily so clear). Products that are designed to facilitate and manage goal-oriented tasks and to support productivity continue to have a tremendous impact on human life, and we have certainly not learned to optimize ease of interaction with them. Finally, usability is continually driven forward by competition within a product domain.

Another claim in the chapter that suggests limited relevance for usability is that good product teams do not need a dedicated usability person. This is too simplistic. Of course, a designated usability person does not create usability single-handedly. That is the cumulative result of everything that goes into the product. However, how much specialized work there is for a usability person depends on many factors. We need to take into account the variability among ways that product teams can be structured, the magnitude of the UX design challenges they face in their product space, the complexity of the product or family of inter-related products that the usability person supports, how incremental versus innovative the products are, what the risk tolerance is for usability problems, how heterogeneous the user population and user contexts are, how much user persistence is needed for usage to be reinforced and sustained by experiences of value, and so on. The simplistic statement also does not address the fact that some usability work takes more effort to carry out than other work: doing realistic research with consumers is generally much easier than doing realistic research inside enterprises.

As a matter of fact, in actual practice teams often do not have usability professionals assigned to them full time, because these people often support multiple product teams in a matrix organizational structure. There are benefits to this in terms of distributing a limited resource around the company. But there are also drawbacks. This structure often contributes to the usability person being inundated with requests to evaluate superficial aspects of design. It can also exclude the usability person from the integrative discussions that lead to fundamental aspects of product definition and design and determine the core intended value of the product. Some usability people may accept this limited role complacently and passively respond to team requests, in the hopes of providing “good service,” but many others recognize the challenges of this role structure and work very hard to get involved with deeper issues of value, exactly as Gilbert urges them to.

Do usability professionals only focus on cognition?

Several points in Gilbert’s critique of practice are based on a limited view of what usability people do. It is true that laboratory usability evaluation typically does try to isolate cognitive factors by treating the user’s goals and motivation as givens, rather than attempting to discover them. Often, however, it is the fit of the assumed goal that is in question, and that is what makes the biggest difference in user experience.

But many usability professionals spend a great deal of time doing things other than laboratory tests, including, increasingly, fundamental in-context user research. For many years, usability evaluation has served as a platform to promote systematic attention to deeper issues of value to the user. Many usability professionals deeply understand the complex, entangled relationship between ease of use and value, and work to focus on broad questions of how technology can deliver experienced value. Some usability people have succeeded in getting involved earlier in the design process, when they can contribute to deeper levels of decision-making. This has led to their involvement in answering questions about value, like “What will matter to the user?” or “What will influence whether people will really adopt it?” rather than only asking, “Could the user, in principle, figure out how to do it if they wanted to?” There are certainly people who are narrow specialists in a particular set of techniques focused on ease of use, but they do not own the definition of the field, and specialization per se is not bad.

What can usability people contribute?

Gilbert is correct that UX skills are increasingly distributed across roles. He lists a number of such skills, but missing from the list is the skill of doing disciplined research to evaluate evidence for the assumptions, claims, beliefs, or proposed designs of the product team, whether these are claims about what people need and will value, or whether a particular interface design will enable efficient performance.

Gilbert points out that there is no cookbook of infallible usability approaches. This is not a surprise, and indeed, we should never have expected such a thing. Such cookbooks do not exist for any complex field, and there is no way to guarantee that a practical measurement approach captures the core meaning of a complex construct. I do agree wholeheartedly with Gilbert when he points out the many factors that can complicate the process of interpreting usability findings due to this lack of a cookbook of infallible methods and the presence of many confounds. These issues argue for the need for greater professionalism among usability practitioners, not for downgrading the profession or marginalizing it on the periphery of the product development team. Professionalism requires that practitioners have expert understanding of the limitations of methods, expertise in modifying them to address different challenges, the dedication to continually advance their own processes, and the skill to help drive the evolution of practice over time. At a basic level, mature usability professionals recognize that results from a single evaluation do not give an absolute measure of overall usability. They are careful about overgeneralizing. They at least attempt to construct tasks that they expect users will care about, and attempt to recruit users who they feel will engage realistically with the tasks. They wrestle with how best to achieve these things given the constraints they work under. Those who do not recognize the challenges of validity, or who apply techniques uncritically, are certainly open to criticism, or should be considered mere technicians, but, again, they do not represent the best of usability practice.

In the absence of scientific certainty, where is the value of usability practice? In the product development context, this should not be judged by how well usability meets criteria of scientific rigor. It is more relevant to ask how it compares to and complements other types of evidence that are used as a basis for product definition, audience targeting, functional specification, and design decisions. This means we need to consider usability’s role within the social and political processes of product development.

Membership in product teams often requires allegiance to the product concept and design approach. Sometimes, demonstrations of enthusiasm are a pre-requisite for hiring. Often, it is risky for team members to challenge the particular compromises that have been made previously to adapt the product to various constraints or a design direction that has become established, since these all have vested interests behind them. In this context, the fact that usability methods (or approaches as Gilbert rightfully calls them) are scientifically flawed does not mean they are without value. It is not as though all the other streams of influence that affect product development are based on solid science while usability is voodoo. When you consider the forces that drive product development, it is clear that subjective factors dominate many of them, for example:

  • Follow-the-leader design mentality
  • Imperfect and sometimes cherry-picked market research data
  • Internal politics
  • Impressions derived from other faulty approaches to testing the product’s appeal
  • Beliefs about who future users will be

Product decisions are also deeply influenced by legitimate considerations that are difficult to evaluate objectively, much less to weigh against each other, such as:

  • How to deal with legacy issues and the difficulty of innovating with an installed base
  • How to weigh engineering feasibility and manufacturing cost against other considerations
  • Market timing
  • The business need to differentiate from the competition, even if the competition has a better approach
  • The need to promote the inherent advantages of your technical strengths and downplay their limitations

In this context, a discipline that offers structured and transparent processes for introducing evidence-based critical thinking into the mix adds value, even though its methods are imperfect and its evidence open to interpretation. Sometimes, usability evaluation is a persuasive tool to get product teams to prioritize addressing serious problems that everyone knew existed, but that could not receive focus earlier. Sometimes this is needed to counterbalance the persuasive techniques of other disciplines, which may have less scientific basis than usability. Sometimes usability results provide a basis to resolve disputes that have no perfect answer and that have previously paralyzed teams. And sometimes they have the effect of triggering discussions about controversial things that would otherwise have been suppressed.

Does usability contribute to innovation?

Sometimes, usability in practice is portrayed as a mere quality assurance process, or as Gilbert says, a hygiene factor. It is often equated with evaluation as distinct from discovery and idea generation. In many ways, this is a false distinction. Careful evaluation of what exists now can inspire invention and direct creativity towards things that will make the most difference. Practices like rapid iterative design reflect efforts to integrate evaluation and invention. Practices that are considered to be both discovery and invention processes, like contextual design, fall on a continuum with formative usability evaluation and naturalistic evaluation in the usage context. Of course, usability professionals differ in their skills for imagining new ways of meeting human needs, envisioning new forms of interactive experience, or even generating multiple alternative solutions to an information architecture problem or interface design problem. Some may lack these skills. However, the practice of usability is clearly enhanced by them. Those who can integrate evaluation and invention can add more value to the product development process and can help ensure usability/value in the ultimate product.

15.10.3 Conclusion

Certainly one can find examples of bad usability practice, and I cannot judge what other people may have encountered. Of course, there is also a lot of bad market research, bad design, bad business decision-making, bad engineering, and bad manufacturing. Let us not define the field based on its worst practice, or even on its lowest-common-denominator practice. Failure to take into account the kinds of confounds Gilbert identifies is indeed bad practice, because it will lead to misleading information. Handing over to a team narrow findings, minimally processed, excludes the usability practitioner from the integrative dialogue in which various inputs and courses of action are weighed against each other, and from the creative endeavor of proposing solutions. This will indeed limit usability practitioners to a tactical contributor role and will also result in products that are less likely to provide value for the users.

Finally, to any usability practitioners who think that usability is some kind of essence that resides in a product or design, and that can be objectively and accurately measured in the lab: Stop it. If you think that there is a simple definition of ease of use that can be assessed in an error-free way via a snapshot with an imperfect sample of representative users and simulated tasks: Stop it. If you think usability does not evolve over time or interact with user motivation and expectations and experience of benefit: Stop it. If you think that ease of use abstracted from everything else is the sole criterion for product success or experienced value: Stop it! If you think you are entitled to unilaterally impose your recommendations on team decision-making: Stop it. You are embarrassing the profession!