Publication statistics

Pub. period: 2002-2012
Pub. count: 37
Number of co-authors: 58



Co-authors

Number of publications with 3 favourite co-authors:

Chia Shen: 16
Clifton Forlines: 13
Ravin Balakrishnan: 10

 

 

Productive colleagues

Daniel Wigdor's 3 most productive colleagues in number of publications:

Ravin Balakrishnan: 108
Jacob O. Wobbrock: 71
Patrick Baudisch: 57
 
 
 


Daniel Wigdor

Personal Homepage:
http://www.dgp.toronto.edu/~dwigdor

I am a Ph.D. candidate at the University of Toronto in the DGP lab with Professor Ravin Balakrishnan, and am also working with Dr. Chia Shen at Mitsubishi Electric Research Labs in Cambridge, MA. Since 2001, I have been an instructor in the Department of Computer Science. My professional interests are in mobile devices and in teaching; my academic interests are computer science (HCI), political science, and law.

 

Publications by Daniel Wigdor (bibliography)

2012
 

Zarek, Adam, Wigdor, Daniel and Singh, Karan (2012): SNOUT: one-handed use of capacitive touch devices. In: Proceedings of the 2012 International Conference on Advanced Visual Interfaces 2012. pp. 140-147. Available online

SNOUT is a novel interface overlay designed for occasional no-hand or one-handed use of handheld capacitive touch devices. Inspired by the desire to use these devices in scenarios where visually focused bimanual input is awkward, we performed a pair of studies intended to evaluate the potential of the nose to provide touch input. These studies influenced our design principles, resulting in the construction of a 'nose mode' which enables object selection, continuous parameter control, and speech-based text entry. Selection is accomplished via a nose tap, using a colour overlay and peripheral colour feedback to correct mistakes. The other two techniques are activated by a nose tap, but use the accelerometer to control parameters and speech-to-text for text entry. An evaluation of SNOUT shows it to effectively render handheld capacitive touch devices operational in scenarios where they are presently unusable.

© All rights reserved Zarek et al. and/or ACM Press

 

Yang, Xing-Dong, Grossman, Tovi, Wigdor, Daniel and Fitzmaurice, George (2012): Magic finger: always-available input through finger instrumentation. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 147-156. Available online

We present Magic Finger, a small device worn on the fingertip, which supports always-available input. Magic Finger inverts the typical relationship between the finger and an interactive surface: with Magic Finger, we instrument the user's finger itself, rather than the surface it is touching. Magic Finger senses touch through an optical mouse sensor, enabling any surface to act as a touch screen. Magic Finger also senses texture through a micro RGB camera, allowing contextual actions to be carried out based on the particular surface being touched. A technical evaluation shows that Magic Finger can accurately

© All rights reserved Yang et al. and/or ACM Press

 

Ng, Albert, Lepinski, Julian, Wigdor, Daniel, Sanders, Steven and Dietz, Paul (2012): Designing for low-latency direct-touch input. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 453-464. Available online

Software designed for direct-touch interfaces often utilizes a metaphor of direct physical manipulation of pseudo "real-world" objects. However, current touch systems typically take 50-200ms to update the display in response to a physical touch action. Utilizing a high-performance touch demonstrator, subjects were able to experience touch latencies ranging from current levels down to about 1ms. Our tests show that users greatly prefer lower latencies, and noticeable improvement continued well below 10ms. This level of performance is difficult to achieve in commercial computing systems using current technologies. As an alternative, we propose a hybrid system that provides low-fidelity visual feedback immediately, followed by high-fidelity visuals at standard levels of latency.

© All rights reserved Ng et al. and/or ACM Press
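
The hybrid design in this paper lends itself to a compact illustration: echo every touch immediately with cheap visuals, and let the full-fidelity response follow at the application's normal latency. Below is a minimal simulation of that two-channel pipeline; the 100 ms figure, the event format, and all names are illustrative assumptions, not details from the paper.

```python
import time
from collections import deque

HIGH_FIDELITY_DELAY = 0.100  # assumed application render latency (100 ms)

def run_demo(touch_events):
    """Echo each touch at once on a low-fidelity channel, while the
    high-fidelity visuals trail behind a simulated render pipeline."""
    pending = deque()  # (ready_time, event) awaiting high-fidelity render
    for event in touch_events:
        now = time.monotonic()
        print(f"low-fidelity cursor at {event}")  # immediate feedback
        pending.append((now + HIGH_FIDELITY_DELAY, event))
        # Release any events whose full render has "finished".
        while pending and pending[0][0] <= time.monotonic():
            _, done = pending.popleft()
            print(f"high-fidelity visuals for {done}")
        time.sleep(0.02)  # ~50 Hz input polling
    for ready, event in pending:  # drain the remaining queued frames
        time.sleep(max(0.0, ready - time.monotonic()))
        print(f"high-fidelity visuals for {event}")

run_demo([(10, 10), (12, 14), (15, 19)])
```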

 

Block, Florian, Wigdor, Daniel, Phillips, Brenda Caldwell, Horn, Michael S. and Shen, Chia (2012): FlowBlocks: a multi-touch UI for crowd interaction. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 497-508. Available online

Multi-touch technology lends itself to collaborative crowd interaction (CI). However, common tap-operated widgets are impractical for CI, since they are susceptible to accidental touches and interference from other users. We present a novel multi-touch interface called FlowBlocks in which every UI action is invoked through a small sequence of user actions: dragging parametric UI-Blocks, and dropping them over operational UI-Docks. The FlowBlocks approach is advantageous for CI because it a) makes accidental touches inconsequential; and b) introduces design parameters for mutual awareness, concurrent input, and conflict management. FlowBlocks was successfully used on the floor of a busy natural history museum. We present the complete design space and describe a year-long iterative design and evaluation process which employed the Rapid Iterative Test and Evaluation (RITE) method in a museum setting.

© All rights reserved Block et al. and/or ACM Press

2011
 

Wigdor, Daniel, Benko, Hrvoje, Pella, John, Lombardo, Jarrod and Williams, Sarah (2011): Rock & rails: extending multi-touch interactions with shape gestures to enable precise spatial manipulations. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1581-1590. Available online

Direct touch manipulations enable the user to interact with on-screen content in a direct and easy manner, closely mimicking spatial manipulations in the physical world. However, they also suffer from well-known issues of precision, occlusion, and an inability to isolate different degrees of freedom in spatial manipulations. We present a set of interactions, called Rock & Rails, that augment existing direct touch manipulations with shape-based gestures, thus providing on-demand gain control, occlusion avoidance, and separation of constraints in 2D manipulation tasks. Using shape gestures in combination with direct manipulations allows us to do this without ambiguity in detection and without resorting to manipulation handles, which break the direct manipulation paradigm. Our set of interactions was evaluated by 8 expert graphic designers and was found to be easy to learn and master, as well as effective in accomplishing a precise graphical layout task.

© All rights reserved Wigdor et al. and/or their publisher

 

Findlater, Leah, Wobbrock, Jacob O. and Wigdor, Daniel (2011): Typing on flat glass: examining ten-finger expert typing patterns on touch surfaces. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2453-2462. Available online

Touch screen surfaces large enough for ten-finger input have become increasingly popular, yet typing on touch screens pales in comparison to physical keyboards. We examine typing patterns that emerge when expert users of physical keyboards touch-type on a flat surface. Our aim is to inform future designs of touch screen keyboards, with the ultimate goal of supporting touch-typing with limited tactile feedback. To study the issues inherent to flat-glass typing, we asked 20 expert typists to enter text under three conditions: (1) with no visual keyboard and no feedback on input errors, then (2) with and (3) without a visual keyboard, but with some feedback. We analyzed touch contact points and hand contours, looking at attributes such as natural finger positioning, the spread of hits among individual keys, and the pattern of non-finger touches. We also show that expert typists exhibit spatially consistent key press distributions within an individual, which provides evidence that eyes-free touch-typing may be possible on touch surfaces and points to the role of personalization in such a solution. We conclude with implications for design.

© All rights reserved Findlater et al. and/or their publisher
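
The finding that expert typists are spatially consistent within themselves points toward personalization: classify each touch against key-press centroids learned from that user's own typing. A minimal nearest-centroid sketch follows; the coordinates, units, and key set are made up for illustration and are not the authors' model.

```python
from math import dist

# Hypothetical per-user centroids learned from one typist's own touches,
# in millimetres on the flat surface.
USER_CENTROIDS = {
    "f": (45.0, 20.0),
    "j": (95.0, 20.0),
    "space": (70.0, 45.0),
}

def classify_touch(point, centroids):
    """Assign a touch to the nearest personalized key centroid."""
    return min(centroids, key=lambda key: dist(point, centroids[key]))

print(classify_touch((47.1, 21.3), USER_CENTROIDS))  # -> 'f'
```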

 

Ni, Tao, Karlson, Amy K. and Wigdor, Daniel (2011): AnatOnMe: facilitating doctor-patient communication using a projection-based handheld device. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 3333-3342. Available online

In this paper, we explore the use of a projection-based handheld device to facilitate in-clinic doctor-patient communication. We present the user-centered design process used to understand the workflow of medical professionals and to identify challenges they currently face in communicating information to patients. Based on the lessons learned, we developed AnatOnMe, a prototype projection-based hand-held system for enhancing information exchange in the current practice of one medical sub-specialty, physical therapy. We then present the results of a controlled experiment to understand the desirability and learning tradeoffs of using AnatOnMe to teach medical concepts on three potential projection surfaces -- wall, model, and patient body. Finally, we present results of two expert reviews of the system.

© All rights reserved Ni et al. and/or their publisher

 

Kane, Shaun K., Morris, Meredith Ringel, Perkins, Annuska Z., Wigdor, Daniel, Ladner, Richard E. and Wobbrock, Jacob O. (2011): Access overlays: improving non-visual access to large touch screens for blind users. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 273-282. Available online

Many touch screens remain inaccessible to blind users, and those approaches to providing access that do exist offer minimal support for interacting with large touch screens or spatial data. In this paper, we introduce a set of three software-based access overlays intended to improve the accessibility of large touch screen interfaces, specifically interactive tabletops. Our access overlays are called edge projection, neighborhood browsing, and touch-and-speak. In a user study, 14 blind users compared access overlays to an implementation of Apple's VoiceOver screen reader. Our results show that two of our techniques were faster than VoiceOver, that participants correctly answered more questions about the screen's layout using our techniques, and that participants overwhelmingly preferred our techniques. We developed several applications demonstrating the use of access overlays, including an accessible map kiosk and an accessible board game.

© All rights reserved Kane et al. and/or ACM Press
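
Of the three overlays, edge projection is the easiest to sketch: map each on-screen target to the nearest point on the display's border so it can be found by tracing the bezel. The geometry below is an illustrative guess at that idea, not the paper's implementation.

```python
def edge_projection(targets, width, height):
    """Project each named target onto the nearest display edge."""
    projected = {}
    for name, (x, y) in targets.items():
        # Distance to each edge; the smallest wins.
        _, side = min((x, "left"), (width - x, "right"),
                      (y, "top"), (height - y, "bottom"))
        projected[name] = {"left": (0, y), "right": (width, y),
                           "top": (x, 0), "bottom": (x, height)}[side]
    return projected

print(edge_projection({"museum": (300, 120)}, 1920, 1080))  # {'museum': (300, 0)}
```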

 

Annett, Michelle, Grossman, Tovi, Wigdor, Daniel and Fitzmaurice, George (2011): Medusa: a proximity-aware multi-touch tabletop. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 337-346. Available online

We present Medusa, a proximity-aware multi-touch tabletop. Medusa uses 138 inexpensive proximity sensors to detect a user's presence and location, determine body and arm locations, distinguish between the right and left arms, and map touch points to specific users and specific hands. Our tracking algorithms and hardware designs are described. Exploring this unique design, we develop and report on a collection of interactions enabled by Medusa in support of multi-user collaborative design, specifically within the context of Proxi-Sketch, a multi-user UI prototyping tool. We discuss design issues, system implementation, limitations, and generalizable concepts throughout the paper.

© All rights reserved Annett et al. and/or ACM Press

 

Wigdor, Daniel and Wixon, Dennis (2011): Brave NUI World: Designing Natural User Interfaces for Touch and Gesture. Morgan Kaufmann

Touch and gestural devices have been hailed as the next evolutionary step in human-computer interaction. As software companies struggle to catch up with one another in developing the next great touch-based interface, designers are charged with the daunting task of keeping up with advances in new technology and this new aspect of user experience design. Product and interaction designers, developers, and managers are already well versed in UI design, but touch-based interfaces add a new level of complexity. They need quick references and real-world examples in order to make informed decisions when designing for these interfaces. Brave NUI World is the first practical book for product and interaction developers designing touch and gesture interfaces. Written by developers of industry-first, multi-touch, multi-user products, this book gives you the necessary tools and information to integrate touch and gesture practices into your daily work, presenting scenarios, problem solving, metaphors, and techniques intended to avoid making mistakes.

* Provides easy-to-apply design guidance for the unique challenge of creating touch- and gesture-based user interfaces
* Considers diverse user needs and contexts, real-world successes and failures, and a look into the future of NUI
* Presents thirty scenarios, giving practitioners a multitude of considerations for making informed design decisions and helping to ensure that missteps are never made again

© All rights reserved Wigdor and Wixon and/or Morgan Kaufmann

2010
 

Wigdor, Daniel (2010): Architecting next-generation user interfaces. In: Proceedings of the 2010 International Conference on Advanced Visual Interfaces 2010. pp. 16-22. Available online

Emerging technologies provide platforms for new devices, applications, and user interfaces. These technologies have shown potential in early research, but their true utility and measures of success lie in their ability to reflect and enhance the capabilities of the people who use them. My research seeks to address this problem by thoroughly examining and understanding humans, hardware, and software to create tools that enable users in new ways and meet real needs. In this talk, I will discuss both sides of the coin: the potential, and the limitations, of emerging input technologies that require fundamentally different user interface designs to realize their full utility. With particular focus on the area of multi-touch and surface computing, I will describe how leveraging and mirroring human motor, cognitive, and social abilities and needs can produce interfaces that are both learnable and enabling of high-bandwidth communication between the user and the computer. Further, such leverage and reflection also ensures that the resulting tools solve real problems and enable their users in ways that a traditional mouse-based user interface does not.

© All rights reserved Wigdor and/or his/her publisher

 

Wigdor, Daniel and Morrison, Gerald (2010): Designing user interfaces for multi-touch and surface-gesture devices. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 3193-3196. Available online

Initially, designers had only a keyboard and lines of text to design for. Then the mouse enabled a richer design ecosystem with two-dimensional planes of UI. Now the design and research communities have access to multi-touch and gestural interfaces that have been released on a mass-market scale, allowing them to design and develop new, unique, and richer design patterns and approaches. These methods are no longer confined to research projects or innovation labs, but are now offered on a large scale to millions of consumers. With these new interface behaviors, in combination with the multiple types of hardware devices that can affect the interface, there are new problems and patterns that have increased the complexity of designing interfaces. The aim of this SIG is to provide a forum for designers, researchers, and usability professionals to discuss new and emerging technology trends for multi-touch and gesture interfaces, as well as current design patterns within these interfaces. Our goal is to cross-pollinate ideas and current solutions from practitioners and researchers across communities, to help drive awareness of this new field for those interested in, just starting in, or currently involved in the design of these systems.

© All rights reserved Wigdor and Morrison and/or their publisher

 

Morris, Meredith Ringel, Lombardo, Jarrod and Wigdor, Daniel (2010): WeSearch: supporting collaborative search and sensemaking on a tabletop display. In: Proceedings of ACM CSCW10 Conference on Computer-Supported Cooperative Work 2010. pp. 401-410. Available online

Groups of users often have shared information needs -- for example, business colleagues need to conduct research relating to joint projects and students must work together on group homework assignments. In this paper, we introduce WeSearch, a collaborative Web search system designed to leverage the benefits of tabletop displays for face-to-face collaboration and organization tasks. We describe the design of WeSearch and explain the interactions it affords. We then describe an evaluation in which eleven groups used WeSearch to conduct real collaborative search tasks. Based on our study's findings, we analyze the effectiveness of the features introduced by WeSearch.

© All rights reserved Morris et al. and/or their publisher

2009
 

Wigdor, Daniel, Jiang, Hao, Forlines, Clifton, Borkin, Michelle and Shen, Chia (2009): WeSpace: the design, development and deployment of a walk-up and share multi-surface visual collaboration system. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1237-1246. Available online

We present WeSpace -- a collaborative work space that integrates a large data wall with a multi-user multi-touch table. WeSpace has been developed for a population of scientists who frequently meet in small groups for data exploration and visualization. It provides a low-overhead walk-up-and-share environment for users with their own personal applications and laptops. We present our year-long effort from initial ethnographic studies, to iterations of design, development and user testing, to the current experiences of these scientists carrying out their collaborative research in the WeSpace. We shed light on the utility and value of the multi-touch table, its manifestation and usage patterns, and the changes in workflow that WeSpace has brought about.

© All rights reserved Wigdor et al. and/or ACM Press

 

Wigdor, Daniel, Fletcher, Joe and Morrison, Gerald (2009): Designing user interfaces for multi-touch and gesture devices. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 2755-2758. Available online

Initially, designers had only a keyboard and lines of text to design for. Then the mouse enabled a richer design ecosystem with two-dimensional planes of UI. Now the design and research communities have access to multi-touch and gestural interfaces that have been released on a mass-market scale, allowing them to design and develop new, unique, and richer design patterns and approaches. These methods are no longer confined to research projects or innovation labs, but are now offered on a large scale to millions of consumers. With these new interface behaviors, in combination with the multiple types of hardware devices that can affect the interface, there are new problems and patterns that have increased the complexity of designing interfaces. The aim of this SIG is to provide a forum for designers, researchers, and usability professionals to discuss new and emerging technology trends for multi-touch and gesture interfaces, as well as current design patterns within these interfaces. Our goal is to cross-pollinate ideas and current solutions from practitioners and researchers across communities, to help drive awareness of this new field for those interested in, just starting in, or currently involved in the design of these systems.

© All rights reserved Wigdor et al. and/or ACM Press

 

Freeman, Dustin, Benko, Hrvoje, Morris, Meredith Ringel and Wigdor, Daniel (2009): ShadowGuides: visualizations for in-situ learning of multi-touch and whole-hand gestures. In: Proceedings of the 2009 ACM International Conference on Interactive Tabletops and Surfaces 2009. pp. 165-172. Available online

We present ShadowGuides, a system for in-situ learning of multi-touch and whole-hand gestures on interactive surfaces. ShadowGuides provides on-demand assistance to the user by combining visualizations of the user's current hand posture as interpreted by the system (feedback) and available postures and completion paths necessary to finish the gesture (feedforward). Our experiment compared participants learning gestures with ShadowGuides to those learning with video-based instruction. We found that participants learning with ShadowGuides remembered more gestures and expressed significantly higher preference for the help system.

© All rights reserved Freeman et al. and/or their publisher

 

Wigdor, Daniel, Williams, Sarah, Cronin, Michael, Levy, Robert, White, Katie, Mazeev, Maxim and Benko, Hrvoje (2009): Ripples: utilizing per-contact visualizations to improve user interaction with touch displays. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 3-12. Available online

We present Ripples, a system which enables visualizations around each contact point on a touch display and, through these visualizations, provides feedback to the user about successes and errors of their touch interactions. Our visualization system is engineered to be overlaid on top of existing applications without requiring the applications to be modified in any way, and functions independently of the application's responses to user input. Ripples reduces the fundamental problem of ambiguity of feedback when an action results in an unexpected behaviour. This ambiguity can be caused by a wide variety of sources. We describe the ambiguity problem, and identify those sources. We then define a set of visual states and transitions needed to resolve this ambiguity, of use to anyone designing touch applications or systems. We then present the Ripples implementation of visualizations for those states, and the results of a user study demonstrating user preference for the system, and demonstrating its utility in reducing errors.

© All rights reserved Wigdor et al. and/or their publisher
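
The "visual states and transitions" the paper defines can be modelled as a small per-contact state machine. The states and events below are invented for illustration; the paper specifies its own set.

```python
# Hypothetical per-contact states; Ripples defines its own catalogue.
TRANSITIONS = {
    ("idle", "touch_down"): "pressed",
    ("pressed", "move"): "dragging",
    ("pressed", "touch_up"): "activated",
    ("dragging", "touch_up"): "released",
    ("pressed", "reject"): "error",  # e.g. the touch had no app response
}

def step(state, event):
    """Advance one contact's visual state; unknown events keep the state."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ("touch_down", "move", "touch_up"):
    state = step(state, event)
    print(event, "->", state)
```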

2008
 

Jiang, Hao, Wigdor, Daniel, Forlines, Clifton, Borkin, Michelle, Kauffmann, Jens and Shen, Chia (2008): LivOlay: interactive ad-hoc registration and overlapping of applications for collaborative visual exploration. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1357-1360. Available online

The interoperability of disparate data types and sources has been a long-standing problem and a hindering factor for efficacy and efficiency in visual exploration applications. In this paper, we present a solution, called LivOlay, that enables the rapid visual overlay of live data rendered in different applications. Our tool addresses datasets in which visual registration of the information is necessary in order to allow for thorough understanding and visual analysis. We also discuss initial evaluation and user feedback of LivOlay.

© All rights reserved Jiang et al. and/or ACM Press

 

Jiang, Hao, Wigdor, Daniel, Forlines, Clifton and Shen, Chia (2008): System design for the WeSpace: Linking personal devices to a table-centered multi-user, multi-surface environment. In: Third IEEE International Workshop on Tabletops and Interactive Surfaces Tabletop 2008 October 1-3, 2008, Amsterdam, The Netherlands. pp. 97-104. Available online

 

Brandl, Peter, Forlines, Clifton, Wigdor, Daniel, Haller, Michael and Shen, Chia (2008): Combining and measuring the benefits of bimanual pen and direct-touch interaction on horizontal interfaces. In: Levialdi, Stefano (ed.) AVI 2008 - Proceedings of the working conference on Advanced Visual Interfaces May 28-30, 2008, Napoli, Italy. pp. 154-161. Available online

2007
 

Wigdor, Daniel, Shen, Chia, Forlines, Clifton and Balakrishnan, Ravin (2007): Perception of elementary graphical elements in tabletop and multi-surface environments. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 473-482. Available online

Information shown on a tabletop display can appear distorted when viewed by a seated user. Even worse, the impact of this distortion is different depending on the location of the information on the display. In this paper, we examine how this distortion affects the perception of the basic graphical elements of information visualization shown on displays at various angles. We first examine perception of these elements on a single display, and then compare this to perception across displays, in order to evaluate the effectiveness of various elements for use in a tabletop and multi-display environment. We found that the perception of some graphical elements is more robust to distortion than others. We then develop recommendations for building data visualizations for these environments.

© All rights reserved Wigdor et al. and/or ACM Press

 

Grossman, Tovi, Wigdor, Daniel and Balakrishnan, Ravin (2007): Exploring and reducing the effects of orientation on text readability in volumetric displays. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 483-492. Available online

Volumetric displays, which provide a 360° view of imagery illuminated in true 3D space, are a promising platform for interactive 3D applications. However, presenting text in volumetric displays can be a challenge, as the text may not be oriented towards the user. This is especially problematic with multiple viewers, as the text could, for example, appear forwards to one user, and backwards to another. In a first experiment we determined the effects of 3D rotations on text readability. Based on the results, we developed and evaluated a new technique which optimizes text orientation for multiple viewers. This technique provided 33% faster group reading times in a collaborative experimental task.

© All rights reserved Grossman et al. and/or ACM Press
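
A simple baseline for orienting text toward several viewers is the circular mean of their angular positions around the display. The paper's technique is more sophisticated; this sketch only conveys the flavour of optimizing one orientation for a group.

```python
from math import atan2, sin, cos, radians, degrees

def group_orientation(viewer_angles_deg):
    """Circular mean of viewer positions (degrees around the display)."""
    x = sum(cos(radians(a)) for a in viewer_angles_deg)
    y = sum(sin(radians(a)) for a in viewer_angles_deg)
    return degrees(atan2(y, x)) % 360

print(group_orientation([350, 10, 30]))  # ~10 degrees: faces all three viewers
```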

 

Forlines, Clifton, Wigdor, Daniel, Shen, Chia and Balakrishnan, Ravin (2007): Direct-touch vs. mouse input for tabletop displays. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 647-656. Available online

We investigate the differences -- in terms of both quantitative performance and subjective preference -- between direct-touch and mouse input for unimanual and bimanual tasks on tabletop displays. The results of two experiments show that for bimanual tasks performed on tabletops, users benefit from direct-touch input. However, our results also indicate that mouse input may be more appropriate for a single user working on tabletop tasks requiring only single-point interaction.

© All rights reserved Forlines et al. and/or ACM Press

 

Wigdor, Daniel, Forlines, Clifton, Baudisch, Patrick, Barnwell, John and Shen, Chia (2007): Lucid touch: a see-through mobile device. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 269-278. Available online

Touch is a compelling input modality for interactive devices; however, touch input on the small screen of a mobile device is problematic because a user's fingers occlude the graphical elements he wishes to work with. In this paper, we present LucidTouch, a mobile device that addresses this limitation by allowing the user to control the application by touching the back of the device. The key to making this usable is what we call pseudo-transparency: by overlaying an image of the user's hands onto the screen, we create the illusion of the mobile device itself being semi-transparent. This pseudo-transparency allows users to accurately acquire targets while not occluding the screen with their fingers and hand. LucidTouch also supports multi-touch input, allowing users to operate the device simultaneously with all 10 fingers. We present initial study results that indicate that many users found touching on the back to be preferable to touching on the front, due to reduced occlusion, higher precision, and the ability to make multi-finger input.

© All rights reserved Wigdor et al. and/or ACM Press

 

Grossman, Tovi and Wigdor, Daniel (2007): Going Deeper: a Taxonomy of 3D on the Tabletop. In: Second IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2007 October 10-12, 2007, Newport, Rhode Island, USA. pp. 137-144. Available online

 

Wigdor, Daniel, Penn, Gerald, Ryall, Kathy, Esenther, Alan and Shen, Chia (2007): Living with a Tabletop: Analysis and Observations of Long Term Office Use of a Multi-Touch Table. In: Second IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2007 October 10-12, 2007, Newport, Rhode Island, USA. pp. 60-67. Available online

2006
 

Wigdor, Daniel, Shen, Chia, Forlines, Clifton and Balakrishnan, Ravin (2006): Effects of display position and control space orientation on user preference and performance. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 309-318. Available online

In many environments, it is often the case that input is made to displays that are positioned non-traditionally relative to one or more users. This typically requires users to perform interaction tasks under transformed input-display spatial mappings, and the literature is unclear as to how such transformations affect performance. We present two experiments that explore the impact of display space position and input control space orientation on users' subjective preference and objective performance in a docking task. Our results provide guidelines as to optimal display placement and control orientation in collaborative computing environments with one or more shared displays.

© All rights reserved Wigdor et al. and/or ACM Press
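
A transformed input-display mapping of the kind studied here can be written as a rotation between control space and display space; the interaction question is how performance degrades as that rotation grows. A minimal sketch, with the angle convention assumed for illustration:

```python
from math import sin, cos, radians

def control_to_display(dx, dy, rotation_deg):
    """Rotate an input displacement into display coordinates when the
    control space is rotated relative to the display."""
    t = radians(rotation_deg)
    return (dx * cos(t) - dy * sin(t), dx * sin(t) + dy * cos(t))

# With the control space rotated 90 degrees, pushing "right" moves "up".
print(control_to_display(1.0, 0.0, 90))  # -> (~0.0, 1.0)
```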

 

Forlines, Clifton, Shen, Chia, Wigdor, Daniel and Balakrishnan, Ravin (2006): Exploring the effects of group size and display configuration on visual search. In: Proceedings of ACM CSCW06 Conference on Computer-Supported Cooperative Work 2006. pp. 11-20. Available online

Visual search is the subject of countless psychology studies in which people search for target items within a scene. The bulk of this literature focuses on the individual with the goal of understanding the human perceptual system. In life, visual search is performed not only by individuals, but also by groups -- a team of doctors may study an x-ray and a team of analysts may study a satellite photograph. In this paper, we examine the issues one should consider when searching as a group. We present the details of an experiment designed to investigate the impact of group size on visual search performance, and how different display configurations affected that performance. We asked individuals, pairs, and groups of four people to participate in a baggage screening task in which these teams searched simulated x-rays for prohibited items. Teams conducted these searches on single monitors, a row of four monitors, and on a single horizontal display. Our findings suggest that groups commit far fewer errors in visual search tasks, although they may perform more slowly than individuals under certain conditions. The interaction between group size and display configuration turned out to be an important factor as well.

© All rights reserved Forlines et al. and/or ACM Press

 

Wigdor, Daniel, Leigh, Darren, Forlines, Clifton, Shipman, Samuel, Barnwell, John, Balakrishnan, Ravin and Shen, Chia (2006): Under the table interaction. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 259-268. Available online

We explore the design space of a two-sided interactive touch table, designed to receive touch input from both the top and bottom surfaces of the table. By combining two registered touch surfaces, we are able to offer a new dimension of input for co-located collaborative groupware. This design accomplishes the goal of increasing the relative size of the input area of a touch table while maintaining its direct-touch input paradigm. We describe the interaction properties of this two-sided touch table, report the results of a controlled experiment examining the precision of user touches to the underside of the table, and a series of application scenarios we developed for use on inverted and two-sided tables. Finally, we present a list of design recommendations based on our experiences and observations with inverted and two-sided tables.

© All rights reserved Wigdor et al. and/or ACM Press

 

Forlines, Clifton, Esenther, Alan, Shen, Chia, Wigdor, Daniel and Ryall, Kathy (2006): Multi-user, multi-display interaction with a single-user, single-display geospatial application. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 273-276. Available online

In this paper, we discuss our adaptation of a single-display, single-user commercial application for use in a multi-device, multi-user environment. We wrap Google Earth, a popular geospatial application, in a manner that allows for synchronized coordinated views among multiple instances running on different machines in the same co-located environment. The environment includes a touch-sensitive tabletop display, three vertical wall displays, and a TabletPC. A set of interaction techniques that allow a group to manage and exploit this collection of devices is presented.

© All rights reserved Forlines et al. and/or ACM Press

 

Hancock, Mark S., Carpendale, M. Sheelagh T., Vernier, Frederic, Wigdor, Daniel and Shen, Chia (2006): Rotation and Translation Mechanisms for Tabletop Interaction. In: First IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2006 5-7 January, 2006, Adelaide, Australia. pp. 79-88. Available online

 

Wigdor, Daniel, Shen, Chia, Forlines, Clifton and Balakrishnan, Ravin (2006): Table-centric interactive spaces for real-time collaboration. In: Celentano, Augusto (ed.) AVI 2006 - Proceedings of the working conference on Advanced visual interfaces May 23-26, 2006, Venezia, Italy. pp. 103-107. Available online

 

Shen, Chia, Ryall, Kathy, Forlines, Clifton, Esenther, Alan, Vernier, Frederic, Everitt, Katherine, Wu, Mike, Wigdor, Daniel, Morris, Meredith Ringel, Hancock, Mark S. and Tse, Edward (2006): Informing the Design of Direct-Touch Tabletops. In IEEE Computer Graphics and Applications, 26 (5) pp. 36-46. Available online

2004
 

Wigdor, Daniel and Balakrishnan, Ravin (2004): A comparison of consecutive and concurrent input text entry techniques for mobile phones. In: Dykstra-Erickson, Elizabeth and Tscheligi, Manfred (eds.) Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems April 24-29, 2004, Vienna, Austria. pp. 81-88. Available online

The numeric keypads on mobile phones generally consist of 12 keys (0-9, *, #). Ambiguity arises when the 36-character alphanumeric English character set is mapped onto this smaller number of keys. In this paper, we first present a taxonomy of the various techniques for resolving this ambiguity, dividing them into techniques that use consecutive actions to first select a character grouping and then a character from within that grouping, and those that use concurrent actions to achieve the same end. We then present the design and implementation of a chording approach to text entry that uses concurrent key presses. We conducted a controlled experiment that compared this chording technique to one-handed and two-handed versions of the commonly used MultiTap technique. The results show that the concurrent chording technique significantly outperforms both versions of the consecutive-action MultiTap technique.

© All rights reserved Wigdor and Balakrishnan and/or ACM Press
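
The consecutive/concurrent distinction is easy to make concrete: MultiTap selects a character by repeatedly pressing one key, while a chord resolves the group and the character in a single simultaneous action. Both tables below are illustrative, not the paper's exact layout.

```python
MULTITAP = {"2": "abc", "3": "def"}  # standard keypad letter groups

def multitap(key, presses):
    """Consecutive technique: repeated presses cycle through the group."""
    group = MULTITAP[key]
    return group[(presses - 1) % len(group)]

# Hypothetical chord table: a group key plus one of three selector keys
# picks the character in one concurrent action.
CHORDS = {("2", 1): "a", ("2", 2): "b", ("2", 3): "c",
          ("3", 1): "d", ("3", 2): "e", ("3", 3): "f"}

print(multitap("2", 3))  # -> 'c' after three consecutive presses
print(CHORDS[("2", 3)])  # -> 'c' from one concurrent chord
```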

 

Grossman, Tovi, Wigdor, Daniel and Balakrishnan, Ravin (2004): Multi-finger gestural interaction with 3d volumetric displays. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 61-70. Available online

Volumetric displays provide interesting opportunities and challenges for 3D interaction and visualization, particularly when used in a highly interactive manner. We explore this area through the design and implementation of techniques for interactive direct manipulation of objects with a 3D volumetric display. Motion tracking of the user's fingers provides for direct gestural interaction with the virtual objects, through manipulations on and around the display's hemispheric enclosure. Our techniques leverage the unique features of volumetric displays, including a 360° viewing volume that enables manipulation from any viewpoint around the display, as well as natural and accurate perception of true depth information in the displayed 3D scene. We demonstrate our techniques within a prototype 3D geometric model building application.

© All rights reserved Grossman et al. and/or ACM Press

2003
 

Wigdor, Daniel and Balakrishnan, Ravin (2003): TiltText: using tilt for text input to mobile phones. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology November 2-5, 2003, Vancouver, Canada. pp. 81-90. Available online

TiltText, a new technique for entering text into a mobile phone, is described. The standard 12-button text entry keypad of a mobile phone forces ambiguity when the 26-letter Roman alphabet is mapped in the traditional manner onto keys 2-9. The TiltText technique uses the orientation of the phone to resolve this ambiguity, by tilting the phone in one of four directions to choose which character on a particular key to enter. We first discuss implementation strategies, and then present the results of a controlled experiment comparing TiltText to MultiTap, the most common text entry technique. The experiment included 10 participants who each entered a total of 640 phrases of text chosen from a standard corpus, over a period of about five hours. The results show that text entry speed including correction for errors using TiltText was 23% faster than MultiTap by the end of the experiment, despite a higher error rate for TiltText. TiltText is thus amongst the fastest known language-independent techniques for entering text into mobile phones.

© All rights reserved Wigdor and Balakrishnan and/or ACM Press
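
TiltText's disambiguation rule is compact enough to state as a lookup from a key and a tilt direction to a character. The particular direction-to-letter assignment below is one plausible choice for illustration, not necessarily the mapping used in the paper.

```python
# Assumed convention: tilt left/forward/right picks the key's 1st/2nd/3rd
# letter; pressing without tilting enters the digit itself.
TILT_MAP = {
    ("2", "left"): "a", ("2", "forward"): "b",
    ("2", "right"): "c", ("2", "none"): "2",
    ("3", "left"): "d", ("3", "forward"): "e",
    ("3", "right"): "f", ("3", "none"): "3",
}

def tilt_text(key, tilt):
    """Resolve an ambiguous keypad press using the phone's tilt."""
    return TILT_MAP[(key, tilt)]

print(tilt_text("2", "right"))  # -> 'c'
```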

2002
 

Schraefel, M. C., Zhu, Yuxiang, Modjeska, David, Wigdor, Daniel and Zhao, Shengdong (2002): Hunter gatherer: interaction support for the creation and management of within-web-page collections. In: Proceedings of the 2002 International Conference on the World Wide Web 2002. pp. 172-181. Available online

Hunter Gatherer is an interface that lets Web users carry out three main tasks: (1) collect components from within Web pages; (2) represent those components in a collection; (3) edit those component collections. Our research shows that while the practice of making collections of content from within Web pages is common, it is not frequent, due in large part to poor interaction support in existing tools. We engaged with users in task analysis as well as iterative design reviews in order to understand the interaction issues that are part of within-Web-page collection making, and to design an interaction that would support that process. We report here on that design development, as well as on the evaluations of the tool that evolved from that process, and the future work stemming from these results, in which our critical question is: what happens to users' perceptions and expectations of web-based information (their web-based information management practices) when they can treat this information as harvestable, recontextualizable data, rather than as fixed pages?

© All rights reserved Schraefel et al. and/or ACM Press

 

