Publication statistics

Publication period: 2002-2014
Publication count: 46
Number of co-authors: 40



Co-authors

Number of publications with Chia Shen's 3 most frequent co-authors:

Clifton Forlines: 33
Daniel Wigdor: 16
Edward Tse: 12

 

 

Productive colleagues

Chia Shen's 3 most productive colleagues, by number of publications:

Saul Greenberg: 140
Ravin Balakrishnan: 108
Yvonne Rogers: 99
 
 
 




Chia Shen

Personal Homepage: scholar.harvard.edu/chiashen/publications


 

Publications by Chia Shen (bibliography)

2014

Pliable display technology on a table (3 minutes) (2014)

2012
 

Block, Florian, Wigdor, Daniel, Phillips, Brenda Caldwell, Horn, Michael S. and Shen, Chia (2012): FlowBlocks: a multi-touch UI for crowd interaction. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 497-508. Available online

Multi-touch technology lends itself to collaborative crowd interaction (CI). However, common tap-operated widgets are impractical for CI, since they are susceptible to accidental touches and interference from other users. We present a novel multi-touch interface called FlowBlocks in which every UI action is invoked through a small sequence of user actions: dragging parametric UI-Blocks, and dropping them over operational UI-Docks. The FlowBlocks approach is advantageous for CI because it a) makes accidental touches inconsequential; and b) introduces design parameters for mutual awareness, concurrent input, and conflict management. FlowBlocks was successfully used on the floor of a busy natural history museum. We present the complete design space and describe a year-long iterative design and evaluation process which employed the Rapid Iterative Test and Evaluation (RITE) method in a museum setting.

© All rights reserved Block et al. and/or ACM Press

2010
 

Tse, Edward, Schoning, Johannes, Rogers, Yvonne, Shen, Chia and Morrison, Gerald (2010): Next generation of HCI and education: workshop on UI technologies and educational pedagogy. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 4509-4512. Available online

Given the exponential growth of interactive whiteboards in classrooms around the world, and the recent emergence of multi-touch tables, tangible computing devices and mobile devices, there has been a need to explore how next generation HCI will impact education in the future. Educators are depending on the interaction communities to deliver technologies that will improve/adapt learning to an ever-changing world. In addition to novel UI concepts, the HCI community needs to examine how these concepts can be matched to contemporary paradigms in educational pedagogy. The classroom is a challenging environment for evaluation; thus, new interaction techniques need to be established to prove the value of new HCI interactions in the educational space. This workshop provides a forum to discuss key HCI issues facing next generation education, ranging from whole-class interactive whiteboards and small-group interactive multi-touch tables to individual personal response systems in the classroom.

© All rights reserved Tse et al. and/or their publisher

2009
 

Wigdor, Daniel, Jiang, Hao, Forlines, Clifton, Borkin, Michelle and Shen, Chia (2009): WeSpace: the design development and deployment of a walk-up and share multi-surface visual collaboration system. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1237-1246. Available online

We present WeSpace -- a collaborative work space that integrates a large data wall with a multi-user multi-touch table. WeSpace has been developed for a population of scientists who frequently meet in small groups for data exploration and visualization. It provides a low overhead walk-up and share environment for users with their own personal applications and laptops. We present our year-long effort from initial ethnographic studies, to iterations of design, development and user testing, to the current experiences of these scientists carrying out their collaborative research in the WeSpace. We shed light on the utility, the value of the multi-touch table, the manifestation, usage patterns and the changes in their workflow that WeSpace has brought about.

© All rights reserved Wigdor et al. and/or ACM Press

 

Jiang, Hao, Viel, Alain, Bajaj, Meekal, Lue, Robert A. and Shen, Chia (2009): CThru: exploration in a video-centered information space for educational purposes. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1247-1250. Available online

We present CThru, a self-guided video-based educational environment in a large multi-display setting. We employ a video-centered approach, creating and combining multimedia contents of different formats with a story-telling education video. With the support of new display form factors in the environment, viewing a sequential educational video thread is replaced by the immersive learning experience of hands-on exploration and manipulation in a multi-dimensional information space. We demonstrate CThru with an animation clip in cellular biology, supplementing visible objects in the video with rich domain-specific multimedia information and interactive 3D models. We describe CThru's design rationale and implementation. We also discuss a pilot study and what it revealed with respect to CThru's interface and the usage pattern of the tabletop and the associated large wall display.

© All rights reserved Jiang et al. and/or ACM Press

2008
 

Jiang, Hao, Wigdor, Daniel, Forlines, Clifton, Borkin, Michelle, Kauffmann, Jens and Shen, Chia (2008): LivOlay: interactive ad-hoc registration and overlapping of applications for collaborative visual exploration. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1357-1360. Available online

The interoperability of disparate data types and sources has been a long standing problem and a hindering factor for the efficacy and efficiency in visual exploration applications. In this paper, we present a solution, called LivOlay, that enables the rapid visual overlay of live data rendered in different applications. Our tool addresses datasets in which visual registration of the information is necessary in order to allow for thorough understanding and visual analysis. We also discuss initial evaluation and user feedback of LivOlay.

© All rights reserved Jiang et al. and/or ACM Press

 

Tse, Edward, Greenberg, Saul, Shen, Chia, Forlines, Clifton and Kodama, Ryo (2008): Exploring true multi-user multimodal interaction over a digital table. In: Proceedings of DIS08 Designing Interactive Systems 2008. pp. 109-118. Available online

True multi-user, multimodal interaction over a digital table lets co-located people simultaneously gesture and speak commands to control an application. We explore this design space through a case study, where we implemented an application that supports the KJ creativity method as used by industrial designers. Four key design issues emerged that have a significant impact on how people would use such a multi-user multimodal system. First, parallel work is affected by the design of multimodal commands. Second, individual mode switches can be confusing to collaborators, especially if speech commands are used. Third, establishing personal and group territories can hinder particular tasks that require artefact neutrality. Finally, timing needs to be considered when designing joint multimodal commands. We also describe our model view controller architecture for true multi-user multimodal interaction.

© All rights reserved Tse et al. and/or ACM Press

 

Jiang, Hao, Wigdor, Daniel, Forlines, Clifton and Shen, Chia (2008): System design for the WeSpace: Linking personal devices to a table-centered multi-user, multi-surface environment. In: Third IEEE International Workshop on Tabletops and Interactive Surfaces Tabletop 2008 October 1-3, 2008, Amsterdam, The Netherlands. pp. 97-104. Available online

 

Brandl, Peter, Forlines, Clifton, Wigdor, Daniel, Haller, Michael and Shen, Chia (2008): Combining and measuring the benefits of bimanual pen and direct-touch interaction on horizontal interfaces. In: Levialdi, Stefano (ed.) AVI 2008 - Proceedings of the working conference on Advanced Visual Interfaces May 28-30, 2008, Napoli, Italy. pp. 154-161. Available online

2007
 

Tse, Edward, Shen, Chia, Greenberg, Saul and Forlines, Clifton (2007): How pairs interact over a multimodal digital table. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 215-218. Available online

Co-located collaborators often work over physical tabletops using combinations of expressive hand gestures and verbal utterances. This paper provides the first observations of how pairs of people communicated and interacted in a multimodal digital table environment built atop existing single user applications. We contribute to the understanding of these environments in two ways. First, we saw that speech and gesture commands served double duty as both commands to the computer, and as implicit communication to others. Second, in spite of limitations imposed by the underlying single-user application, people were able to work together simultaneously, and they performed interleaving acts: the graceful mixing of inter-person speech and gesture actions as commands to the system. This work contributes to the intricate understanding of multi-user multimodal digital table interaction.

© All rights reserved Tse et al. and/or ACM Press

 

Wigdor, Daniel, Shen, Chia, Forlines, Clifton and Balakrishnan, Ravin (2007): Perception of elementary graphical elements in tabletop and multi-surface environments. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 473-482. Available online

Information shown on a tabletop display can appear distorted when viewed by a seated user. Even worse, the impact of this distortion is different depending on the location of the information on the display. In this paper, we examine how this distortion affects the perception of the basic graphical elements of information visualization shown on displays at various angles. We first examine perception of these elements on a single display, and then compare this to perception across displays, in order to evaluate the effectiveness of various elements for use in a tabletop and multi-display environment. We found that the perception of some graphical elements is more robust to distortion than others. We then develop recommendations for building data visualizations for these environments.

© All rights reserved Wigdor et al. and/or ACM Press

 

Forlines, Clifton, Wigdor, Daniel, Shen, Chia and Balakrishnan, Ravin (2007): Direct-touch vs. mouse input for tabletop displays. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 647-656. Available online

We investigate the differences -- in terms of both quantitative performance and subjective preference -- between direct-touch and mouse input for unimanual and bimanual tasks on tabletop displays. The results of two experiments show that for bimanual tasks performed on tabletops, users benefit from direct-touch input. However, our results also indicate that mouse input may be more appropriate for a single user working on tabletop tasks requiring only single-point interaction.

© All rights reserved Forlines et al. and/or ACM Press

 

Wigdor, Daniel, Forlines, Clifton, Baudisch, Patrick, Barnwell, John and Shen, Chia (2007): Lucid touch: a see-through mobile device. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 269-278. Available online

Touch is a compelling input modality for interactive devices; however, touch input on the small screen of a mobile device is problematic because a user's fingers occlude the graphical elements he wishes to work with. In this paper, we present LucidTouch, a mobile device that addresses this limitation by allowing the user to control the application by touching the back of the device. The key to making this usable is what we call pseudo-transparency: by overlaying an image of the user's hands onto the screen, we create the illusion of the mobile device itself being semi-transparent. This pseudo-transparency allows users to accurately acquire targets while not occluding the screen with their fingers and hand. Lucid Touch also supports multi-touch input, allowing users to operate the device simultaneously with all 10 fingers. We present initial study results that indicate that many users found touching on the back to be preferable to touching on the front, due to reduced occlusion, higher precision, and the ability to make multi-finger input.

© All rights reserved Wigdor et al. and/or ACM Press

 

Tse, Edward, Shen, Chia, Barnwell, John, Shipman, Sam, Leigh, Darren and Greenberg, Saul (2007): Multimodal Split View Tabletop Interaction Over Existing Applications. In: Second IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2007 October 10-12, 2007, Newport, Rhode Island, USA. pp. 129-136. Available online

 

Wigdor, Daniel, Penn, Gerald, Ryall, Kathy, Esenther, Alan and Shen, Chia (2007): Living with a Tabletop: Analysis and Observations of Long Term Office Use of a Multi-Touch Table. In: Second IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2007 October 10-12, 2007, Newport, Rhode Island, USA. pp. 60-67. Available online

 

Shen, Chia (2007): From Clicks to Touches: Enabling Face-to-Face Shared Social Interface on Multi-touch Tabletops. In: Schuler, Douglas (ed.) OCSC 2007 - Online Communities and Social Computing - Second International Conference July 22-27, 2007, Beijing, China. pp. 169-175. Available online

 

Tse, Edward, Greenberg, Saul, Shen, Chia and Forlines, Clifton (2007): Multimodal multiplayer tabletop gaming. In Computers in Entertainment, 5 (2) . Available online

2006
 

Wigdor, Daniel, Shen, Chia, Forlines, Clifton and Balakrishnan, Ravin (2006): Effects of display position and control space orientation on user preference and performance. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 309-318. Available online

In many environments, it is often the case that input is made to displays that are positioned non-traditionally relative to one or more users. This typically requires users to perform interaction tasks under transformed input-display spatial mappings, and the literature is unclear as to how such transformations affect performance. We present two experiments that explore the impact of display space position and input control space orientation on user's subjective preference and objective performance in a docking task. Our results provide guidelines as to optimal display placement and control orientation in collaborative computing environments with one or more shared displays.

© All rights reserved Wigdor et al. and/or ACM Press

 

Forlines, Clifton, Shen, Chia, Wigdor, Daniel and Balakrishnan, Ravin (2006): Exploring the effects of group size and display configuration on visual search. In: Proceedings of ACM CSCW06 Conference on Computer-Supported Cooperative Work 2006. pp. 11-20. Available online

Visual search is the subject of countless psychology studies in which people search for target items within a scene. The bulk of this literature focuses on the individual with the goal of understanding the human perceptual system. In life, visual search is performed not only by individuals, but also by groups -- a team of doctors may study an x-ray and a team of analysts may study a satellite photograph. In this paper, we examine the issues one should consider when searching as a group. We present the details of an experiment designed to investigate the impact of group size on visual search performance, and how different display configurations affected that performance. We asked individuals, pairs, and groups of four people to participate in a baggage screening task in which these teams searched simulated x-rays for prohibited items. Teams conducted these searches on single monitors, a row of four monitors, and on a single horizontal display. Our findings suggest that groups commit far fewer errors in visual search tasks, although they may perform slower than individuals under certain conditions. The interaction between group size and display configuration turned out to be an important factor as well.

© All rights reserved Forlines et al. and/or ACM Press

 

Tse, Edward, Greenberg, Saul and Shen, Chia (2006): GSI Demo: Multiuser Gesture / Speech Interaction over Digital Tables by Wrapping Single User Applications. In: Quek, Francis and Yang, Jie (eds.) Proceedings of the International Conference on Multimodal Interfaces November 2-4, 2006, Banff, Canada. pp. 76-83. Available online


© All rights reserved Tse et al. and/or ACM Press

 

Tse, Edward, Greenberg, Saul, Shen, Chia and Forlines, Clifton (2006): Multimodal Multiplayer Tabletop Gaming. In: Proceedings Third International Workshop on Pervasive Gaming Applications (PerGames06), in conjunction with 4th Intl. Conference on Pervasive Computing 2006. pp. 139-148. Available online

There is a large disparity between the rich physical interfaces of co-located arcade games and the generic input devices seen in most home console systems. In this paper we argue that a digital table is a conducive form factor for general co-located home gaming as it affords: (a) seating in collaboratively relevant positions that give all equal opportunity to reach into the surface and share a common view, (b) rich whole handed gesture input normally only seen when handling physical objects, (c) the ability to monitor how others use space and access objects on the surface, and (d) the ability to communicate to each other and interact atop the surface via gestures and verbal utterances. Our thesis is that multimodal gesture and speech input benefits collaborative interaction over such a digital table. To investigate this thesis, we designed a multimodal, multiplayer gaming environment that allows players to interact directly atop a digital table via speech and rich whole hand gestures. We transform two commercial single player computer games, representing a strategy and simulation game genre, to work within this setting.

© All rights reserved Tse et al. and/or their publisher

 

Tse, Edward, Shen, Chia, Greenberg, Saul and Forlines, Clifton (2006): Enabling Interaction with Single User Applications through Speech and Gestures on a Multi-User Tabletop. In: Proceedings of Advanced Visual Interfaces (AVI06) May 23-26, 2006, Venezia, Italy. pp. 336-343. Available online


© All rights reserved Tse et al. and/or ACM Press

 

Wigdor, Daniel, Leigh, Darren, Forlines, Clifton, Shipman, Samuel, Barnwell, John, Balakrishnan, Ravin and Shen, Chia (2006): Under the table interaction. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 259-268. Available online

We explore the design space of a two-sided interactive touch table, designed to receive touch input from both the top and bottom surfaces of the table. By combining two registered touch surfaces, we are able to offer a new dimension of input for co-located collaborative groupware. This design accomplishes the goal of increasing the relative size of the input area of a touch table while maintaining its direct-touch input paradigm. We describe the interaction properties of this two-sided touch table, report the results of a controlled experiment examining the precision of user touches to the underside of the table, and a series of application scenarios we developed for use on inverted and two-sided tables. Finally, we present a list of design recommendations based on our experiences and observations with inverted and two-sided tables.

© All rights reserved Wigdor et al. and/or ACM Press

 

Forlines, Clifton, Esenther, Alan, Shen, Chia, Wigdor, Daniel and Ryall, Kathy (2006): Multi-user, multi-display interaction with a single-user, single-display geospatial application. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 273-276. Available online

In this paper, we discuss our adaptation of a single-display, single-user commercial application for use in a multi-device, multi-user environment. We wrap Google Earth, a popular geospatial application, in a manner that allows for synchronized coordinated views among multiple instances running on different machines in the same co-located environment. The environment includes a touch-sensitive tabletop display, three vertical wall displays, and a TabletPC. A set of interaction techniques that allow a group to manage and exploit this collection of devices is presented.

© All rights reserved Forlines et al. and/or ACM Press

 

Everitt, Katherine, Shen, Chia, Ryall, Kathy and Forlines, Clifton (2006): MultiSpace: Enabling Electronic Document Micro-mobility in Table-Centric, Multi-Device Environments. In: First IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2006 5-7 January, 2006, Adelaide, Australia. pp. 27-34. Available online

 

Hancock, Mark S., Carpendale, M. Sheelagh T., Vernier, Frederic, Wigdor, Daniel and Shen, Chia (2006): Rotation and Translation Mechanisms for Tabletop Interaction. In: First IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2006 5-7 January, 2006, Adelaide, Australia. pp. 79-88. Available online

 

Ryall, Kathy, Forlines, Clifton, Shen, Chia, Morris, Meredith Ringel and Everitt, Katherine (2006): Experiences with and Observations of Direct-Touch Tabletops. In: First IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2006 5-7 January, 2006, Adelaide, Australia. pp. 89-96. Available online

 

Shen, Chia (2006): Multi-User Interface and Interactions on Direct-Touch Horizontal Surfaces: Collaborative Tabletop Research at MERL. In: First IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2006 5-7 January, 2006, Adelaide, Australia. pp. 53-54. Available online

 

Wu, Mike, Shen, Chia, Ryall, Kathy, Forlines, Clifton and Balakrishnan, Ravin (2006): Gesture Registration, Relaxation, and Reuse for Multi-Point Direct-Touch Surfaces. In: First IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2006 5-7 January, 2006, Adelaide, Australia. pp. 185-192. Available online

 

Tse, Edward, Greenberg, Saul and Shen, Chia (2006): GSI demo: multiuser gesture/speech interaction over digital tables by wrapping single user applications. In: Quek, Francis K. H., Yang, Jie, Massaro, Dominic W., Alwan, Abeer A. and Hazen, Timothy J. (eds.) Proceedings of the 8th International Conference on Multimodal Interfaces - ICMI 2006 November 2-4, 2006, Banff, Alberta, Canada. pp. 76-83. Available online

 

Tse, Edward, Shen, Chia, Greenberg, Saul and Forlines, Clifton (2006): Enabling interaction with single user applications through speech and gestures on a multi-user tabletop. In: Celentano, Augusto (ed.) AVI 2006 - Proceedings of the working conference on Advanced visual interfaces May 23-26, 2006, Venezia, Italy. pp. 336-343. Available online

 

Wigdor, Daniel, Shen, Chia, Forlines, Clifton and Balakrishnan, Ravin (2006): Table-centric interactive spaces for real-time collaboration. In: Celentano, Augusto (ed.) AVI 2006 - Proceedings of the working conference on Advanced visual interfaces May 23-26, 2006, Venezia, Italy. pp. 103-107. Available online

 

Tse, Edward, Greenberg, Saul and Shen, Chia (2006): GSI demo: multiuser gesture/speech interaction over digital tables by wrapping single user applications. In: Proceedings of the 2006 International Conference on Multimodal Interfaces 2006. pp. 76-83. Available online

Most commercial software applications are designed for a single user using a keyboard/mouse over an upright monitor. Our interest is exploiting these systems so they work over a digital table. Mirroring what people do when working over traditional tables, we want to allow multiple people to interact naturally with the tabletop application and with each other via rich speech and hand gestures. In previous papers, we illustrated multi-user gesture and speech interaction on a digital table for geospatial applications -- Google Earth, Warcraft III and The Sims. In this paper, we describe our underlying architecture: GSI Demo. First, GSI Demo creates a run-time wrapper around existing single user applications: it accepts and translates speech and gestures from multiple people into a single stream of keyboard and mouse inputs recognized by the application. Second, it lets people use multimodal demonstration -- instead of programming -- to quickly map their own speech and gestures to these keyboard/mouse inputs. For example, continuous gestures are trained by saying "Computer, when I do [one finger gesture], you do [mouse drag]". Similarly, discrete speech commands can be trained by saying "Computer, when I say [layer bars], you do [keyboard and mouse macro]". The end result is that end users can rapidly transform single user commercial applications into a multi-user, multimodal digital tabletop system.

© All rights reserved Tse et al. and/or their publisher

 

Ryall, Kathy, Esenther, Alan, Forlines, Clifton, Shen, Chia, Shipman, Sam, Morris, Meredith Ringel, Everitt, Katherine and Vernier, Frederic (2006): Identity-Differentiating Widgets for Multiuser Interactive Surfaces. In IEEE Computer Graphics and Applications, 26 (5) pp. 56-64. Available online

 

Shen, Chia, Ryall, Kathy, Forlines, Clifton, Esenther, Alan, Vernier, Frederic, Everitt, Katherine, Wu, Mike, Wigdor, Daniel, Morris, Meredith Ringel, Hancock, Mark S. and Tse, Edward (2006): Informing the Design of Direct-Touch Tabletops. In IEEE Computer Graphics and Applications, 26 (5) pp. 36-46. Available online

2005
 

Forlines, Clifton and Shen, Chia (2005): DTLens: multi-user tabletop spatial data exploration. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology 2005. pp. 119-122. Available online

Supporting groups of individuals exploring large maps and design diagrams on interactive tabletops is still an open research problem. Today's geospatial, mechanical engineering and CAD design applications are mostly single-user, keyboard and mouse-based desktop applications. In this paper, we present the design of and experience with DTLens, a new zoom-in-context, multi-user, two-handed, multi-lens interaction technique that enables group exploration of spatial data with multiple individual lenses on the same direct-touch interactive tabletop. DTLens provides a set of consistent interactions on lens operations, thus minimizing tool switching by users during spatial data exploration.

© All rights reserved Forlines and Shen and/or ACM Press

Cited in the following chapter: Bifocal Display [/encyclopedia/bifocal_display.html]


 


 
 

Hancock, Mark S., Shen, Chia, Forlines, Clifton and Ryall, Kathy (2005): Exploring non-speech auditory feedback at an interactive multi-user tabletop. In: Graphics Interface 2005 May 9-11, 2005, Victoria, British Columbia, Canada. pp. 41-50. Available online

We present two experiments on the use of non-speech audio at an interactive multi-touch, multi-user tabletop display. We first investigate the use of two categories of reactive auditory feedback: affirmative sounds that confirm user actions and negative sounds that indicate errors. Our results show that affirmative auditory feedback may improve one's awareness of group activity at the expense of one's awareness of his or her own activity. Negative auditory feedback may also improve group awareness, but simultaneously increase the perception of errors for both the group and the individual. In our second experiment, we compare two methods of associating sounds to individuals in a co-located environment. Specifically, we compare localized sound, where each user has his or her own speaker, to coded sound, where users share one speaker, but the waveform of the sounds are varied so that a different sound is played for each user. Results of this experiment reinforce the presence of tension between group awareness and individual focus found in the first experiment. User feedback suggests that users are more easily able to identify who caused a sound when either localized or coded sound is used, but that they are also more able to focus on their individual work. Our experiments show that, in general, auditory feedback can be used in co-located collaborative applications to support either individual work or group awareness, but not both simultaneously, depending on how it is presented.

© All rights reserved Hancock et al. and/or their publisher

 

Everitt, Katherine, Shen, Chia, Ryall, Kathy and Forlines, Clifton (2005): Modal spaces: spatial multiplexing to mediate direct-touch input on large displays. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1359-1362. Available online

We present a new interaction technique for large direct-touch displays called Modal Spaces. Modal interfaces require the user to keep track of the state of the system. The Modal Spaces technique adds screen location as an additional parameter of the interaction. Each modal region on the display supports a particular set of input actions and the visual background indicates the space's use. This "workbench approach" exploits the larger form factor of display. Our spatial multiplexing of the display supports a document-centric paradigm (as opposed to application-centric), enabling input gesture reuse, while complementing and enhancing the current existing practices of modal interfaces. We present a proof-of-concept system and discuss potential applications, design issues, and future research directions.

© All rights reserved Everitt et al. and/or ACM Press

 

Forlines, Clifton and Shen, Chia (2005): Glimpse: a novel input model for multi-level devices. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1375-1378. Available online

We describe a technique that supports the previewing of navigation, exploration, and editing operations by providing convenient Undo for unsuccessful and/or undesirable actions on multi-level input devices such as touch screens and pen-based computers. By adding a Glimpse state to traditional three-state pressure sensitive input devices, users are able to preview the effects of their editing without committing to them. From this Glimpse state, users can undo their action as easily as they can commit to it, making Glimpse most appropriate for systems in which the user is likely to try out many variations of an edit before finding the right one. Exploration is encouraged as the cumbersome returning to a menu or keyboard to issue an Undo command is eliminated. Glimpse has the added benefits that the negative effects of inconsistencies in the Undo feature within an application are reduced.

© All rights reserved Forlines and Shen and/or ACM Press

 

Shen, Chia, Hancock, Mark S., Forlines, Clifton and Vernier, Frederic D. (2005): CoR²Ds. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1781-1784. Available online

We present a new popup widget, called CoR²Ds (Context-Rooted Rotatable Draggables), designed for multi-user direct-touch tabletop environments. CoR²Ds are interactive callout popup objects that are visually connected (rooted) at the originating displayed object by a semi-transparent colored swath. CoR²Ds can be used to bring out menus, display drilled-down or off-screen ancillary data such as metadata and attributes, as well as instantiate tools. CoR²Ds can be freely moved, rotated, and re-oriented on a tabletop display surface by fingers, hands, pointing devices (mice) or marking devices (such as a stylus or light pen). CoR²Ds address five issues for interaction techniques on interactive tabletop display surfaces: occlusion, reach, context on a cluttered display, readability, and concurrent/coordinated multi-user interaction. In this paper, we present the design, interaction and implementation of CoR²Ds. We also discuss a set of current usage scenarios.

© All rights reserved Shen et al. and/or ACM Press

2004
 

Shen, Chia, Vernier, Frederic D., Forlines, Clifton and Ringel, Meredith (2004): DiamondSpin: an extensible toolkit for around-the-table interaction. In: Dykstra-Erickson, Elizabeth and Tscheligi, Manfred (eds.) Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems April 24-29, 2004, Vienna, Austria. pp. 167-174. Available online

DiamondSpin is a toolkit for the efficient prototyping of and experimentation with multi-person, concurrent interfaces for interactive shared displays. In this paper, we identify the fundamental functionality that tabletop user interfaces should embody, then present the toolkit's architecture and API. DiamondSpin provides a novel real-time polar to Cartesian transformation engine that has enabled new, around-the-table interaction metaphors to be implemented. DiamondSpin allows arbitrary document positioning and orientation on a tabletop surface. Polygonal tabletop layouts such as rectangular, octagonal, and circular tabletops can easily be constructed. DiamondSpin also supports multiple work areas within the same digital tabletop. Multi-user operations are offered through multi-threaded input event streams, multiple active objects, and multiple concurrent menus. We also discuss insights on tabletop interaction issues we have observed from a set of applications built with DiamondSpin.

© All rights reserved Shen et al. and/or ACM Press

 

Morris, Meredith Ringel, Ryall, Kathy, Shen, Chia, Forlines, Clifton and Vernier, Frederic (2004): Beyond "social protocols": multi-user coordination policies for co-located groupware. In: Proceedings of ACM CSCW04 Conference on Computer-Supported Cooperative Work 2004. pp. 262-265. Available online

The status quo for co-located groupware is to assume that "social protocols" (standards of polite behavior) are sufficient to coordinate the actions of a group of users; however, prior studies of groupware use as well as our own observations of groups using a shared tabletop display suggest potential for improving groupware interfaces by incorporating coordination policies - direct manipulation mechanisms for avoiding and resolving conflicts. We discuss our observations of group tabletop usage and present our coordination framework. We conclude with example usage scenarios and discuss future research suggested by this framework.

© All rights reserved Morris et al. and/or ACM Press

 

Ryall, Kathy, Forlines, Clifton, Shen, Chia and Morris, Meredith Ringel (2004): Exploring the effects of group size and table size on interactions with tabletop shared-display groupware. In: Proceedings of ACM CSCW04 Conference on Computer-Supported Cooperative Work 2004. pp. 284-293. Available online

Interactive tabletops have been previously proposed and studied in the domain of co-located group applications. However, little fundamental research has been done to explore the issue of size. In this paper we identify a number of size considerations for tabletop design, and present an experiment to explore some of these issues, in particular the effects of group size and table size on the speed at which the task was performed, the distribution of work among group members, issues of shared resources, and user preference for table size. Our findings shed light on (1) how work strategies are affected by group size, (2) how social interaction varies with respect to table size, and (3) how the speed of task performance is influenced by group size but not by table size. In addition, our experiments revealed that for larger groups, designers might need to add additional vertical displays for shared information. This finding opens the door for extending single-display groupware to shared-display groupware settings that involve multiple, shared displays.

© All rights reserved Ryall et al. and/or ACM Press

2003
 

Shen, Chia, Lesh, Neal and Vernier, Frederic D. (2003): Personal digital historian: story sharing around the table. In Interactions, 10 (2) pp. 15-22.

 

Shen, Chia, Everitt, Katherine and Ryall, Kathleen (2003): UbiTable: Impromptu Face-to-Face Collaboration on Horizontal Interactive Surfaces. In: Dey, Anind K., Schmidt, Albrecht and McCarthy, Joseph F. (eds.) UbiComp 2003 Ubiquitous Computing - 5th International Conference October 12-15, 2003, Seattle, WA, USA. pp. 281-288. Available online

2002
 

Shen, Chia, Lesh, Neal, Vernier, Frederic D., Forlines, Clifton and Frost, Jeana (2002): Sharing and building digital group histories. In: Churchill, Elizabeth F., McCarthy, Joe, Neuwirth, Christine and Rodden, Tom (eds.) Proceedings of the 2002 ACM conference on Computer supported cooperative work November 16 - 20, 2002, New Orleans, Louisiana, USA. pp. 324-333. Available online

Organizations, families, institutions evolve a shared culture and history. In this work, we describe a system to facilitate conversation and storytelling about this collective past. Users explore digital archives of shared materials such as photographs, video, and text documents on a tabletop interface. Both the software and the interface encourage natural conversation and reflection. This work is an application of our ongoing research on systems for multiple, co-present users to explore digital collections. In this paper, we present a case study of our own group history along with the software extensions developed for this scenario. These extensions include methods for easily branching off from and returning to previous threads of the exploration, incorporating background contexts that support a variety of view points and flexible story sharing, and supporting the active and passive discovery of relevant information.

© All rights reserved Shen et al. and/or ACM Press

 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/chia_shen.html