Publication statistics

Pub. period: 2003-2011
Pub. count: 20
Number of co-authors: 28



Co-authors

Number of publications with Edward Tse's 3 most frequent co-authors:

Saul Greenberg: 16
Chia Shen: 12
Clifton Forlines: 7

 

 

Productive colleagues

Edward Tse's 3 most productive colleagues, by number of publications:

Saul Greenberg: 140
Scott E. Hudson: 113
Yvonne Rogers: 99
 
 
 


Edward Tse

Picture of Edward Tse.
Personal Homepage:
http://www.EdwardTse.com

PhD Topic - Multimodal Co-located Interaction

Edward Tse's research focuses on supporting people's natural interactions over digital surfaces such as large tables and wall displays. Application areas include tabletop gaming, military command and control, air traffic control, and hospital emergency rooms. A brief video of his work is available at www.EdwardTse.com. Interaction on a digital table supports face-to-face collaboration with the added benefits of digital displays (e.g., real-time updates, access to the Internet, and rich satellite imagery). Core features of his work include rich whole-handed bimanual gestures, combined speech and gesture input, and multi-user interaction.

 

Publications by Edward Tse (bibliography)

2011
 

Tse, Edward, Schöning, Johannes, Huber, Jochen, Marentette, Lynn, Beckwith, Richard, Rogers, Yvonne and Mühlhäuser, Max (2011): Child computer interaction: workshop on UI technologies and educational pedagogy. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2445-2448. Available online

Given the growth of Child Computer Interaction research, next generation HCI technologies play an important role in the future of education. Educators rely on technology to improve and adapt learning to the pedagogical needs of learners. Hence, this community needs to understand how current technology concepts match with current pedagogical paradigms. The classroom is a high stakes environment for experimentation, thus new interaction techniques need to be validated to prove their pedagogical value in the educational setting. This workshop provides a forum to discuss key HCI issues facing next generation education. With a particular focus on child computer interaction, these issues comprise inter alia the interaction with whole class interactive whiteboards, small group interactive multi-touch tables, and individual personal response systems (e.g. mobile devices) in the classroom.

© All rights reserved Tse et al. and/or their publisher

2010
 

Tse, Edward, Schöning, Johannes, Rogers, Yvonne, Shen, Chia and Morrison, Gerald (2010): Next generation of HCI and education: workshop on UI technologies and educational pedagogy. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 4509-4512. Available online

Given the exponential growth of interactive whiteboards in classrooms around the world, and the recent emergence of multi-touch tables, tangible computing devices and mobile devices, there has been a need to explore how next generation HCI will impact education in the future. Educators are depending on the interaction communities to deliver technologies that will improve/adapt learning to an ever-changing world. In addition to novel UI concepts, the HCI community needs to examine how these concepts can be matched to contemporary paradigms in Educational pedagogy. The classroom is a challenging environment for evaluation, thus new interaction techniques need to be established to prove the value of new HCI interactions in the educational space. This workshop provides a forum to discuss key HCI issues facing next generation education ranging from whole class interactive whiteboards, small group interactive multi-touch tables, and individual personal response systems in the classroom.

© All rights reserved Tse et al. and/or their publisher

2008
 

Tse, Edward, Greenberg, Saul, Shen, Chia, Forlines, Clifton and Kodama, Ryo (2008): Exploring true multi-user multimodal interaction over a digital table. In: Proceedings of DIS08 Designing Interactive Systems 2008. pp. 109-118. Available online

True multi-user, multimodal interaction over a digital table lets co-located people simultaneously gesture and speak commands to control an application. We explore this design space through a case study, where we implemented an application that supports the KJ creativity method as used by industrial designers. Four key design issues emerged that have a significant impact on how people would use such a multi-user multimodal system. First, parallel work is affected by the design of multimodal commands. Second, individual mode switches can be confusing to collaborators, especially if speech commands are used. Third, establishing personal and group territories can hinder particular tasks that require artefact neutrality. Finally, timing needs to be considered when designing joint multimodal commands. We also describe our model view controller architecture for true multi-user multimodal interaction.

© All rights reserved Tse et al. and/or ACM Press
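
The abstract above mentions a model-view-controller architecture for true multi-user multimodal interaction but does not spell it out. The Python sketch below is only a rough illustration under assumed names (SharedModel, UserController, TableView and the "create_note" command are inventions for this example, not the paper's API): a single shared model serialises commands arriving from per-user controllers, each of which fuses one person's speech and gesture input, and every change is pushed to the shared tabletop view.

# Hypothetical sketch of a multi-user MVC arrangement for a multimodal tabletop.
# All names are illustrative assumptions; they are not taken from the paper.

class SharedModel:
    """Single shared application state, e.g. KJ-method notes on the table."""
    def __init__(self):
        self.notes = {}
        self.views = []

    def apply(self, user_id, command, **args):
        # Every mutation funnels through one place, so simultaneous users stay consistent.
        if command == "create_note":
            self.notes[args["note_id"]] = {"text": args["text"], "pos": args["pos"]}
        elif command == "move_note":
            self.notes[args["note_id"]]["pos"] = args["pos"]
        for view in self.views:
            view.refresh(self.notes)

class UserController:
    """One controller per person fuses that person's speech and gesture events."""
    def __init__(self, user_id, model):
        self.user_id, self.model = user_id, model

    def on_multimodal(self, speech, gesture):
        # e.g. saying "create note usability" while tapping a spot on the table
        if speech.startswith("create note") and gesture["type"] == "tap":
            self.model.apply(self.user_id, "create_note",
                             note_id=f"{self.user_id}-{gesture['time']}",
                             text=speech[len("create note"):].strip(),
                             pos=gesture["pos"])

class TableView:
    def refresh(self, notes):
        print(f"render {len(notes)} notes on the shared table surface")

model = SharedModel()
model.views.append(TableView())
UserController("alice", model).on_multimodal(
    "create note affinity groups", {"type": "tap", "time": 1, "pos": (200, 150)})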

 

Lee, Johnny C., Hudson, Scott E. and Tse, Edward (2008): Foldable interactive displays. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 287-290. Available online

2007
 

Tse, Edward, Shen, Chia, Greenberg, Saul and Forlines, Clifton (2007): How pairs interact over a multimodal digital table. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 215-218. Available online

Co-located collaborators often work over physical tabletops using combinations of expressive hand gestures and verbal utterances. This paper provides the first observations of how pairs of people communicated and interacted in a multimodal digital table environment built atop existing single user applications. We contribute to the understanding of these environments in two ways. First, we saw that speech and gesture commands served double duty as both commands to the computer, and as implicit communication to others. Second, in spite of limitations imposed by the underlying single-user application, people were able to work together simultaneously, and they performed interleaving acts: the graceful mixing of inter-person speech and gesture actions as commands to the system. This work contributes to the intricate understanding of multi-user multimodal digital table interaction.

© All rights reserved Tse et al. and/or ACM Press

 

Tse, Edward, Shen, Chia, Barnwell, John, Shipman, Sam, Leigh, Darren and Greenberg, Saul (2007): Multimodal Split View Tabletop Interaction Over Existing Applications. In: Second IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2007 October 10-12, 2007, Newport, Rhode Island, USA. pp. 129-136. Available online

 

Tse, Edward, Hancock, Mark S. and Greenberg, Saul (2007): Speech-filtered bubble ray: improving target acquisition on display walls. In: Massaro, Dominic W., Takeda, Kazuya, Roy, Deb and Potamianos, Alexandros (eds.) Proceedings of the 9th International Conference on Multimodal Interfaces - ICMI 2007 November 12-15, 2007, Nagoya, Aichi, Japan. pp. 307-314. Available online

 

Tse, Edward, Greenberg, Saul, Shen, Chia and Forlines, Clifton (2007): Multimodal multiplayer tabletop gaming. In Computers in Entertainment, 5 (2). Available online

 

Tse, Edward, Hancock, Mark and Greenberg, Saul (2007): Speech-filtered bubble ray: improving target acquisition on display walls. In: Proceedings of the 2007 International Conference on Multimodal Interfaces 2007. pp. 307-314. Available online

The rapid development of large interactive wall displays has been accompanied by research on methods that allow people to interact with the display at a distance. The basic method for target acquisition is by ray casting a cursor from one's pointing finger or hand position; the problem is that selection is slow and error-prone with small targets. A better method is the bubble cursor that resizes the cursor's activation area to effectively enlarge the target size. The catch is that this technique's effectiveness depends on the proximity of surrounding targets: while beneficial in sparse spaces, it is less so when targets are densely packed together. Our method is the speech-filtered bubble ray that uses speech to transform a dense target space into a sparse one. Our strategy builds on what people already do: people pointing to distant objects in a physical workspace typically disambiguate their choice through speech. For example, a person could point to a stack of books and say "the green one". Gesture indicates the approximate location for the search, and speech 'filters' unrelated books from the search. Our technique works the same way; a person specifies a property of the desired object, and only the location of objects matching that property trigger the bubble size. In a controlled evaluation, people were faster and preferred using the speech-filtered bubble ray over the standard bubble ray and ray casting approach.

© All rights reserved Tse et al. and/or their publisher
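
The selection logic described above can be pictured with a short sketch. The code below is a hypothetical illustration, not the authors' implementation: spoken words filter the target set down to items matching a named property, and the nearest remaining target to the ray-cast point is selected, which is what makes a dense space behave like a sparse one for the bubble cursor.

import math

# Hypothetical sketch of speech-filtered target acquisition.
# The target list, the "color" property and the picking rule are assumptions
# for illustration; they do not come from the paper's implementation.

targets = [
    {"id": "book-1", "color": "green", "pos": (120, 340)},
    {"id": "book-2", "color": "red",   "pos": (125, 338)},
    {"id": "book-3", "color": "green", "pos": (600, 90)},
]

def speech_filtered_pick(ray_point, spoken_color, targets):
    """Keep only targets matching the spoken property, then snap to the one
    closest to the ray-cast point (the bubble expands over the sparse set)."""
    candidates = [t for t in targets if t["color"] == spoken_color]
    if not candidates:
        return None
    return min(candidates, key=lambda t: math.dist(ray_point, t["pos"]))

# Pointing roughly at the dense stack while saying "the green one":
print(speech_filtered_pick((123, 341), "green", targets)["id"])   # -> book-1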

2006
 

Tse, Edward, Greenberg, Saul and Shen, Chia (2006): GSI Demo: Multiuser Gesture / Speech Interaction over Digital Tables by Wrapping Single User Applications. In: Quek, Francis and Yang, Jie (eds.) Proceedings of the International Conference on Multimodal Interfaces November 2-4, 2006, Banff, Canada. pp. 76-83. Available online


 

Tse, Edward, Greenberg, Saul, Shen, Chia and Forlines, Clifton (2006): Multimodal Multiplayer Tabletop Gaming. In: Proceedings Third International Workshop on Pervasive Gaming Applications (PerGames06), in conjunction with 4th Intl. Conference on Pervasive Computing 2006. pp. 139-148. Available online

There is a large disparity between the rich physical interfaces of co-located arcade games and the generic input devices seen in most home console systems. In this paper we argue that a digital table is a conducive form factor for general co-located home gaming as it affords: (a) seating in collaboratively relevant positions that give all equal opportunity to reach into the surface and share a common view, (b) rich whole handed gesture input normally only seen when handling physical objects, (c) the ability to monitor how others use space and access objects on the surface, and (d) the ability to communicate to each other and interact atop the surface via gestures and verbal utterances. Our thesis is that multimodal gesture and speech input benefits collaborative interaction over such a digital table. To investigate this thesis, we designed a multimodal, multiplayer gaming environment that allows players to interact directly atop a digital table via speech and rich whole hand gestures. We transform two commercial single player computer games, representing a strategy and simulation game genre, to work within this setting.

© All rights reserved Tse et al. and/or their publisher

 

Tse, Edward, Shen, Chia, Greenberg, Saul and Forlines, Clifton (2006): Enabling Interaction with Single User Applications through Speech and Gestures on a Multi-User Tabletop. In: Proceedings of Advanced Visual Interfaces (AVI06) May 23-26, 2006, Venezia, Italy. pp. 336-343. Available online


 

Tse, Edward, Greenberg, Saul and Shen, Chia (2006): GSI demo: multiuser gesture/speech interaction over digital tables by wrapping single user applications. In: Quek, Francis K. H., Yang, Jie, Massaro, Dominic W., Alwan, Abeer A. and Hazen, Timothy J. (eds.) Proceedings of the 8th International Conference on Multimodal Interfaces - ICMI 2006 November 2-4, 2006, Banff, Alberta, Canada. pp. 76-83. Available online

 

Tse, Edward, Shen, Chia, Greenberg, Saul and Forlines, Clifton (2006): Enabling interaction with single user applications through speech and gestures on a multi-user tabletop. In: Celentano, Augusto (ed.) AVI 2006 - Proceedings of the working conference on Advanced visual interfaces May 23-26, 2006, Venezia, Italy. pp. 336-343. Available online

 

Tse, Edward, Greenberg, Saul and Shen, Chia (2006): GSI demo: multiuser gesture/speech interaction over digital tables by wrapping single user applications. In: Proceedings of the 2006 International Conference on Multimodal Interfaces 2006. pp. 76-83. Available online

Most commercial software applications are designed for a single user using a keyboard/mouse over an upright monitor. Our interest is exploiting these systems so they work over a digital table. Mirroring what people do when working over traditional tables, we want to allow multiple people to interact naturally with the tabletop application and with each other via rich speech and hand gestures. In previous papers, we illustrated multi-user gesture and speech interaction on a digital table for geospatial applications -- Google Earth, Warcraft III and The Sims. In this paper, we describe our underlying architecture: GSI Demo. First, GSI Demo creates a run-time wrapper around existing single user applications: it accepts and translates speech and gestures from multiple people into a single stream of keyboard and mouse inputs recognized by the application. Second, it lets people use multimodal demonstration -- instead of programming -- to quickly map their own speech and gestures to these keyboard/mouse inputs. For example, continuous gestures are trained by saying "Computer, when I do [one finger gesture], you do [mouse drag]". Similarly, discrete speech commands can be trained by saying "Computer, when I say [layer bars], you do [keyboard and mouse macro]". The end result is that end users can rapidly transform single user commercial applications into a multi-user, multimodal digital tabletop system.

© All rights reserved Tse et al. and/or their publisher
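
The wrapping idea described above can be sketched in a few lines. The code below is a hypothetical illustration of the general approach, not the GSI Demo API: mappings from gestures and speech phrases to keyboard/mouse actions are "trained" at run time, and input from any user is translated into a single event stream that a legacy single-user application could consume.

# Hypothetical sketch in the spirit of the wrapper described above. The mapping
# tables, training functions and inject() stub are assumptions for illustration;
# none of these names come from GSI Demo itself.

gesture_map = {}   # e.g. "one finger" -> "mouse drag"
speech_map = {}    # e.g. "layer bars" -> recorded keyboard/mouse macro

def train_gesture(gesture_name, mouse_action):
    """'Computer, when I do [one finger gesture], you do [mouse drag]'."""
    gesture_map[gesture_name] = mouse_action

def train_speech(phrase, macro_steps):
    """'Computer, when I say [layer bars], you do [keyboard and mouse macro]'."""
    speech_map[phrase] = macro_steps

def inject(event):
    # Placeholder: a real wrapper would synthesise OS-level keyboard/mouse input
    # so the wrapped single-user application sees ordinary events.
    print("inject:", event)

def on_user_input(user_id, kind, value, pos=None):
    """Translate speech/gesture from any user into one keyboard/mouse stream."""
    if kind == "gesture" and value in gesture_map:
        inject({"user": user_id, "action": gesture_map[value], "pos": pos})
    elif kind == "speech" and value in speech_map:
        for step in speech_map[value]:
            inject({"user": user_id, "action": step})

train_gesture("one finger", "mouse drag")
train_speech("layer bars", ["key:L", "click:(40, 60)"])
on_user_input("alice", "gesture", "one finger", pos=(220, 310))
on_user_input("bob", "speech", "layer bars")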

 

Shen, Chia, Ryall, Kathy, Forlines, Clifton, Esenther, Alan, Vernier, Frederic, Everitt, Katherine, Wu, Mike, Wigdor, Daniel, Morris, Meredith Ringel, Hancock, Mark S. and Tse, Edward (2006): Informing the Design of Direct-Touch Tabletops. In IEEE Computer Graphics and Applications, 26 (5), pp. 36-46. Available online

2004
 

Tse, Edward and Greenberg, Saul (2004): Rapidly Prototyping Single Display Groupware through the SDGToolkit. In: Proceedings of the Fifth Australasian User Interface Conference, Volume 28 in the CRPIT Conferences in Research and Practice in Information Technology Series January, 2004, Dunedin, NZ. pp. 101-110. Available online

Researchers in Single Display Groupware (SDG) explore how multiple users share a single display such as a computer monitor, a large wall display, or an electronic tabletop display. Yet today's personal computers are designed with the assumption that one person interacts with the display at a time. Thus researchers and programmers face considerable hurdles if they wish to develop SDG. Our solution is the SDGToolkit, a toolkit for rapidly prototyping SDG. SDGToolkit automatically captures and manages multiple mice and keyboards, and presents them to the programmer as uniquely identified input events relative to either the whole screen or a particular window. It transparently provides multiple cursors, one for each mouse. To handle orientation issues for tabletop displays (i.e., people seated across from one another), programmers can specify a participant's seating angle, which automatically rotates the cursor and translates input coordinates so the mouse behaves correctly. Finally, SDGToolkit provides an SDG-aware widget class layer that significantly eases how programmers create novel graphical components that recognize and respond to multiple inputs.

© All rights reserved Tse and Greenberg and/or ACM Press
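
One detail the abstract describes, rotating each person's mouse input by their seating angle so the cursor "behaves correctly" from where they sit, lends itself to a small sketch. The code below is a hypothetical Python illustration; the event shapes, angles and function names are assumptions and do not reflect the actual SDGToolkit API.

import math

# Hypothetical sketch of per-user, orientation-corrected mouse input in the
# spirit of the toolkit described above; names and event shapes are assumed.

seating_angle = {"south": 0.0, "east": 90.0, "north": 180.0, "west": 270.0}
cursors = {user: [400.0, 300.0] for user in seating_angle}   # one cursor per mouse

def rotate(dx, dy, degrees):
    """Rotate a mouse-movement vector by the user's seating angle."""
    rad = math.radians(degrees)
    return (dx * math.cos(rad) - dy * math.sin(rad),
            dx * math.sin(rad) + dy * math.cos(rad))

def on_mouse_move(user, dx, dy):
    """Deliver a uniquely identified, orientation-corrected input event."""
    rdx, rdy = rotate(dx, dy, seating_angle[user])
    cursors[user][0] += rdx
    cursors[user][1] += rdy
    return {"user": user, "pos": (round(cursors[user][0]), round(cursors[user][1]))}

# Two people on opposite sides both push their mice "away from themselves"
# (screen y grows downward, so -10 means "up" for the south-seated user):
print(on_mouse_move("south", 0, -10))   # cursor moves up on the shared display
print(on_mouse_move("north", 0, -10))   # cursor moves down, correct for that seat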

 

Tse, Edward, Histon, Jonathan, Scott, Stacey and Greenberg, Saul (2004): Avoiding Interference: How People Use Spatial Separation and Partitioning in SDG Workspaces. In: Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work November 6-10, 2004, Chicago, Illinois, USA. pp. 252-261. Available online

Single Display Groupware (SDG) lets multiple co-located people, each with their own input device, interact simultaneously over a single communal display. While SDG is beneficial, there is risk of interference: when two people are interacting in close proximity, one person can raise an interface component (such as a menu, dialog box, or movable palette) over another person's working area, thus obscuring and hindering the other's actions. Consequently, researchers have developed special purpose interaction components to mitigate interference techniques. Yet is interference common in practice? If not, then SDG versions of conventional interface components could prove more suitable. We hypothesize that collaborators spatially separate their activities to the extent that they partition their workspace into distinct areas when working on particular tasks, thus reducing the potential for interference. We tested this hypothesis by observing co-located people performing a set of collaborative drawing exercises in an SDG workspace, where we paid particular attention to the locations of their simultaneous interactions. We saw that spatial separation and partitioning occurred consistently and naturally across all participants, rarely requiring any verbal negotiation. Particular divisions of the space varied, influenced by seating position and task semantics. These results suggest that people naturally avoid interfering with one another by spatially separating their actions. This has design implications for SDG interaction techniques, especially in how conventional widgets can be adapted to an SDG setting.

© All rights reserved Tse et al. and/or ACM Press

 

Tse, Edward and Greenberg, Saul (2004): Rapidly Prototyping Single Display Groupware through the SDGToolkit. In: Cockburn, Andy (ed.) AUIC2004 - User Interfaces 2004 - Fifth Australasian User Interface Conference 18-22 January, 2004, Dunedin, New Zealand. pp. 101-110. Available online

2003
 

Diaz-Marino, Rob, Tse, Edward and Greenberg, Saul (2003): Programming for Multiple Touches and Multiple Users: A Toolkit for the DiamondTouch Hardware. In: Companion Proceedings of ACM UIST 2003 Conference on User Interface Software and Technology 2003, Vancouver, BC, Canada. Available online


 
 
 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/edward_tse.html