Publication statistics

Pub. period: 2006-2011
Pub. count: 15
Number of co-authors: 24



Co-authors

Number of publications with his 3 most frequent co-authors:

James A. Landay: 9
Desney S. Tan: 4
Dan Morris: 4

Productive colleagues

T. Scott Saponas's 3 most productive colleagues, by number of publications:

Gregory D. Abowd: 116
Ravin Balakrishnan: 108
James A. Landay: 91


T. Scott Saponas


Publications by T. Scott Saponas (bibliography)

2011

Saponas, T. Scott, Harrison, Chris and Benko, Hrvoje (2011): PocketTouch: through-fabric capacitive touch input. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 303-308.

PocketTouch is a capacitive sensing prototype that enables eyes-free multitouch input on a handheld device without having to remove the device from the pocket of one's pants, shirt, bag, or purse. PocketTouch enables a rich set of gesture interactions, ranging from simple touch strokes to full alphanumeric text entry. Our prototype device consists of a custom multitouch capacitive sensor mounted on the back of a smartphone. Similar capabilities could be enabled on most existing capacitive touchscreens through low-level access to the capacitive sensor. We demonstrate how touch strokes can be used to initialize the device for interaction and how strokes can be processed to enable text recognition of characters written over the same physical area. We also contribute a comparative study that empirically measures how different fabrics attenuate touch inputs, providing insight for future investigations. Our results suggest that PocketTouch will work reliably with a wide variety of fabrics used in today's garments, and is a viable input method for quick eyes-free operation of devices in pockets.

© All rights reserved Saponas et al. and/or ACM Press
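
The initialization step described in the abstract, a touch stroke that wakes the device for through-fabric input, can be pictured with a minimal sketch. It assumes a hypothetical stream of touch-blob centroids from the capacitive sensor and flags a deliberate, mostly horizontal swipe; the thresholds are invented for illustration and this is not the authors' implementation.

```python
import numpy as np

def is_wake_stroke(centroids, min_travel=80.0, max_drift=20.0):
    """Heuristic wake-stroke detector over a sequence of touch centroids.

    centroids: (N, 2) array of (x, y) blob positions from the sensor.
    Flags a deliberate horizontal swipe: enough horizontal travel,
    little vertical wobble. Thresholds are invented for illustration.
    """
    pts = np.asarray(centroids, dtype=float)
    if len(pts) < 2:
        return False
    travel = pts[:, 0].max() - pts[:, 0].min()  # horizontal extent
    drift = pts[:, 1].max() - pts[:, 1].min()   # vertical wobble
    return travel >= min_travel and drift <= max_drift

# A mostly horizontal swipe wakes the device; a brush of fabric would not.
swipe = [(x, 100.0) for x in range(0, 130, 10)]
print(is_wake_stroke(swipe))  # True
```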

2010

Saponas, T. Scott, Tan, Desney S., Morris, Dan, Turner, Jim and Landay, James A. (2010): Making muscle-computer interfaces more practical. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 851-854.

Recent work in muscle sensing has demonstrated the potential of human-computer interfaces based on finger gestures sensed from electrodes on the upper forearm. While this approach holds much potential, previous work has given little attention to sensing finger gestures in the context of three important real-world requirements: sensing hardware suitable for mobile and off-desktop environments, electrodes that can be put on quickly without adhesives or gel, and gesture recognition techniques that require no new training or calibration after re-donning a muscle-sensing armband. In this note, we describe our approach to overcoming these challenges, and we demonstrate average classification accuracies as high as 86% for pinching with one of three fingers in a two-session, eight-person experiment.

© All rights reserved Saponas et al. and/or their publisher
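
The note's central requirement, classifying gestures after the armband is re-donned without new calibration, can be illustrated with a small sketch. It uses synthetic feature vectors in place of real EMG, and per-session standardization as one simple (assumed, not the authors') way to absorb the baseline shift a re-donned band introduces.

```python
import numpy as np
from sklearn.preprocessing import scale
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_session(n=200, shift=0.0):
    """Synthetic stand-in for per-window EMG features (8-channel RMS).
    `shift` mimics a per-channel baseline change after re-donning."""
    y = rng.integers(0, 3, n)                     # which of 3 fingers pinched
    X = rng.normal(size=(n, 8)) + y[:, None] * 0.8 + shift
    return X, y

X_train, y_train = fake_session()                 # session 1: train once
X_test, y_test = fake_session(shift=0.6)          # session 2: band re-donned

# Standardizing each session's features independently absorbs a constant
# baseline offset from re-donning, so no per-session retraining is needed.
clf = SVC(kernel="rbf").fit(scale(X_train), y_train)
print(f"cross-session accuracy: {clf.score(scale(X_test), y_test):.2f}")
```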

2009

Saponas, T. Scott (2009): Enabling always-available input: through on-body interfaces. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 3117-3120.

Most current input devices require dedicated attention by our hands through physical transducers such as the keys on a keyboard. Similarly, computer output is dominated by visual displays requiring most of our visual attention. While keyboards and monitors are effective I/O devices for dedicated computing activities, when our primary task is not using a computer, our hands may not be free to manipulate an input device. The goal of my dissertation is to explore how on-body interfaces can better support computing in our everyday activities.

© All rights reserved Saponas and/or ACM Press

 

Benko, Hrvoje, Saponas, T. Scott, Morris, Dan and Tan, Desney S. (2009): Enhancing input on and above the interactive surface with muscle sensing. In: Proceedings of the 2009 ACM International Conference on Interactive Tabletops and Surfaces 2009. pp. 93-100.

Current interactive surfaces provide little or no information about which fingers are touching the surface, the amount of pressure exerted, or gestures that occur when not in contact with the surface. These limitations constrain the interaction vocabulary available to interactive surface systems. In our work, we extend the surface interaction space by using muscle sensing to provide complementary information about finger movement and posture. In this paper, we describe a novel system that combines muscle sensing with a multi-touch tabletop, and introduce a series of new interaction techniques enabled by this combination. We present observations from an initial system evaluation and discuss the limitations and challenges of utilizing muscle sensing for tabletop applications.

© All rights reserved Benko et al. and/or their publisher
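
To picture the fusion the paper describes: when the tabletop reports a contact, the current forearm-EMG amplitude can be attached as a pressure proxy the surface alone cannot sense. All class and field names below are hypothetical, a sketch of the idea rather than the paper's system.

```python
from dataclasses import dataclass
import time

@dataclass
class TouchEvent:
    x: float
    y: float
    t: float  # timestamp from the tabletop

@dataclass
class EnrichedTouch:
    x: float
    y: float
    pressure: float  # proxy derived from forearm EMG amplitude

class EmgMonitor:
    """Keeps a running EMG amplitude estimate (hypothetical interface)."""
    def __init__(self):
        self.amplitude = 0.0
    def update(self, sample: float):
        # simple exponential smoothing of the rectified EMG signal
        self.amplitude = 0.9 * self.amplitude + 0.1 * abs(sample)

def fuse(touch: TouchEvent, emg: EmgMonitor) -> EnrichedTouch:
    # Pair the surface contact with the muscle activation at touch time,
    # giving the tabletop a pressure cue it cannot sense on its own.
    return EnrichedTouch(touch.x, touch.y, pressure=emg.amplitude)

emg = EmgMonitor()
for s in [0.1, 0.4, 0.9, 1.2]:  # pretend EMG samples during a press
    emg.update(s)
print(fuse(TouchEvent(120.0, 80.0, time.time()), emg))
```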

 

Saponas, T. Scott, Tan, Desney S., Morris, Dan, Balakrishnan, Ravin, Turner, Jim and Landay, James A. (2009): Enabling always-available input with muscle-computer interfaces. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 167-176.

Previous work has demonstrated the viability of applying offline analysis to interpret forearm electromyography (EMG) and classify finger gestures on a physical surface. We extend those results to bring us closer to using muscle-computer interfaces for always-available input in real-world applications. We leverage existing taxonomies of natural human grips to develop a gesture set covering interaction in free space even when hands are busy with other objects. We present a system that classifies these gestures in real-time and we introduce a bi-manual paradigm that enables use in interactive systems. We report experimental results demonstrating four-finger classification accuracies averaging 79% for pinching, 85% while holding a travel mug, and 88% when carrying a weighted bag. We further show generalizability across different arm postures and explore the tradeoffs of providing real-time visual feedback.

© All rights reserved Saponas et al. and/or their publisher
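
Acting on a real-time gesture stream usually requires temporal smoothing so that one misclassified window does not trigger a spurious action. A sliding-window majority vote, sketched below, is one common approach; it is illustrative and not necessarily the paper's method.

```python
from collections import Counter, deque

class MajorityVoteSmoother:
    """Smooths a stream of per-window gesture labels with a sliding
    majority vote, so single-window errors don't fire actions."""
    def __init__(self, window=5, quorum=3):
        self.history = deque(maxlen=window)
        self.quorum = quorum

    def update(self, label):
        self.history.append(label)
        best, count = Counter(self.history).most_common(1)[0]
        return best if count >= self.quorum else None  # None: undecided

smoother = MajorityVoteSmoother()
stream = ["pinch", "pinch", "rest", "pinch", "pinch", "rest", "rest", "rest"]
print([smoother.update(label) for label in stream])
# [None, None, None, 'pinch', 'pinch', 'pinch', 'rest', 'rest']
```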

 

Saponas, T. Scott, Kelly, Daniel, Parviz, Babak A. and Tan, Desney S. (2009): Optically sensing tongue gestures for computer input. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 177-180.

Many patients with paralyzing injuries or medical conditions retain the use of their cranial nerves, which control the eyes, jaw, and tongue. While researchers have explored eye-tracking and speech technologies for these patients, we believe there is potential for directly sensing explicit tongue movement for controlling computers. In this paper, we describe a novel approach of using infrared optical sensors embedded within a dental retainer to sense tongue gestures. We describe an experiment showing our system effectively discriminating between four simple gestures with over 90% accuracy. In this experiment, users were also able to play the popular game Tetris with their tongues. Finally, we present lessons learned and opportunities for future work.

© All rights reserved Saponas et al. and/or their publisher
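
With four gestures and a handful of infrared sensors, even a nearest-centroid classifier over the raw sensor vector conveys the idea. The four-sensor layout and centroid values below are invented for illustration.

```python
import numpy as np

# Hypothetical mean IR-reflectance vectors (4 sensors in the retainer),
# one centroid per trained tongue gesture.
centroids = {
    "left":  np.array([0.9, 0.2, 0.1, 0.1]),
    "right": np.array([0.1, 0.1, 0.2, 0.9]),
    "up":    np.array([0.2, 0.9, 0.8, 0.2]),
    "hold":  np.array([0.1, 0.1, 0.1, 0.1]),
}

def classify(reading):
    """Assign an IR sensor reading to the nearest gesture centroid."""
    reading = np.asarray(reading, dtype=float)
    return min(centroids, key=lambda g: np.linalg.norm(reading - centroids[g]))

print(classify([0.85, 0.25, 0.15, 0.05]))  # -> "left"
```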

2008
 

Saponas, T. Scott, Tan, Desney S., Morris, Dan and Balakrishnan, Ravin (2008): Demonstrating the feasibility of using forearm electromyography for muscle-computer interfaces. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 515-524.

We explore the feasibility of muscle-computer interfaces (muCIs): an interaction methodology that directly senses and decodes human muscular activity rather than relying on physical device actuation or user actions that are externally visible or audible. As a first step towards realizing the muCI concept, we conducted an experiment to explore the potential of exploiting muscular sensing and processing technologies for muCIs. We present results demonstrating accurate gesture classification with an off-the-shelf electromyography (EMG) device. Specifically, using 10 sensors worn in a narrow band around the upper forearm, we were able to differentiate position and pressure of finger presses, as well as classify tapping and lifting gestures across all five fingers. We conclude with a discussion of the implications of our results for future muCI designs.

© All rights reserved Saponas et al. and/or ACM Press
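
The usual first step in such a pipeline is to slice the multi-channel EMG signal into short overlapping windows and compute an amplitude feature per channel. A root-mean-square sketch follows; the window and hop lengths are assumptions, and the ten channels mirror the band described above.

```python
import numpy as np

def rms_features(emg, window=256, hop=128):
    """Slice a (samples x channels) EMG recording into overlapping
    windows and return one RMS value per channel per window."""
    emg = np.asarray(emg, dtype=float)
    feats = []
    for start in range(0, len(emg) - window + 1, hop):
        seg = emg[start:start + window]
        feats.append(np.sqrt(np.mean(seg ** 2, axis=0)))  # per-channel RMS
    return np.array(feats)  # shape: (n_windows, n_channels)

# Two seconds of fake 10-channel EMG at ~1 kHz.
fake = np.random.default_rng(1).normal(size=(2000, 10))
print(rms_features(fake).shape)  # (14, 10)
```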

 
Harada, Susumu, Lester, Jonathan, Patel, Kayur, Saponas, T. Scott, Fogarty, James, Landay, James A. and Wobbrock, Jacob O. (2008): VoiceLabel: using speech to label mobile sensor data. In: Digalakis, Vassilios, Potamianos, Alexandros, Turk, Matthew, Pieraccini, Roberto and Ivanov, Yuri (eds.) Proceedings of the 10th International Conference on Multimodal Interfaces - ICMI 2008 October 20-22, 2008, Chania, Crete, Greece. pp. 69-76.

Many mobile machine learning applications require collecting and labeling data, and a traditional GUI on a mobile device may not be an appropriate or viable method for this task. This paper presents an alternative approach to mobile labeling of sensor data called VoiceLabel. VoiceLabel consists of two components: (1) a speech-based data collection tool for mobile devices, and (2) a desktop tool for offline segmentation of recorded data and recognition of spoken labels. The desktop tool automatically analyzes the audio stream to find and recognize spoken labels, and then presents a multimodal interface for reviewing and correcting data labels using a combination of the audio stream, the system's analysis of that audio, and the corresponding mobile sensor data. A study with ten participants showed that VoiceLabel is a viable method for labeling mobile sensor data. VoiceLabel also illustrates several key features that inform the design of other data labeling tools.

© All rights reserved Harada et al. and/or their publisher
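
The desktop tool's first task, locating the spoken labels in the recording, can be approximated with energy-based voice activity detection. The frame size and threshold below are invented; a real system would hand each detected segment to a speech recognizer and align it with the sensor log.

```python
import numpy as np

def find_speech_segments(audio, rate=16000, frame=0.03, threshold=0.02):
    """Return (start_sec, end_sec) spans where frame energy exceeds a
    threshold -- a crude stand-in for locating spoken labels."""
    n = int(rate * frame)
    frames = audio[: len(audio) // n * n].reshape(-1, n)
    active = np.sqrt(np.mean(frames ** 2, axis=1)) > threshold
    segments, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = i
        elif not on and start is not None:
            segments.append((start * frame, i * frame))
            start = None
    if start is not None:
        segments.append((start * frame, len(active) * frame))
    return segments

# One second of near-silence with a louder burst in the middle.
rng = np.random.default_rng(2)
audio = rng.normal(0, 0.005, 16000)
audio[6000:9000] += rng.normal(0, 0.1, 3000)
print(find_speech_segments(audio))  # roughly [(0.36, 0.57)]
```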

2007
 

Davis, Richard C., Saponas, T. Scott, Shilman, Michael and Landay, James A. (2007): SketchWizard: Wizard of Oz prototyping of pen-based user interfaces. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 119-128.

SketchWizard allows designers to create Wizard of Oz prototypes of pen-based user interfaces in the early stages of design. In the past, designers have been inhibited from participating in the design of pen-based interfaces because of the inadequacy of paper prototypes and the difficulty of developing functional prototypes. In SketchWizard, designers and end users share a drawing canvas between two computers, allowing the designer to simulate the behavior of recognition or other technologies. Special editing features are provided to help designers respond quickly to end-user input. This paper describes the SketchWizard system and presents two evaluations of our approach. The first is an early feasibility study in which Wizard of Oz was used to prototype a pen-based user interface. The second is a laboratory study in which designers used SketchWizard to simulate existing pen-based interfaces. Both showed that end users gave valuable feedback in spite of delays between end-user actions and wizard updates.

© All rights reserved Davis et al. and/or ACM Press
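
At its core the shared canvas relays stroke events between the end-user's machine and the wizard's. A minimal sketch of one direction of that relay follows, using a made-up newline-delimited JSON message format over a loopback socket pair; it is an illustration, not SketchWizard's protocol.

```python
import json
import socket

def encode_stroke(points, author):
    """Serialize one stroke as a newline-delimited JSON message
    (the message format here is invented)."""
    return (json.dumps({"author": author, "points": points}) + "\n").encode()

def decode_strokes(stream_bytes):
    """Parse newline-delimited stroke messages off the wire."""
    return [json.loads(line) for line in stream_bytes.decode().splitlines()]

# Loopback demo: the end-user side sends a stroke; the wizard side
# receives it and could redraw it or respond with "recognized" output.
user_side, wizard_side = socket.socketpair()
user_side.sendall(encode_stroke([[10, 10], [40, 35], [90, 50]], author="end-user"))
user_side.close()
data = b""
while chunk := wizard_side.recv(4096):
    data += chunk
print(decode_strokes(data))
```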

 
Harada, Susumu, Saponas, T. Scott and Landay, James A. (2007): VoicePen: augmenting pen input with simultaneous non-linguistic vocalization. In: Massaro, Dominic W., Takeda, Kazuya, Roy, Deb and Potamianos, Alexandros (eds.) Proceedings of the 9th International Conference on Multimodal Interfaces - ICMI 2007 November 12-15, 2007, Nagoya, Aichi, Japan. pp. 178-185.

This paper explores using non-linguistic vocalization as an additional modality to augment digital pen input on a tablet computer. We investigated this through a set of novel interaction techniques and a feasibility study. Typically, digital pen users control one or two parameters using stylus position and sometimes pen pressure. However, in many scenarios the user can benefit from the ability to continuously vary additional parameters. Non-linguistic vocalizations, such as vowel sounds, variation of pitch, or control of loudness have the potential to provide fluid continuous input concurrently with pen interaction. We present a set of interaction techniques that leverage the combination of voice and pen input when performing both creative drawing and object manipulation tasks. Our feasibility evaluation suggests that with little training people can use non-linguistic vocalization to productively augment digital pen interaction.

© All rights reserved Harada et al. and/or their publisher
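
The vocal channel comes down to estimating a couple of continuous parameters per audio frame, such as loudness and pitch, and mapping them onto a pen property. The sketch below uses RMS loudness and a basic autocorrelation pitch estimate, with invented mapping ranges; it illustrates the idea rather than VoicePen itself.

```python
import numpy as np

def frame_params(frame, rate=16000, fmin=80, fmax=400):
    """Estimate (loudness, pitch_hz) for one audio frame.
    Loudness is RMS; pitch is a basic autocorrelation peak pick."""
    loudness = float(np.sqrt(np.mean(frame ** 2)))
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(rate / fmax), int(rate / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return loudness, rate / lag

def brush_width(loudness, lo=0.01, hi=0.2, wmin=1.0, wmax=30.0):
    """Map loudness onto brush width -- louder hum, thicker stroke."""
    t = np.clip((loudness - lo) / (hi - lo), 0.0, 1.0)
    return wmin + t * (wmax - wmin)

# A 200 Hz hum at moderate volume drives the pen's stroke width.
t = np.arange(0, 0.05, 1 / 16000)
frame = 0.1 * np.sin(2 * np.pi * 200 * t)
loud, pitch = frame_params(frame)
print(f"pitch ~ {pitch:.0f} Hz, width ~ {brush_width(loud):.1f}")
```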

2006
 

Saponas, T. Scott, Prabaker, Madhu K., Abowd, Gregory D. and Landay, James A. (2006): The impact of pre-patterns on the design of digital home applications. In: Proceedings of DIS06: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2006. pp. 189-198.

Recent research suggests design pre-patterns, structured collections of evidence-based research and design knowledge, provide a useful resource for design activities in emerging application domains. This paper extends previous research by exploring the impact of pre-patterns and tools to support pre-pattern exploration for the domain of ubiquitous computing in the home. We conducted an empirical study of 44 designers engaged in a two-hour concentrated brainstorming and design task for the home of the future. Our results show pre-patterns are an easily adopted resource for designers that can impact even the earliest of design activities. We also provide insights for future development of pre-patterns based on designer feedback.

© All rights reserved Saponas et al. and/or ACM Press

 

Dow, Steven, Saponas, T. Scott, Li, Yang and Landay, James A. (2006): External representations in ubiquitous computing design and the implications for design tools. In: Proceedings of DIS06: Designing Interactive Systems: Processes, Practices, Methods, & Techniques 2006. pp. 241-250.

One challenge for ubiquitous computing is providing appropriate tools for professional designers, thus leading to stronger user-valued applications. Unlike many previous tool-builders' attempts to support a specific technology, we take a designer-centered stance, asking the question: how do professional designers externalize ideas for off-the-desktop computing and how do these inform next generation design tools? We report on interviews with designers from various domains, including experience, interaction, industrial, and space designers. The study broadly reveals perceived challenges of moving into a non-traditional design medium, emphasizes the practice of storytelling for relating the context of interaction, and through two case studies, traces the use of various external representations during the design progression of ubicomp applications. Using paper-prototyped "walkthroughs" centered on two common design representations (storyboards and physical simulations), we formed a deeper understanding of issues influencing tool development. We offer guidelines for builders of future ubicomp tools, especially early-stage conceptual tools for professional designers to prototype applications across multiple sensors, displays, and physical environments.

© All rights reserved Dow et al. and/or ACM Press

 

Modahl, Martin, Agarwalla, Bikash, Saponas, T. Scott, Abowd, Gregory D. and Ramachandran, Umakishore (2006): UbiqStack: a taxonomy for a ubiquitous computing software stack. In Personal and Ubiquitous Computing, 10 (1) pp. 21-27.

 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/t__scott_saponas.html
