Publication statistics

Pub. period: 1992-2011
Pub. count: 49
Number of co-authors: 87



Co-authors

Number of publications with his three most frequent co-authors:

Kent Lyons: 13
Helene Brashear: 7
Daniel Ashbrook: 5


Productive colleagues

Thad Starner's three most productive colleagues, by number of publications:

Gregory D. Abowd: 116
Mark Billinghurst: 92
Julie A. Jacko: 84

Thad Starner

Has also published under the name of:
"Thad E. Starner"

Personal Homepage:
cc.gatech.edu/fac/Thad.Starner/

 

Publications by Thad Starner (bibliography)

2011

Weaver, Kimberly A. and Starner, Thad (2011): We need to communicate!: helping hearing parents of deaf children learn American sign language. In: Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2011. pp. 91-98. Available online

Language immersion from birth is crucial to a child's language development. However, language immersion can be particularly challenging for hearing parents of deaf children to provide as they may have to overcome many difficulties while learning American Sign Language (ASL). We are in the process of creating a mobile application to help hearing parents learn ASL. To this end, we have interviewed members of our target population to gain understanding of their motivations and needs when learning sign language. We found that the most common motivation for parents learning ASL is better communication with their children. Parents are most interested in acquiring more fluent sign language skills through learning to read stories to their children.

© All rights reserved Weaver and Starner and/or ACM Press


Zafrulla, Zahoor, Brashear, Helene, Starner, Thad, Hamilton, Harley and Presti, Peter (2011): American sign language recognition with the Kinect. In: Proceedings of the 2011 International Conference on Multimodal Interfaces 2011. pp. 279-286. Available online

We investigate the potential of the Kinect depth-mapping camera for sign language recognition and verification for educational games for deaf children. We compare a prototype Kinect-based system to our current CopyCat system which uses colored gloves and embedded accelerometers to track children's hand movements. If successful, a Kinect-based approach could improve interactivity, user comfort, system robustness, system sustainability, cost, and ease of deployment. We collected a total of 1000 American Sign Language (ASL) phrases across both systems. On adult data, the Kinect system resulted in 51.5% and 76.12% sentence verification rates when the users were seated and standing respectively. These rates are comparable to the 74.82% verification rate when using the current (seated) CopyCat system. While the Kinect computer vision system requires more tuning for seated use, the results suggest that the Kinect may be a viable option for sign verification.

© All rights reserved Zafrulla et al. and/or ACM Press


Baumann, Hannes, Starner, Thad, Iben, Hendrik, Lewandowski, Anna and Zschaler, Patrick (2011): Evaluation of graphical user-interfaces for order picking using head-mounted displays. In: Proceedings of the 2011 International Conference on Multimodal Interfaces 2011. pp. 377-384. Available online

Order picking is the process of collecting items from an assortment in inventory. It represents one of the main activities performed in warehouses and accounts for about 60% of the total operational costs of a warehouse. In previous work, we demonstrated the advantages of a head-mounted display (HMD) based picking chart over a traditional text-based pick list, a paper-based graphical pick chart, and a mobile pick-by-voice system. Here we perform two user studies that suggest that adding color cues and context sensing via a laser rangefinder improves picking accuracy with the HMD system. We also examine other variants of the pick chart, such as adding symbols, textual identifiers, images, and descriptions and their effect on accuracy, speed, and subjective usability.

© All rights reserved Baumann et al. and/or ACM Press

2010

Lee, Seungyon Claire and Starner, Thad (2010): BuzzWear: alert perception in wearable tactile displays on the wrist. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 433-442. Available online

We present two experiments to evaluate wrist-worn wearable tactile displays (WTDs) that provide easy to perceive alerts for on-the-go users. The first experiment (2304 trials, 12 participants) focuses on the perception sensitivity of tactile patterns and reveals that people discriminate our 24 tactile patterns with up to 99% accuracy after 40 minutes of training. Among the four parameters (intensity, starting point, temporal pattern, and direction) that vary in the 24 patterns, intensity is the most difficult parameter to distinguish and temporal pattern is the easiest. The second experiment (9900 trials, 15 participants) focuses on dual task performance, exploring users' abilities to perceive three incoming alerts from two mobile devices (WTD and mobile phone) with and without visual distraction. The second experiment reveals that, when visually distracted, users' reactions to incoming alerts become slower for the mobile phone but not for the WTD.

© All rights reserved Lee and Starner and/or their publisher
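For a sense of the design space, one factorization of the four parameters that yields 24 patterns (2 intensities x 2 temporal patterns x 3 starting points x 2 directions) is sketched below in Python. The abstract does not give the actual levels used in BuzzWear, so the values here are assumptions chosen only to match the reported count.

from itertools import product

# Hypothetical parameter levels; chosen only so that 2 * 2 * 3 * 2 = 24,
# matching the pattern count reported in the abstract above.
intensities = ["low", "high"]
temporal_patterns = ["continuous", "pulsed"]
starting_points = ["tactor_1", "tactor_2", "tactor_3"]
directions = ["clockwise", "counterclockwise"]

patterns = list(product(intensities, temporal_patterns,
                        starting_points, directions))
assert len(patterns) == 24  # one tactile alert per combination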


Huang, Kevin, Starner, Thad, Do, Ellen, Weinberg, Gil, Kohlsdorf, Daniel, Ahlrichs, Claas and Leibrandt, Ruediger (2010): Mobile music touch: mobile tactile stimulation for passive learning. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 791-800. Available online

Mobile Music Touch (MMT) helps teach users to play piano melodies while they perform other tasks. MMT is a lightweight, wireless haptic music instruction system consisting of fingerless gloves and a mobile Bluetooth enabled computing device, such as a mobile phone. Passages to be learned are loaded into the mobile phone and are played repeatedly while the user performs other tasks. As each note of the music plays, vibrators on each finger in the gloves activate, indicating which finger is used to play each note. We present two studies on the efficacy of MMT. The first measures 16 subjects' ability to play a passage after using MMT for 30 minutes while performing a reading comprehension test. The MMT system was significantly more effective than a control condition where the passage was played repeatedly but the subjects' fingers were not vibrated. The second study compares the amount of time required for 10 subjects to replay short, randomly generated passages using passive training versus active training. Participants with no piano experience could repeat the passages after passive training while subjects with piano experience often could not.

© All rights reserved Huang et al. and/or their publisher


Weaver, Kimberly A., Baumann, Hannes, Starner, Thad, Iben, Hendrik and Lawo, Michael (2010): An empirical task analysis of warehouse order picking using head-mounted displays. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 1695-1704. Available online

Evaluations of task guidance systems often focus on evaluations of new technologies rather than comparing the nuances of interaction across the various systems. One common domain for task guidance systems is warehouse order picking. We present a method involving an easily reproducible ecologically motivated order picking environment for quantitative user studies designed to reveal differences in interactions. Using this environment, we perform a 12 participant within-subjects experiment demonstrating the advantages of a head-mounted display based picking chart over a traditional text-based pick list, a paper-based graphical pick chart, and a mobile pick-by-voice system. The test environment proved sufficiently sensitive, showing statistically significant results along several metrics with the head-mounted display system performing the best. We also provide a detailed analysis of the strategies adopted by our participants.

© All rights reserved Weaver et al. and/or their publisher


Ashbrook, Daniel and Starner, Thad (2010): MAGIC: a motion gesture design tool. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2159-2168. Available online

Devices capable of gestural interaction through motion sensing are increasingly becoming available to consumers; however, motion gesture control has yet to appear outside of game consoles. Interaction designers are frequently not expert in pattern recognition, which may be one reason for this lack of availability. Another issue is how to effectively test gestures to ensure that they are not unintentionally activated by a user's normal movements during everyday usage. We present MAGIC, a gesture design tool that addresses both of these issues, and detail the results of an evaluation.

© All rights reserved Ashbrook and Starner and/or their publisher


Deen, James, Lee, Seungyon Claire, Li, BoHao and Starner, Thad (2010): Improving the form factor of a wrist-based mobile gesture interface. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 3679-3684. Available online

We present the form factor design iteration process of the Gesture Watch, a wearable gesture interface that utilizes non-contact hand gestures to control mobile devices while non-visual feedback is provided from its tactile display. Based on limitations discovered from a previous prototype, we identified three design challenges: wearability, mobility, and tactile perception. In addressing these challenges, we focus on three main parts affecting the form factor: the sensor housing, the strap, and the motor housing.

© All rights reserved Deen et al. and/or their publisher


Weaver, Kimberly A., Starner, Thad and Hamilton, Harley (2010): An evaluation of video intelligibility for novice American sign language learners on a mobile device. In: Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies 2010. pp. 107-114. Available online

Language immersion from birth is crucial to a child's language development. However, language immersion can be particularly challenging for hearing parents of deaf children to provide as they may have to overcome many difficulties while learning sign language. We intend to create a mobile device-based system to help hearing parents learn sign language. The first step is to understand what level of detail (i.e., resolution) is necessary for novice signers to learn from video of signs. In this paper we present the results of a study designed to evaluate the ability of novices learning sign language to ascertain the details of a particular sign based on video presented on a mobile device. Four conditions were presented. Three conditions involve manipulation of video resolution (low, medium, and high). The fourth condition employs insets showing the sign handshapes along with the high resolution video. Subjects were tested on their ability to emulate the given sign over 80 signs commonly used between parents and their young children. Although participants noticed a reduction in quality in the low resolution condition, there was no significant effect of condition on ability to generate the sign. Sign difficulty had a significant correlation with ability to correctly reproduce the sign. Although the inset handshape condition did not improve the participants' ability to emulate the signs correctly, participant feedback provided insight into situations where insets would be more useful, as well as further suggestions to improve video intelligibility. Participants were able to reproduce even the most complex signs tested with relatively high accuracy.

© All rights reserved Weaver et al. and/or their publisher


Patel, Nirmal J., Clawson, James, Kang, Namwook, Choi, SeungEok and Starner, Thad (2010): A study of cultural effects on mobile-collocated group photo sharing. In: GROUP10 International Conference on Supporting Group Work 2010. pp. 121-130. Available online

International and intercultural collaborations provide a unique opportunity to explore cultural differences in the usage and appropriation of a technology. Mobile photo capture and sharing has been growing in popularity in the Western world but nowhere has the practice been as eagerly adopted as in South Korea. In this paper we present an evaluation of a mobile-collocated photo sharing technology probe designed to determine the ways in which photo capture and sharing can effect and enhance face-to-face interaction for pre-existing social groups. We explore the interaction of culture and automatic, real-time photo capture and sharing on groups of friends engaging in a walking tour. We assemble a multicultural research team to better understand our observations and isolate cultural and technological artifacts. We relate our findings to prior work in the area to show that culture can have as much, if not more, impact on group usage of a technology than the technical capabilities of a system.

© All rights reserved Patel et al. and/or their publisher

2009

Lee, Seungyon Claire and Starner, Thad (2009): Mobile gesture interaction using wearable tactile displays. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 3437-3442. Available online

We present an interaction method for mobile gesture interaction using wearable tactile displays. We are attempting to show that wrist-worn tactile displays provide adequate feedback to enable reversible and error-resistant gesture-based interaction. In support of this effort, we present pilot study results demonstrating users' sensitivity in perceiving vibratory directional patterns on the wrist.

© All rights reserved Lee and Starner and/or ACM Press


Patel, Nirmal, Clawson, James and Starner, Thad (2009): A model of two-thumb chording on a phone keypad. In: Proceedings of 11th Conference on Human-computer interaction with mobile devices and services 2009. p. 8. Available online

When designing a text entry system for mobile phone keypads, a designer needs to overcome the ambiguity that arises from mapping the 26 letters of the Roman alphabet to only 12 keys (0-9, *, #). In this paper, we present a novel two-thumb chording system for text entry on a standard 12-key mobile phone keypad and introduce a performance model based on Fitts' Law for an expert user. The model provides a behavioral description of the user and predicts a text entry rate of 55.02 wpm.

© All rights reserved Patel et al. and/or their publisher
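The performance model above is Fitts' Law based. A minimal sketch of how such a prediction is computed follows; the coefficients and key geometry below are illustrative placeholders, not the parameters fitted in the paper, and the chording mechanics are omitted.

import math

# Hypothetical Fitts' law coefficients (seconds); the paper fits its own.
A, B = 0.083, 0.127

def movement_time(distance, width):
    # Fitts' law: MT = a + b * log2(D / W + 1)
    return A + B * math.log2(distance / width + 1)

# Assume an average thumb travel of two key-widths between presses.
avg_mt = movement_time(distance=2.0, width=1.0)

# Convert seconds-per-character to words per minute (5 chars per word).
wpm = 60.0 / (avg_mt * 5)
print(f"predicted expert rate: {wpm:.1f} wpm")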

2008

Ashbrook, Daniel L., Clawson, James R., Lyons, Kent, Starner, Thad and Patel, Nirmal (2008): Quickdraw: the impact of mobility and on-body placement on device access time. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 219-222. Available online

We investigate the effect of placement and user mobility on the time required to access an on-body interface. In our study, a wrist-mounted system was significantly faster to access than a device stored in the pocket or mounted on the hip. In the latter two conditions, 78% of the time it took to access the device was spent retrieving the device from its holder. As mobile devices are beginning to include peripherals (for example, Bluetooth headsets and watches connected to a mobile phone stored in the pocket), these results may help guide interface designers with respect to distributing functions across the body between peripherals.

© All rights reserved Ashbrook et al. and/or ACM Press


Clawson, James, Lyons, Kent, Rudnick, Alex, Iannucci, Robert A. and Starner, Thad (2008): Automatic whiteout++: correcting mini-QWERTY typing errors using keypress timing. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 573-582. Available online

By analyzing features of users' typing, Automatic Whiteout++ detects and corrects up to 32.37% of the errors made by typists while using a mini-QWERTY (RIM Blackberry style) keyboard. The system targets "off-by-one" errors where the user accidentally presses a key adjacent to the one intended. Using a database of typing from longitudinal tests on two different keyboards in a variety of contexts, we show that the system generalizes well across users, model of keyboard, user expertise, and keyboard visibility conditions. Since a goal of Automatic Whiteout++ is to embed it in the firmware of mini-QWERTY keyboards, it does not rely on a dictionary. This feature enables the system to correct errors mid-word instead of applying a correction after the word has been typed. Though we do not use a dictionary, we do examine the effect of varying levels of language context in the system's ability to detect and correct erroneous keypresses.

© All rights reserved Clawson et al. and/or ACM Press
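A toy illustration of the "off-by-one" correction idea follows: a keypress is replaced when a physically adjacent key is far more likely given the previous letter. Automatic Whiteout++ itself learns from keypress-timing features and uses no dictionary; the adjacency map and bigram scores below are invented stand-ins, not the paper's classifier.

# Partial mini-QWERTY neighbor map and bigram statistics, both illustrative.
ADJACENT = {"g": "fhvt", "h": "gjbn", "t": "rgy"}
BIGRAM_SCORE = {("t", "h"): 0.9, ("t", "g"): 0.1}

def correct_off_by_one(prev_char, typed_char):
    """Return a neighboring key if it is much more plausible than the typed one."""
    best, best_score = typed_char, BIGRAM_SCORE.get((prev_char, typed_char), 0.0)
    for neighbor in ADJACENT.get(typed_char, ""):
        score = BIGRAM_SCORE.get((prev_char, neighbor), 0.0)
        if score > max(best_score * 5, 0.5):  # demand strong evidence
            best, best_score = neighbor, score
    return best

print(correct_off_by_one("t", "g"))  # -> "h": likely slip while typing "th"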


Zafrulla, Zahoor, Etherton, John and Starner, Thad (2008): TTY phone: direct, equal emergency access for the deaf. In: Tenth Annual ACM SIGACCESS Conference on Assistive Technologies 2008. pp. 277-278. Available online

Seeking to enable direct and equal access for the Deaf to emergency call centers, we analyze the current state of the emergency phone system in the United States and elsewhere in the world. Leveraging teletypewriter (TTY) technology mandated by the Americans with Disabilities Act of 1990 to be installed in all emergency call centers in the United States, we developed software that emulates a TTY on a smart phone. We present an Instant Messaging style interface for mobile phones that uses the existing emergency infrastructure and allows Deaf users to communicate directly with emergency operators.

© All rights reserved Zafrulla et al. and/or ACM Press


Ashbrook, Daniel, Lyons, Kent and Starner, Thad (2008): An investigation into round touchscreen wristwatch interaction. In: Hofte, G. Henri ter, Mulder, Ingrid and Ruyter, Boris E. R. de (eds.) Proceedings of the 10th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2008 September 2-5, 2008, Amsterdam, the Netherlands. pp. 311-314. Available online

2007

Clarkson, Edward, Lyons, Kent, Clawson, James and Starner, Thad (2007): Revisiting and validating a model of two-thumb text entry. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 163-166. Available online

MacKenzie and Soukoreff have previously introduced a Fitts' Law-based performance model of expert two-thumb text entry on mini-QWERTY keyboards [4]. In this work we validate the original model using results from a longitudinal study of mini-QWERTY keyboards, and update the model to account for observed inter-key time data.

© All rights reserved Clarkson et al. and/or ACM Press


Kientz, Julie A., Hayes, Gillian R., Westeyn, Tracy L., Starner, Thad and Abowd, Gregory D. (2007): Pervasive Computing and Autism: Assisting Caregivers of Children with Special Needs. In IEEE Pervasive Computing, 6 (1) pp. 28-35. Available online


Starner, Thad (2007): A Wearable User's Dashboard and Turn Signals. In IEEE Pervasive Computing, 6 (2) pp. 14-15. Available online


Lyons, Kent, Brashear, Helene, Westeyn, Tracy L., Kim, Jung Soo and Starner, Thad (2007): GART: The Gesture and Activity Recognition Toolkit. In: Jacko, Julie A. (ed.) HCI International 2007 - 12th International Conference - Part III 2007. pp. 718-727. Available online


Henderson-Summet, Valerie, Grinter, Rebecca E., Carroll, Jennie and Starner, Thad (2007): Electronic Communication: Themes from a Case Study of the Deaf Community. In: Baranauskas, Maria Cecília Calani, Palanque, Philippe A., Abascal, Julio and Barbosa, Simone Diniz Junqueira (eds.) DEGAS 2007 - Proceedings of the 1st International Workshop on Design and Evaluation of e-Government Applications and Services September 11th, 2007, Rio de Janeiro, Brazil. pp. 347-360. Available online

2006

Brashear, Helene, Henderson, Valerie, Park, Kwang-Hyun, Hamilton, Harley, Lee, Seungyon and Starner, Thad (2006): American sign language recognition in game development for deaf children. In: Eighth Annual ACM Conference on Assistive Technologies 2006. pp. 79-86. Available online

CopyCat is an American Sign Language (ASL) game, which uses gesture recognition technology to help young deaf children practice ASL skills. We describe a brief history of the game, an overview of recent user studies, and the results of recent work on the problem of continuous, user-independent sign language recognition in classroom settings. Our database of signing samples was collected from user studies of deaf children playing a Wizard of Oz version of the game at the Atlanta Area School for the Deaf (AASD). Our data set is characterized by disfluencies inherent in continuous signing, varied user characteristics including clothing and skin tones, and illumination changes in the classroom. The dataset consisted of 541 phrase samples and 1,959 individual sign samples of five children signing game phrases from a 22-word vocabulary. Our recognition approach uses color histogram adaptation for robust hand segmentation and tracking. The children wear small colored gloves with wireless accelerometers mounted on the back of their wrists. The hand shape information is combined with accelerometer data and used to train hidden Markov models for recognition. We evaluated our approach by using leave-one-out validation; this technique iterates through each child, training on data from four children and testing on the remaining child's data. We achieved average word accuracies per child ranging from 91.75% to 73.73% for the user-independent models.

© All rights reserved Brashear et al. and/or ACM Press
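The leave-one-out protocol described above generalizes to any per-child model. A minimal sketch follows, with train_model and word_accuracy as placeholders for the HMM training and scoring steps (the data layout is an assumption):

def leave_one_child_out(samples_by_child, train_model, word_accuracy):
    """Train on all children but one, test on the held-out child, rotate."""
    accuracies = {}
    for held_out, test_samples in samples_by_child.items():
        training = [s for child, samples in samples_by_child.items()
                    if child != held_out for s in samples]
        model = train_model(training)            # e.g., fit HMMs per sign
        accuracies[held_out] = word_accuracy(model, test_samples)
    return accuracies                            # per-child accuracy, as reported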


Lyons, Kent, Starner, Thad and Gane, Brian (2006): Experimental Evaluations of the Twiddler One-Handed Chording Mobile Keyboard. In Human-Computer Interaction, 21 (4) pp. 343-392. Available online

The HandyKey Twiddler is a one-handed chording mobile keyboard that employs a 3 x 4 button design, similar to that of a standard mobile telephone. We present a longitudinal study of novice users' learning rates on the Twiddler. Ten participants typed for 20 sessions using 2 different text entry methods. Each session was composed of 20 min of typing with multitap and 20 min of one-handed chording on the Twiddler. We found that users initially had a faster average typing rate with multitap; however, after 4 sessions the difference became negligible, and by the 8th session participants typed faster with chording on the Twiddler. Five participants continued our study and achieved an average rate of 47 words per minute (wpm) after approximately 25 hr of practice in varying conditions. One participant achieved an average rate of 67 wpm, equivalent to the typing rate of the 2nd author, who has been a Twiddler user for 10 years. We analyze the effects of learning on various aspects of chording, provide evidence that lack of visual feedback does not hinder expert typing speed, and examine the potential use of multicharacter chords (MCCs) to increase text entry speed. Finally, we explore improving novice users' experience with the Twiddler through the use of a chording tutorial.

© All rights reserved Lyons et al. and/or Taylor and Francis


Vadas, Kristin, Patel, Nirmal J., Lyons, Kent, Starner, Thad and Jacko, Julie A. (2006): Reading on-the-go: a comparison of audio and hand-held displays. In: Nieminen, Marko and Roykkee, Mika (eds.) Proceedings of the 8th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2006 September 12-15, 2006, Helsinki, Finland. pp. 219-226. Available online

In this paper we present a 20-participant controlled experiment to evaluate and compare a head-down visual display and a synthesized speech audio display for comprehending text while mobile. Participants completed reading comprehension trials while walking a path and sitting. We examine overall performance and perceived workload for four conditions: audio-walking, audio-sitting, visual-walking, and visual-sitting. Results suggest audio is an acceptable modality for mobile comprehension of text. Participants' comprehension scores for the audio-walking condition were comparable to the scores for the visual-walking condition. More importantly, participants saw improvements in their ability to navigate the environment when using the audio display.

© All rights reserved Vadas et al. and/or ACM Press



Schindler, Grant, Metzger, Christian and Starner, Thad (2006): A Wearable Interface for Topological Mapping and Localization in Indoor Environments. In: Hazas, Mike, Krumm, John and Strang, Thomas (eds.) Location- and Context-Awareness - Second International Workshop - LoCA 2006 May 10-11, 2006, Dublin, Ireland. pp. 64-73. Available online

2005

Henderson, Valerie, Lee, Seungyon, Brashear, Helene, Hamilton, Harley, Starner, Thad and Hamilton, Steven (2005): Development of an American Sign Language game for deaf children. In: Proceedings of ACM IDC05: Interaction Design and Children 2005. pp. 70-79. Available online

We present a design for an interactive American Sign Language game geared for language development for deaf children. In addition to work on game design, we show how Wizard of Oz techniques can be used to facilitate our work on ASL recognition. We report on two Wizard of Oz studies which demonstrate our technique and maximize our iterative design process. We also detail specific implications to the design raised from working with deaf children and possible solutions.

© All rights reserved Henderson et al. and/or ACM Press


Clarkson, Edward, Clawson, James, Lyons, Kent and Starner, Thad (2005): An empirical study of typing rates on mini-QWERTY keyboards. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1288-1291. Available online

We present a longitudinal study of mini-QWERTY keyboard use, examining the learning rates of novice mini-QWERTY users. The study consists of 20 twenty-minute typing sessions using two different-sized keyboard models. Subjects average over 31 words per minute (WPM) for the first session and increase to an average of 60 WPM by the twentieth. Individual subjects also exceed the upper bound of 60.74 WPM suggested by MacKenzie and Soukoreff's model of two-thumb text entry [5]. We discuss our results in the context of this model.

© All rights reserved Clarkson et al. and/or ACM Press
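The WPM figures above presumably follow the text-entry convention of five characters (including spaces) per word; that this page's numbers use exactly this formula is an assumption. A one-line calculator under that convention:

def words_per_minute(chars_transcribed, seconds):
    # Standard text-entry convention: one "word" = 5 characters.
    return (chars_transcribed / 5) / (seconds / 60)

print(words_per_minute(1200, 240))  # 60.0 wpm, the average reached by session 20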


Lee, Seungyon, Henderson, Valerie, Hamilton, Harley, Starner, Thad, Brashear, Helene and Hamilton, Steven (2005): A gesture-based American Sign Language game for deaf children. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1589-1592. Available online

We present a system designed to facilitate language development in deaf children. The children interact with a computer game using American Sign Language (ASL). The system consists of three parts: an ASL (gesture) recognition engine; an interactive, game-based interface; and an evaluation system. Using interactive, user-centered design and the results of two Wizard-of-Oz studies at Atlanta Area School for the Deaf, we present some unique insights into the spatial organization of interfaces for deaf children.

© All rights reserved Lee et al. and/or ACM Press


Lyons, Kent, Skeels, Christopher and Starner, Thad (2005): Providing support for mobile calendaring conversations: a wizard of oz evaluation of dual-purpose speech. In: Tscheligi, Manfred, Bernhaupt, Regina and Mihalic, Kristijan (eds.) Proceedings of the 7th Conference on Human-Computer Interaction with Mobile Devices and Services - Mobile HCI 2005 September 19-22, 2005, Salzburg, Austria. pp. 243-246. Available online

We present a Wizard of Oz evaluation of dual-purpose speech, a technique designed to provide support during a face-to-face conversation by leveraging a user's conversational speech for input. With a dual-purpose speech interaction, the user's speech is meaningful in the context of a human-to-human conversation while providing useful input to a computer. For our experiment, we evaluate the ability to schedule appointments with our calendaring application, the Calendar Navigator Agent. We examine the relative difference between using speech for input compared to traditional pen input on a PDA. We found that speech is more direct and our participants can use their conversational speech for computer input. In doing so, we reduce the manual input needed to operate a PDA while engaged in a calendaring conversation.

© All rights reserved Lyons et al. and/or ACM Press



Paradiso, Joseph A. and Starner, Thad (2005): Energy scavenging for mobile and wireless electronics. In IEEE Pervasive Computing, 4 (1) pp. 18-27. Available online

2004

Lyons, Kent, Starner, Thad, Plaisted, Daniel, Fusia, James, Lyons, Amanda, Drew, Aaron and Looney, E. W. (2004): Twiddler typing: one-handed chording text entry for mobile phones. In: Dykstra-Erickson, Elizabeth and Tscheligi, Manfred (eds.) Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems April 24-29, 2004, Vienna, Austria. pp. 671-678. Available online

An experienced user of the Twiddler, a one-handed chording keyboard, averages speeds of 60 words per minute with letter-by-letter typing of standard test phrases. This fast typing rate coupled with the Twiddler's 3x4 button design, similar to that of a standard mobile telephone, makes it a potential alternative to multi-tap for text entry on mobile phones. Despite this similarity, there is very little data on the Twiddler's performance and learnability. We present a longitudinal study of novice users' learning rates on the Twiddler. Ten participants typed for 20 sessions using two different methods. Each session is composed of 20 minutes of typing with multi-tap and 20 minutes of one-handed chording on the Twiddler. We found that users initially have a faster average typing rate with multi-tap; however, after four sessions the difference becomes negligible, and by the eighth session participants type faster with chording on the Twiddler. Furthermore, after 20 sessions typing rates for the Twiddler are still increasing.

© All rights reserved Lyons et al. and/or ACM Press


Lyons, Kent, Skeels, Christopher, Starner, Thad, Snoeck, Cornelis M., Wong, Benjamin A. and Ashbrook, Daniel (2004): Augmenting conversations using dual-purpose speech. In: Proceedings of the 2004 ACM Symposium on User Interface Software and Technology 2004. pp. 237-246. Available online

In this paper, we explore the concept of dual-purpose speech: speech that is socially appropriate in the context of a human-to-human conversation which also provides meaningful input to a computer. We motivate the use of dual-purpose speech and explore issues of privacy and technological challenges related to mobile speech recognition. We present three applications that utilize dual-purpose speech to assist a user in conversational tasks: the Calendar Navigator Agent, DialogTabs, and Speech Courier. The Calendar Navigator Agent navigates a user's calendar based on socially appropriate speech used while scheduling appointments. DialogTabs allows a user to postpone cognitive processing of conversational material by providing short-term capture of transient information. Finally, Speech Courier allows asynchronous delivery of relevant conversational information to a third party.

© All rights reserved Lyons et al. and/or ACM Press


Lukowicz, Paul, Ward, Jamie A., Junker, Holger, Stäger, Mathias, Tröster, Gerhard, Atrash, Amin and Starner, Thad (2004): Recognizing Workshop Activity Using Body Worn Microphones and Accelerometers. In: Ferscha, Alois and Mattern, Friedemann (eds.) PERVASIVE 2004 - Pervasive Computing, Second International Conference April 21-23, 2004, Vienna, Austria. pp. 18-32. Available online

2003


Ashbrook, Daniel and Starner, Thad (2003): Using GPS to learn significant locations and predict movement across multiple users. In Personal and Ubiquitous Computing, 7 (5) pp. 275-286. Available online


Westeyn, Tracy L., Brashear, Helene, Atrash, Amin and Starner, Thad (2003): Georgia Tech gesture toolkit: supporting experiments in gesture recognition. In: Oviatt, Sharon L., Darrell, Trevor, Maybury, Mark T. and Wahlster, Wolfgang (eds.) Proceedings of the 5th International Conference on Multimodal Interfaces - ICMI 2003 November 5-7, 2003, Vancouver, British Columbia, Canada. pp. 85-92. Available online

Gesture recognition is becoming a more common interaction tool in the fields of ubiquitous and wearable computing. Designing a system to perform gesture recognition, however, can be a cumbersome task. Hidden Markov models (HMMs), a pattern recognition technique commonly used in speech recognition, can be used for recognizing certain classes of gestures. Existing HMM toolkits for speech recognition can be adapted to perform gesture recognition, but doing so requires significant knowledge of the speech recognition literature and its relation to gesture recognition. This paper introduces the Georgia Tech Gesture Toolkit GT²k, which leverages Cambridge University's speech recognition toolkit, HTK, to provide tools that support gesture recognition research. GT²k provides capabilities for training models and allows for both real-time and off-line recognition. This paper presents four ongoing projects that utilize the toolkit in a variety of domains.

© All rights reserved Westeyn et al. and/or their publisher
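As a rough illustration of the HMM scoring such toolkits perform, the sketch below classifies a discrete observation sequence by forward-algorithm log-likelihood. GT²k itself wraps HTK's continuous-density HMMs and training tools; this toy version only shows the underlying idea, and the model dictionary is hypothetical.

import numpy as np

def log_forward(pi, A, B, obs):
    """Forward algorithm in log space: log P(obs | HMM with pi, A, B)."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        # alpha'_j = logsumexp_i(alpha_i + log A_ij) + log B_j(o)
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) + np.log(B[:, o])
    return np.logaddexp.reduce(alpha)

def classify_gesture(models, obs):
    """Pick the gesture whose HMM gives the sequence the highest likelihood."""
    return max(models, key=lambda name: log_forward(*models[name], obs))

# Usage sketch: models = {"wave": (pi_w, A_w, B_w), "circle": (pi_c, A_c, B_c)}
# classify_gesture(models, quantized_sensor_readings)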

2002

Starner, Thad (2002): Web Technologies - Thick Clients for Personal Wireless Devices. In IEEE Computer, 35 (1) pp. 133-135. Available online


Krum, David M., Omoteso, Olugbenga, Ribarsky, William, Starner, Thad and Hodges, Larry F. (2002): Evaluation of a Multimodal Interface for 3D Terrain Visualization. In: IEEE Visualization 2002.

2000

Starner, Thad, Leibe, Bastian, Singletary, Brad and Pair, Jarrell (2000): MIND-WARPING: Towards Creating a Compelling Collaborative Augmented Reality Game. In: Lieberman, Henry (ed.) International Conference on Intelligent User Interfaces 2000 January 9-12, 2000, New Orleans, Louisiana, USA. pp. 256-259. Available online

Computer gaming offers a unique test-bed and market for advanced concepts in computer science, such as Human Computer Interaction (HCI), computer-supported collaborative work (CSCW), intelligent agents, graphics, and sensing technology. In addition, computer gaming is especially well-suited for explorations in the relatively young fields of wearable computing and augmented reality (AR). This paper presents a developing multi-player augmented reality game, patterned as a cross between a martial arts fighting game and an agent controller, as implemented using the Wearable Augmented Reality for Personal, Intelligent, and Networked Gaming (WARPING) system. Through interactions based on gesture, voice, and head movement input and audio and graphical output, the WARPING system demonstrates how computer vision techniques can be exploited for advanced, intelligent interfaces.

© All rights reserved Starner et al. and/or ACM Press


Leibe, Bastian, Starner, Thad, Ribarsky, William, Wartell, Zachary, Krum, David M., Singletary, Brad and Hodges, Larry F. (2000): The Perceptive Workbench: Toward Spontaneous and Natural Interaction in Semi-immersive Virtual Environments. In: VR 2000. pp. 13-20. Available online


Leibe, Bastian, Starner, Thad, Ribarsky, William, Wartell, Zachary, Krum, David M., Weeks, Justin, Singletary, Brad and Hodges, Larry F. (2000): Toward Spontaneous Interaction with the Perceptive Workbench. In IEEE Computer Graphics and Applications, 20 (6) pp. 54-65. Available online


Starner, Thad, Auxier, Jake, Ashbrook, Daniel and Gandy, Maribeth (2000): The gesture pendant: a self-illuminating, wearable, infrared computer vision system for home automation control and medical monitoring. In: Proceedings of the 4th IEEE International Symposium on Wearable Computers October 16-17, 2000, Atlanta, USA. pp. 87-94. Available online

In this paper we present a wearable device for control of home automation systems via hand gestures. This solution has many advantages over traditional home automation interfaces in that it can be used by those with loss of vision, motor skills, and mobility. By combining other sources of context with the pendant we can reduce the number and complexity of gestures while maintaining functionality. As users input gestures, the system can also analyze their movements for pathological tremors. This information can then be used for medical diagnosis, therapy, and emergency services. Currently, the Gesture Pendant can recognize control gestures with an accuracy of 95% and user-defined gestures with an accuracy of 97%. It can detect tremors above 2 Hz within +/- 0.1 Hz.

© All rights reserved Starner et al. and/or their publisher

 Cited in the following chapter:

Design 4 All: [/encyclopedia/design_4_all.html]

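The quoted +/- 0.1 Hz tremor resolution implies roughly a ten-second analysis window, since FFT bin spacing is fs/N. The sketch below works through that calculation; the sample rate and signal are hypothetical, and this is an illustrative reconstruction, not the Gesture Pendant's actual code.

import numpy as np

def tremor_frequency(signal, fs):
    """Dominant frequency above 2 Hz in a 1-D motion signal."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = freqs >= 2.0                    # pathological-tremor band
    return freqs[band][np.argmax(spectrum[band])]

fs = 60.0                                  # assumed sample rate (Hz)
t = np.arange(0, 10, 1 / fs)               # 10 s window -> 60/600 = 0.1 Hz bins
synthetic = np.sin(2 * np.pi * 4.3 * t)    # synthetic 4.3 Hz tremor
print(tremor_frequency(synthetic, fs))     # ~4.3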
1999

Billinghurst, Mark and Starner, Thad (1999): Wearable Devices: New Ways to Manage Information. In IEEE Computer, 32 (1) pp. 57-64.

1997

Starner, Thad, Weaver, Joshua and Pentland, Alex (1997): A Wearable Computer-Based American Sign Language Recogniser. In Personal and Ubiquitous Computing, 1 (3). Available online


Starner, Thad, Mann, Steve, Rhodes, Bradley J., Levine, Jeffrey, Healey, Jennifer, Kirsch, Dana, Picard, Rosalind W. and Pentland, Alex (1997): Augmented Reality Through Wearable Computing. In Presence: Teleoperators and Virtual Environments, 6 (4) pp. 386-398.

1992

Friedmann, Martin, Starner, Thad and Pentland, Alex (1992): Synchronization in Virtual Realities. In Presence: Teleoperators and Virtual Environments, 1 (1) pp. 139-144.


Friedmann, Martin, Starner, Thad and Pentland, Alex (1992): Device Synchronization Using an Optimal Linear Filter. In: SI3D 1992. pp. 57-62. Available online


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/thad_starner.html