Number of co-authors: 11
Most frequent co-authors: Man Huang, Larry S. Davis, Benjamin B. Bederson
Tsung-Hsiang Chang's 3 most productive colleagues by number of publications: Benjamin B. Bederson (70), Robert C. Miller (42), Yang Li (30)
Publications by Tsung-Hsiang Chang (bibliography)
Chang, Tsung-Hsiang and Li, Yang (2011): Deep shot: a framework for migrating tasks across devices using mobile phone cameras. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 2163-2172. Available online
A user task often spans multiple heterogeneous devices, e.g., working on a PC in the office and continuing the work on a laptop or a mobile phone while commuting on a shuttle. However, there is a lack of support for users to easily migrate their tasks across devices. To address this problem, we created Deep Shot, a framework for capturing the user's work state that is needed for a task (e.g., the specific part of a webpage being viewed) and resuming it on a different device. In particular, Deep Shot supports two novel and intuitive interaction techniques, deep shooting and deep posting, for pulling and pushing work states, respectively, using a mobile phone camera. In addition, Deep Shot provides a concise API for developers to leverage its services and make their application states migratable. We demonstrated that Deep Shot can be used to support a range of everyday tasks migrating across devices. An evaluation consisting of a series of experiments showed that our framework and techniques are feasible.
© All rights reserved Chang and Li and/or their publisher
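The abstract describes Deep Shot's "pull" (deep shooting) and "push" (deep posting) of work states but does not show its API. As a hedged illustration of the idea only — every name below is invented for this sketch and is not the actual Deep Shot API — an application can expose its resumable work state as a plain dictionary, which a shooting step serializes and a posting step restores on another device:

```python
# Hypothetical sketch of Deep Shot's pull/push idea. All names here are
# invented for illustration; this is not the real Deep Shot API.
import json


class WebViewer:
    """A toy 'application' whose work state is a URL plus scroll offset."""

    def __init__(self, url="about:blank", scroll=0):
        self.url, self.scroll = url, scroll

    def export_state(self):
        # The app serializes just enough state to resume the task elsewhere.
        return {"app": "WebViewer", "url": self.url, "scroll": self.scroll}

    @classmethod
    def resume(cls, state):
        # "Deep posting": rebuild the work state on the target device.
        assert state["app"] == "WebViewer"
        return cls(url=state["url"], scroll=state["scroll"])


def deep_shoot(app):
    # "Deep shooting": capture the state; in Deep Shot this step is
    # triggered by pointing a mobile phone camera at the screen.
    return json.dumps(app.export_state())


def deep_post(payload):
    return WebViewer.resume(json.loads(payload))
```

In this sketch a session round-trips: `deep_post(deep_shoot(viewer))` yields a viewer showing the same page at the same scroll position on the target device.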
Yeh, Tom, Chang, Tsung-Hsiang, Xie, Bo, Walsh, Greg, Watkins, Ivan, Wongsuphasawat, Krist, Huang, Man, Davis, Larry S. and Bederson, Benjamin B. (2011): Creating contextual help for GUIs using screenshots. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 145-154. Available online
Contextual help is effective for learning how to use GUIs by showing instructions and highlights on the actual interface rather than in a separate viewer. However, end-users and third-party tech support typically cannot create contextual help to assist other users because it requires programming skill and source code access. We present a creation tool for contextual help that allows users to apply common computer skills-taking screenshots and writing simple scripts. We perform pixel analysis on screenshots to make this tool applicable to a wide range of applications and platforms without source code access. We evaluated the tool's usability with three groups of participants: developers, instructors, and tech support. We further validated the applicability of our tool with 60 real tasks supported by the tech support of a university campus.
© All rights reserved Yeh et al. and/or ACM Press
Chang, Tsung-Hsiang, Yeh, Tom and Miller, Robert C. (2011): Associating the visual representation of user interfaces with their internal structures and metadata. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 245-256. Available online
Pixel-based methods are emerging as a new and promising way to develop new interaction techniques on top of existing user interfaces. However, in order to maintain platform independence, other available low-level information about GUI widgets, such as accessibility metadata, was neglected intentionally. In this paper, we present a hybrid framework, PAX, which associates the visual representation of user interfaces (i.e. the pixels) and their internal hierarchical metadata (i.e. the content, role, and value). We identify challenges to building such a framework. We also develop and evaluate two new algorithms for detecting text at arbitrary places on the screen, and for segmenting a text image into individual word blobs. Finally, we validate our framework in implementations of three applications. We enhance an existing pixel-based system, Sikuli Script, and preserve the readability of its script code at the same time. Further, we create two novel applications, Screen Search and Screen Copy, to demonstrate how PAX can be applied to development of desktop-level interactive systems.
© All rights reserved Chang et al. and/or ACM Press
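The abstract mentions an algorithm for segmenting a text image into individual word blobs without giving its details. The underlying idea can be sketched with a vertical-projection heuristic — a common baseline, not necessarily PAX's actual algorithm: sum the ink in each pixel column of a text line, then split wherever a run of empty columns is wide enough to be a word gap.

```python
def segment_words(ink_per_column, min_gap=3):
    """Split a text line into word spans using its vertical projection.

    ink_per_column: number of dark pixels in each column of the line image.
    Returns (start, end) column index pairs, end exclusive, one per word.
    A gap of min_gap or more empty columns is treated as a word boundary;
    narrower gaps are assumed to fall between letters of the same word.
    """
    words, start, gap = [], None, 0
    for i, ink in enumerate(ink_per_column):
        if ink > 0:
            if start is None:
                start = i                       # a new blob begins
            elif gap >= min_gap:
                words.append((start, i - gap))  # wide gap closed the word
                start = i
            gap = 0
        elif start is not None:
            gap += 1                            # count empty columns after ink
    if start is not None:
        words.append((start, len(ink_per_column) - gap))
    return words
```

For example, a projection of `[2, 3, 0, 0, 0, 0, 1, 2]` splits into two word spans, while a single empty column inside `[1, 0, 1]` stays within one word.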
Chang, Tsung-Hsiang (2011): Using graphical representation of user interfaces as visual references. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 27-30. Available online
Many user interfaces use indirect references to identify specific objects and devices. My thesis investigates using graphical representations of user interfaces (i.e. screenshots) as direct visual references to support various kinds of applications. Sikuli Script enables users to programmatically control GUIs without the support from the underlying applications. Sikuli Test lets GUI developers and testers create test scripts without coding. Deep Shot introduces a framework and interaction techniques to migrate work states across heterogeneous devices in one action, taking a picture. In addition to these pure pixel-based systems, PAX associates the pixel representation with the internal structures and metadata of the user interface. Based on these building blocks, we propose to develop a visual history system that enables users to search and browse what they have seen on their computer screens. We outline some interesting use cases and discuss the challenges in this ongoing work.
© All rights reserved Chang and/or ACM Press
Chang, Tsung-Hsiang, Yeh, Tom and Miller, Robert C. (2010): GUI testing using computer vision. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 1535-1544. Available online
Testing a GUI's visual behavior typically requires human testers to interact with the GUI and to observe whether the expected results of interaction are presented. This paper presents a new approach to GUI testing using computer vision for testers to automate their tasks. Testers can write a visual test script that uses images to specify which GUI components to interact with and what visual feedback is to be observed. Testers can also generate visual test scripts by demonstration. By recording both input events and screen images, it is possible to extract the images of components interacted with and the visual feedback seen by the demonstrator, and generate a visual test script automatically. We show that a variety of GUI behavior can be tested using this approach. Also, we show how this approach can facilitate good testing practices such as unit testing, regression testing, and test-driven development.
© All rights reserved Chang et al. and/or their publisher
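The visual test scripts above locate GUI components by matching the pixels of a target image against the screen. As a hedged illustration of that core lookup — exact matching only, whereas the paper presumably uses more tolerant computer-vision matching — it can be sketched as naive 2D template matching over a grid of pixel values:

```python
def find_template(screen, template):
    """Return (row, col) of the first exact occurrence of `template`
    in `screen`, or None. Both arguments are 2D lists of pixel values."""
    sh, sw = len(screen), len(screen[0])
    th, tw = len(template), len(template[0])
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None


def click_image(screen, template):
    """One step of a visual test script: find the component's image and
    return the coordinates of its center, where a click would be sent."""
    hit = find_template(screen, template)
    if hit is None:
        raise LookupError("target image not found on screen")
    r, c = hit
    return (r + len(template) // 2, c + len(template[0]) // 2)
```

A test script is then a sequence of such image-driven steps, with assertions that the expected visual feedback (another template) appears afterwards.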
Yeh, Tom, Chang, Tsung-Hsiang and Miller, Robert C. (2009): Sikuli: using GUI screenshots for search and automation. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 183-192. Available online
We present Sikuli, a visual approach to search and automation of graphical user interfaces using screenshots. Sikuli allows users to take a screenshot of a GUI element (such as a toolbar button, icon, or dialog box) and query a help system using the screenshot instead of the element's name. Sikuli also provides a visual scripting API for automating GUI interactions, using screenshot patterns to direct mouse and keyboard events. We report a web-based user study showing that searching by screenshot is easy to learn and faster to specify than keywords. We also demonstrate several automation tasks suitable for visual scripting, such as map navigation and bus tracking, and show how visual scripting can improve interactive help systems previously proposed in the literature.
© All rights reserved Yeh et al. and/or their publisher
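Querying a help system by screenshot, as Sikuli does, can be sketched in a heavily simplified form — Sikuli's real retrieval uses computer-vision features tolerant of rendering differences, not the exact hashing shown here — by indexing help entries under a fingerprint of each GUI element's pixels:

```python
def fingerprint(patch):
    # Toy fingerprint: a hashable tuple of the patch's pixel rows.
    # This exact-match version is only an illustration; a real system
    # must tolerate scaling, themes, and anti-aliasing differences.
    return tuple(tuple(row) for row in patch)


class ScreenshotHelp:
    """Index help text by element screenshot; query with a screenshot."""

    def __init__(self):
        self._index = {}

    def add(self, patch, help_text):
        self._index[fingerprint(patch)] = help_text

    def search(self, patch):
        return self._index.get(fingerprint(patch), "no help found")
```

The user never needs to know the element's name: cropping the on-screen icon and passing it to `search` retrieves the associated documentation.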