Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology
UIST is the premier forum for innovations in developing human-computer interfaces. The symposium brings together user-interface researchers and practitioners with an interest in techniques, tools, and technology for constructing high-quality, innovative user interfaces.
The following articles are from "Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology":
Nakagaki, Ken and Kakehi, Yasuaki (2012): Needle user interface: a sewing interface using layered conductive fabrics. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 1-2. Available online
Embroidery is a creative manual activity practiced by many people for a living. Such a craft demands skill and knowledge, and as it is sometimes complicated and delicate, it can be difficult for beginners to learn. We propose a system, named the Needle User Interface, which enables sewers to record and share their needlework, and receive feedback. In particular, this system can detect the position and orientation of a needle being inserted into and removed from a textile. Moreover, this system can give visual, auditory, and haptic feedback to users in real time for directing their actions appropriately. In this paper, we describe the system design, the input system, and the feedback delivery mechanism.
Lieber, Thomas and Miller, Rob (2012): Programming with everybody: tightening the copy-modify-publish feedback loop. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 101-102. Available online
People write more code than they ever share online. They also copy and tweak code more often than they contribute their modifications back to the public. These situations can lead to widespread duplication of effort. However, the copy-modify-publish feedback loop which could solve the problem is inhibited by the effort required to publish code online. In this paper we present our preliminary, ongoing effort to create Ditty, a programming environment that attacks the problem by sharing changes immediately, making all code public by default. Ditty tracks the changes users make to code they find and exposes the modified versions alongside the original so that commonly-used derivatives can eventually become canonical. Our work will examine mechanical and social methods to consolidate global effort on common code snippets, and the effects of designing a programming interface that inspires a feeling of the whole world programming together.
Mi, Haipeng, Ishii, Kentaro, Ma, Lei, Laokulrat, Natsuda, Inami, Masahiko and Igarashi, Takeo (2012): Pebbles: an interactive configuration tool for indoor robot navigation. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 11-12. Available online
This study presents an interactive configuration tool that assists non-expert users in designing a specific navigation route for a mobile robot in an indoor environment. The user places small active markers, called pebbles, on the floor along the desired route in order to guide the robot to the destination. The active markers establish a navigation network by communicating with each other via IR beacons, and the robot follows the markers to reach the designated goal. During installation, the user receives effective feedback from LED indicators and voice prompts, so that they can immediately tell whether the navigation route is configured as expected. With this tool a novice user can easily customize a mobile robot for various indoor tasks.
Takeuchi, Yuta, Katakura, Hirotaka, Kamuro, Sho, Minamizawa, Kouta and Tachi, Susumu (2012): TouchCast: an on-line platform for creation and sharing of tactile content based on tactile copy & paste. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 13-14. Available online
We propose TouchCast, an on-line platform for the creation and sharing of tactile content based on Tactile Copy & Paste. User-Generated Tactile Content refers to tactile content that is created, shared, and appreciated by general Internet users. TouchCast enables users to create tactile content by applying tactile textures to existing online content (e.g., illustrations) and to share the created content over the network. The applied textures are scanned from real objects as audio signals, a technique we call Tactile Copy & Paste. In this study, we implement the system as a web browser add-on to enable the creation of User-Generated Tactile Content.
Cheng, Lung-Pan, Hsiao, Fang-I, Liu, Yen-Ting and Chen, Mike Y. (2012): iRotate grasp: automatic screen rotation based on grasp of mobile devices. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 15-16. Available online
Automatic screen rotation improves the viewing experience and usability of mobile devices, but current gravity-based approaches do not support postures such as lying on one side, and manual rotation switches require explicit user input. iRotate Grasp automatically rotates the screens of mobile devices to match users' viewing orientations based on how users are grasping the devices. Our insight is that users' grasps are consistent within each orientation but differ significantly between orientations. Our prototype embeds a total of 32 light sensors along the four sides and the back of an iPod Touch, and uses a support vector machine (SVM) to recognize grasps at 25 Hz. We collected usage data from six users under 54 different conditions: 1) grasping the device using the left, right, and both hands; 2) scrolling, zooming, and typing; 3) in portrait, landscape-left, and landscape-right orientations; and 4) while sitting and lying down on one side. Results show that our grasp-based approach is promising: our iRotate Grasp prototype could correctly rotate the screen 90.5% of the time when training and testing on different users.
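The classification step the abstract describes can be illustrated with a simplified stand-in. The paper trains an SVM on 32 light-sensor readings; the sketch below substitutes a minimal nearest-centroid classifier (a hypothetical simplification, not the authors' implementation) and invented 4-sensor toy data, purely to convey the idea of mapping a grasp's sensor vector to a screen orientation:

```python
# Simplified stand-in for grasp-based orientation recognition.
# The paper uses an SVM over 32 light sensors; a nearest-centroid
# classifier over toy 4-sensor vectors (all data invented) conveys
# the same sensor-vector -> orientation mapping.
import math

def train_centroids(samples):
    """samples: {orientation: [sensor_vector, ...]} -> per-class centroid."""
    centroids = {}
    for label, vecs in samples.items():
        n = len(vecs)
        centroids[label] = [sum(v[i] for v in vecs) / n
                            for i in range(len(vecs[0]))]
    return centroids

def classify(centroids, reading):
    """Return the orientation whose centroid is closest to the reading."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda lab: dist(centroids[lab], reading))

# Hypothetical training grasps: bright sensors are uncovered, dark ones
# are covered by the hand, so each orientation leaves a distinct pattern.
training = {
    "portrait":       [[0.9, 0.1, 0.8, 0.2], [0.8, 0.2, 0.9, 0.1]],
    "landscape-left": [[0.1, 0.9, 0.2, 0.8], [0.2, 0.8, 0.1, 0.9]],
}
model = train_centroids(training)
print(classify(model, [0.85, 0.15, 0.85, 0.15]))  # prints "portrait"
```

A real implementation would replace the centroid rule with an SVM trained on the full 32-sensor vectors and run it continuously at the sampling rate.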
Arisandi, Ryan, Takami, Yusuke, Otsuki, Mai, Kimura, Asako, Shibata, Fumihisa and Tamura, Hideyuki (2012): Enjoying virtual handcrafting with ToolDevice. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 17-18. Available online
ToolDevice is a set of devices developed to help users in spatial work such as layout design and three-dimensional (3D) modeling. It consists of three components: TweezersDevice, Knife/HammerDevice, and BrushDevice, which use hand tool metaphors to help users recognize each device's unique functions. We have developed a mixed reality (MR) 3D modeling system that imitates real-life woodworking using the TweezersDevice and the Knife/HammerDevice. In the system, users can pick up and move virtual objects with the TweezersDevice. Users can also cut and join virtual objects using the Knife/HammerDevice. By repeating these operations, users can build virtual wood models.
Takashima, Kazuki, Fujita, Kazuyuki, Itoh, Yuichi and Kitamura, Yoshifumi (2012): Elastic scroll for multi-focus interactions. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 19-20. Available online
This paper proposes a novel and efficient multi-focus scroll interface that consists of a two-step operation using a content distortion technique. The displayed content can be handled just like an elastic material that can be shrunk and stretched by the user's fingers. In the first operation, the user's dragging temporarily shows the result of the viewport transition of the scroll by elastically distorting the content. This operation allows the user to see both the newly obtained and the original focus in the viewport. Then, three types of simple gestures (scrolling, restoring, and zooming out) can be used to perform the second operation to obtain the demanded focus (or foci).
Dai, Xiaowei, Gu, Jiawei, Cao, Xiang, Colgate, J. Edward and Tan, Hong (2012): SlickFeel: sliding and clicking haptic feedback on a touchscreen. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 21-22. Available online
We present SlickFeel, a single haptic display setup that can deliver two distinct types of feedback to a finger on a touchscreen during typical operations of sliding and clicking. Sliding feedback enables the sliding finger to feel interactive objects on a touchscreen through variations in friction. Clicking feedback provides a key-click sensation for confirming a key or button click. Two scenarios have been developed to demonstrate the utility of the two haptic effects. In the first, simple button-click scenario, a user feels the positions of four buttons on a touchscreen by sliding a finger over them and feels a simulated key-click signal by pressing on any of the buttons. In the second scenario, the advantage of haptic feedback is demonstrated in a haptically-enhanced thumb-typing scenario. A user enters text on a touchscreen with two thumbs without having to monitor the thumbs' locations on the screen. By integrating SlickFeel with a Kindle Fire tablet, we show that it can be used with existing mobile touchscreen devices.
Sra, Misha, Lee, Austin, Pao, Sheng-Ying, Jiang, Gonglue and Ishii, Hiroshi (2012): Point and share: from paper to whiteboard. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 23-24. Available online
Traditional writing instruments have the potential to enable new forms of interaction and collaboration through digital enhancement. This work specifically enables the user to utilize pen and paper as input mechanisms for content to be displayed on a shared interactive whiteboard. We introduce a pen cap with an infrared LED, an actuator, and a switch. Pointing the pen cap at the whiteboard allows users to select and position a "canvas" on the whiteboard to display handwritten text, while the actuator enables resizing the canvas and the text. It is conceivable that anything one can write on paper, anywhere, could be displayed on an interactive whiteboard.
Lasecki, Walter, Wesley, Rachel, Kulkarni, Anand and Bigham, Jeffrey (2012): Speaking with the crowd. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 25-26. Available online
Automated systems are not yet able to engage in a robust dialogue with users due to the complexity and ambiguity of natural language. However, humans can easily converse with one another and maintain a shared history of past interactions. In this paper, we introduce Chorus, a system that enables real-time, two-way natural language conversation between an end user and a crowd acting as a single agent. Chorus is capable of maintaining a consistent, on-topic conversation with end users across multiple sessions, despite constituent individuals perpetually joining and leaving the crowd. This is enabled by a curated shared dialogue history. Even though crowd members are constantly providing input, we present users with a stream of dialogue that appears to come from a single conversational partner. Experiments demonstrate that dialogue with Chorus displays elements of conversational memory and interaction consistency. Workers were able to answer 84.6% of user queries correctly, demonstrating that crowd-powered communication interfaces can serve as a robust means of interacting with software systems.
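The crowd-as-single-agent idea can be sketched as a vote-threshold filter. The sketch below is a hypothetical illustration, not the Chorus implementation: workers propose candidate replies and vote on them, and a reply reaches the user only once it gathers a strict majority, so the user sees one coherent voice despite many contributors:

```python
# Hypothetical sketch of a crowd-as-one-agent reply filter (not the
# actual Chorus system): workers propose candidate replies and vote;
# a reply is released to the user only with majority agreement.
from collections import Counter

def select_reply(proposals, votes, quorum=0.5):
    """proposals: candidate reply strings from workers.
    votes: list of (worker_id, reply) pairs.
    Returns the top-voted proposal if its share of votes exceeds
    the quorum, else None (keep waiting for more crowd input)."""
    if not votes:
        return None
    tally = Counter(reply for _, reply in votes)
    reply, count = tally.most_common(1)[0]
    if reply in proposals and count / len(votes) > quorum:
        return reply
    return None

proposals = ["It opens at 9am.", "Try their website."]
votes = [("w1", "It opens at 9am."), ("w2", "It opens at 9am."),
         ("w3", "Try their website.")]
print(select_reply(proposals, votes))  # prints "It opens at 9am."
```

Accepted replies would then be appended to the curated shared dialogue history so that newly arriving workers can stay on topic.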
Gupta, Ankit (2012): Closing the loop between intentions and actions. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 27-30. Available online
In this document, I propose systems that aim to minimize the gap between intentions and the corresponding actions under different scenarios. The gap exists for many reasons, such as the subjective mapping between the two, a lack of resources to implement the action, or inherent noise in the physical processes. The proposed system observes the action and infers the intention behind it. The system then generates a refined action using this inference. The inferred intention and the refined action are then provided as feedback to the user, who can perform corrective actions or accept the refined action as the desired result. I demonstrate the design and implementation of such systems through five projects -- Image Deblurring, Tracking Block Model Assembly, Animating with Physical Proxies, What Affects Handwriting, and Spying on the Writer.
Zoran, Amit and Paradiso, Joseph (2012): The FreeD: a handheld digital milling device for craft and fabrication. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 3-4. Available online
We present an approach to combine digital fabrication and craft that is focused on a new fabrication experience. The FreeD is a hand-held, digitally controlled, milling device. It is guided and monitored by a computer while still preserving gestural freedom. The computer intervenes only when the milling bit approaches the 3D model, which was designed beforehand, either by slowing down the spindle's speed or by drawing back the shaft. The rest of the time it allows complete freedom, allowing the user to manipulate and shape the work in any creative way. We believe The FreeD will enable a designer to move in between the straight boundaries of established CAD systems and the free expression of handcraft.
Weir, Daryl (2012): Machine learning models for uncertain interaction. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 31-34. Available online
As interaction methods beyond the static mouse and keyboard setup of the desktop era -- such as touch, gesture sensing, and visual tracking -- become more common, existing interaction paradigms are no longer good enough. These new modalities have high uncertainty, and conventional interfaces are not designed to reflect this. Research has shown that modelling uncertainty can improve the quality of interaction with these systems. Machine learning offers a rich set of tools to make probabilistic inferences in uncertain systems -- this is the focus of my thesis work. In particular, I'm interested in making inferences at the sensor level and propagating uncertainty forward appropriately to applications. In this paper I describe a probabilistic model for touch interaction, and discuss how I intend to use the uncertainty in this model to improve typing accuracy on a soft keyboard. The model described here lays the groundwork for a rich framework for interaction in the presence of uncertainty, incorporating data from multiple sensors to make more accurate inferences about the goals of users, and allowing systems to adapt smoothly and appropriately to their context of use.
Matulic, Fabrice (2012): Towards document engineering on pen and touch-operated interactive tabletops. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 35-38. Available online
Touch interfaces have now become mainstream thanks to modern smartphones and tablets. However, there are still very few "productivity" applications, i.e. tools that support mundane but essential work, especially for large interactive surfaces such as digital tabletops. This work aims to partly fill the relative void in the area of document engineering by investigating what kind of intuitive and efficient tools can be provided to support the manipulation of documents on a digital workdesk, in particular the creation and editing of documents. The fundamental interaction model relies on bimanual pen and multitouch input, which was recently introduced to tabletops and enables richer interaction possibilities. The goal is ultimately to provide useful and highly accessible UIs for document-centric applications, whose design principles will hopefully pave the way from DTP towards DTTP (Digital Tabletop Publishing).
Montague, Kyle (2012): Interactions speak louder than words: shared user models and adaptive interfaces. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 39-42. Available online
Touch-screens are becoming increasingly ubiquitous. They have great appeal due to their capabilities to support new forms of human interaction, including their abilities to interpret rich gestural inputs, render flexible user interfaces and enable multi-user interactions. However, the technology creates new challenges and barriers for users with limited levels of vision and motor abilities. The PhD work described in this paper proposes a technique combining Shared User Models (SUM) and adaptive interfaces to improve the accessibility of touch-screen devices for people with low levels of vision and motor ability. SUM, built from an individual's interaction data across multiple applications and devices, is used to infer new knowledge of their abilities and characteristics, without the need for continuous calibration exercises or user configurations. This approach has been realized through the development of an open source software framework to support the creation of applications that make use of SUM to adapt interfaces that match the needs of individual users.
Laviole, Jeremy and Hachet, Martin (2012): Spatial augmented reality to enhance physical artistic creation. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 43-46. Available online
Spatial augmented reality (SAR) promises the integration of digital information into the real (physical) world through projection. In this doctoral symposium paper, I propose different tools to speed up or ease drawing by projecting photos, virtual construction lines, and interactive 3D scenes. After describing the tools, I explain some future challenges to explore, such as the creation of tools that help create drawings that are "difficult" for a human being to achieve but easy for a computer. Furthermore, I propose some insights for the creation of digital games and programs that can take full advantage of physical drawings.
Wu, Leslie (2012): Medical operating documents: dynamic checklists improve crisis attention. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 47-50. Available online
The attentional aspects of crisis computing -- supporting highly trained teams as they respond to real-life emergencies -- have been underexplored in the user interface community. My research investigates the development of interactive software systems that support crisis teams, with an eye towards intelligently managing attention. In this paper, I briefly describe MDOCS, a Medical operating DOCuments System built for time-critical interaction. MDOCS is a multi-user, multi-surface software system that implements dynamic checklists and interactive cognitive aids written to support medical crisis teams. I present the results of a study that evaluates the deployment of MDOCS in a realistic, mannequin-based medical simulator used by anesthesiologists. I propose controlled laboratory experiments that evaluate the feasibility and effectiveness of our design principles and attentional interaction techniques.
Kaye, Joseph 'Jofish' (2012): Sawtooth planar waves for haptic feedback. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 5-6. Available online
Current touchscreen technology does not provide adequate haptic feedback to the user. Most haptic feedback solutions for touchscreens involve either a) deforming the surface layers of the screen itself or b) placing actuators under the screen to vibrate it. This means that we have only limited control over where on the screen the feedback feels like it is coming from, and that we are limited to feedback that feels like movement up and down, orthogonal to the screen. In this work I demonstrate a novel technique for haptic feedback: sawtooth planar waves. In a series of papers, Canny and Reznik showed that sawtooth planar waves could be used for object manipulation. Here that technique is applied to haptic feedback. By varying the input waves from one to four actuators, it is possible to provide feelings of motion in any planar direction to a finger at one point on the screen while providing a different sensation, or none at all, to fingers placed at several other points on the screen.
Kumar, Ranjitha (2012): Data-driven interactions for web design. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 51-54. Available online
This thesis describes how data-driven approaches to Web design problems can enable useful interactions for designers. It presents three machine learning applications which enable new interaction mechanisms for Web design: rapid retargeting between page designs, scalable design search, and generative probabilistic model induction to support design interactions cast as probabilistic inference. It also presents a scalable architecture for efficient data-mining on Web designs, which supports these three applications.
Puzis, Yury (2012): An interface agent for non-visual, accessible web automation. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 55-58. Available online
The Web is far less usable and accessible for users with visual impairments than it is for sighted people. Web automation has the potential to bridge the divide between the ways visually impaired and sighted people access the Web, and to enable visually impaired users to breeze through Web browsing tasks that were previously slow, hard, or even impossible. Typical automation interfaces require that the user record a macro, a useful sequence of browsing steps, so that these steps can be replayed in the future. In this paper, I present a high-level overview of an approach that enables users to quickly find relevant information on a webpage and to automate browsing without recording macros. This approach is potentially useful for both visually impaired and sighted users.
Savage, Saiph, Forbes, Angus, Savage, Rodrigo, Höllerer, Tobias and Chávez, Norma Elva (2012): Directed social queries with transparent user models. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 59-60. Available online
The friend lists of many social network users can be very large. This creates challenges when users seek to direct their social interactions to friends who share a particular interest. We present a self-organizing online tool that incorporates ideas from user modeling and data visualization to let a person quickly identify which friends best match a social query, enabling precise and efficient directed social interactions. To cover the different modalities in which our tool might be used, we introduce two different interactive visualizations. One view enables a human-in-the-loop approach for result analysis and verification; the second view incorporates location, social affiliations, and "personality" data, allowing the user to quickly consider different social and spatial factors when directing social queries. We report on a qualitative analysis, which indicates that transparency increases the effectiveness of the system. This work contributes a novel method for exploring online friends.
Gooch, Daniel and Watts, Leon (2012): sleepyWhispers: sharing goodnights within distant relationships. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 61-62. Available online
There is a growing body of work in HCI on the design of communication technologies to help support lovers in long-distance relationships. We build upon this work by presenting an exploratory study of a prototype device intended to allow distant lovers to share goodnight messages. Our work distinguishes itself by basing distance communication metaphors on elements of familiar, simple co-located behaviours. We argue that voice remains an under-utilised medium in the design of interactive technologies for long-distance couples. Through exploring the results of a 2-month case study, we present some of the unique challenges that using voice entails.
Hamidi, Foad, Baljko, Melanie, Moakler, Alexander and Gadot, Assaf (2012): Synchrum: a tangible interface for rhythmic collaboration. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 63-64. Available online
Synchrum is a tangible interface, inspired by the Tibetan prayer wheel, for audience participation and collaboration during digital performance. It engages audience members in effortful interaction, in which they have to rotate the device in accord with a given rotation speed. We used Synchrum in a video installation and report our observations.
Han, Seungju, Kim, Jung-Bae and Kim, James D. K. (2012): Follow-me!: conducting a virtual concert. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 65-66. Available online
In this paper, we present a real-time continuous gesture recognition system for conducting a virtual concert. Our system allows the user control over beat, by conducting four different beat-pattern gestures; tempo, by making faster or slower gestures; volume, by making larger or smaller gestures; and instrument emphasis, by directing the gestures towards specific areas of the orchestra on a large display. A recognition accuracy of up to 95% was achieved for the conducting gestures (beat, tempo, and volume).
Cabral, Diogo and Correia, Nuno (2012): VideoInk: a pen-based approach for video editing. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 67-68. Available online
Owing to the growth of video sharing, video manipulation is important, yet it remains a hard task. To improve it, this work proposes a pen-based approach called VideoInk. The concept exploits the painting metaphor, replacing digital ink with video frames. The method allows the user to paint video content onto a canvas, which works as a two-dimensional timeline. The approach includes transition effects and zoom features based on pen pressure. A Tablet PC prototype implementing the concept was also developed.
Wongsuphasawat, Kanit, Gamburg, Alex and Moraveji, Neema (2012): You can't force calm: designing and evaluating respiratory regulating interfaces for calming technology. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 69-70. Available online
Interactive systems are increasingly being used to explicitly support change in the user's psychophysiological state and behavior. One trend in this vein is systems that support calm breathing habits. We designed and evaluated techniques to support respiratory regulation to reduce stress and increase parasympathetic tone. Our study revealed that auditory guidance was more effective than visual at creating self-reported calm. We attribute this to the users' ability to effectively map sound to respiration, thereby reducing cognitive load and mental exertion. Interestingly, we found that visual guidance led to more respiratory change but less subjective calm. Thus, motivating users to exert physical or mental efforts may counter the calming effects of slow breathing. Designers of calming technologies must acknowledge the discrepancy between mechanical slow breathing and experiential calm in designing future systems.
Manabe, Hiroyuki and Fukumoto, Masaaki (2012): Touch sensing by partial shadowing of PV module. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 7-8. Available online
A novel touch sensing technique is proposed. By utilizing partial shadowing of a photovoltaic (PV) module, touch events are accurately detected. Since the PV module also works as a power source, a battery-less touch sensing device is easily realized. We developed a wireless touch commander consisting of six PV modules, so that the user can provide input using both touch and swipe actions.
Wang, Danli, Zhang, Yang, Gu, Tianyuan, He, Liang and Wang, Hongan (2012): E-Block: a tangible programming tool for children. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 71-72. Available online
E-Block is a tangible programming tool for children aged 5 to 9 that gives them a preliminary understanding of programming. Children can write programs to play a maze game by placing the programming blocks in E-Block. The two stages of a general programming process, programming and running, are both embodied in E-Block. We realized E-Block using wireless and infrared technology and provided feedback on both the screen and the programming blocks. The results of a preliminary user study showed that E-Block is attractive to children and easy to learn and use.
Kim, Sunjun and Lee, Geehyuk (2012): Restorable backspace. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 73-74. Available online
This paper presents Restorable Backspace, an input helper for correcting mistyping. It stores characters deleted by backspace keystrokes and restores them in the retyping phase. We developed a restoration algorithm that compares the deleted characters with the retyped characters and makes a suggestion while the user retypes. In a pilot study, we observed the algorithm working as expected in most cases. All participants in the pilot study expressed satisfaction with the concept of Restorable Backspace.
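The restoration idea can be sketched as follows. This is a hypothetical reconstruction, not the authors' algorithm: characters removed by backspace are buffered, and when the user's retyped text matches a prefix of the deleted run, the remainder is offered as a completion suggestion:

```python
# Hypothetical sketch of a restorable-backspace helper (not the
# paper's actual algorithm): buffer backspace-deleted characters,
# then suggest the rest of the run once retyping matches a prefix.
def delete_chars(text, n):
    """Simulate n backspace presses; return (remaining_text, deleted_run)."""
    n = min(n, len(text))
    if n == 0:
        return text, ""
    return text[:-n], text[-n:]

def suggest_restore(deleted_run, retyped):
    """If the retyped text is a prefix of the deleted run, suggest
    the remaining characters; otherwise suggest nothing."""
    if retyped and deleted_run.startswith(retyped):
        return deleted_run[len(retyped):]
    return ""

text, deleted = delete_chars("restorable backspace", 9)
print(text)                              # prints "restorable "
print(suggest_restore(deleted, "back"))  # prints "space"
```

A real editor integration would also track edits made between the deletion and the retyping, which is where most of the interesting corner cases live.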
Miyauchi, Masato, Kimura, Takashi and Nojima, Takuya (2012): Development of a non-contact tongue-motion acquisition system. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 75-76. Available online
We present a new tongue detection system called SITA, which comprises only a Kinect device and a conventional laptop computer. In contrast with other tongue-based devices, the SITA system does not require the subject to wear a device. This avoids the issue of oral hygiene and removes the risk of swallowing a device inserted into the mouth. In this paper, we introduce the SITA system and an application. To evaluate the system, a user test was conducted. The results indicate that the system can detect the tongue position in real time. Moreover, the system shows potential for tongue training.
Solovey, Erin, Jackson, Kim and Cummings, Mary (2012): Collision avoidance interface for safe piloting of unmanned vehicles using a mobile device. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 77-78. Available online
Autonomous robots and vehicles can perform tasks that are unsafe or undesirable for humans to do themselves, such as investigate safety in nuclear reactors or assess structural damage to a building or bridge after an earthquake. In addition, improvements in autonomous modes of such vehicles are making it easier for minimally-trained individuals to operate the vehicles. As the autonomous capabilities advance, the user's role shifts from a direct teleoperator to a supervisory control role. Since the human operator is often better suited to make decisions in uncertain situations, it is important for the human operator to have awareness of the environment in which the vehicle is operating in order to prevent collisions and damage to the vehicle as well as the structures and people in the vicinity. In this paper, we present the Collision and Obstacle Detection and Alerting (CODA) display, a novel interface to enable safe piloting of a Micro Aerial Vehicle with a mobile device in real-world settings.
Environments that adapt to the user have long been sought, particularly in the area of Human-Computer Interfaces, and environments able to recognize users' emotions and react accordingly are of central interest in Affective Computing. This work presents an adaptable 3D video game, Lost in the Dark: Emotion Adaption, which uses the player's emotions as input to alter and adjust the gaming environment. To achieve this, we used an interface capable of reading brain waves, facial expressions, and head motion: an Emotiv® EPOC headset. For our purposes, we fed emotional states such as meditation, excitement, and engagement into the game, altering the lighting, music, gates, colors, and other elements that appeal to the user's emotional state. With this, we close the loop of taking emotions as input, adjusting the system accordingly, and in turn eliciting emotions.
Popov, Igor, schraefel, m.c., Hall, Wendy and Shadbolt, Nigel (2012): mashpoint: browsing the web along structured lines. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 81-82. Available online
Large numbers of Web sites support rich data-centric features for exploring and interacting with data. In this paper we present mashpoint, a framework that allows distributed data-powered Web applications to be linked based on similarities between the entities in their data. Linking applications in this way lets users carry selections of data from one application into another. This style of browsing allows average Web users to perform complex queries and data exploration across multiple applications. We additionally use this concept to surface structured information to users in Web pages. In this paper we present the concept and our initial prototype.
Huang, Shih-Wen, Tu, Pei-Fen, Amamzadeh, Mohammad and Fu, Wai-Tat (2012): Review explorer: an innovative interface for displaying and collecting categorized review information. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 83-84. Available online
Review Explorer is an interface that uses categorized information to help users explore large numbers of online reviews more easily. It allows users to sort entities (e.g. restaurants, products) by their ratings on different aspects (e.g. food for restaurants) and highlights sentences related to the selected aspect. Existing interfaces that summarize aspect information in reviews suffer from erroneous system predictions. To address this problem, Review Explorer performs real-time aspect sentiment analysis while a reviewer is composing a review and provides an interface for the reviewer to easily correct errors. This novel design motivates reviewers to provide corrected aspect sentiment labels, which enables our system to present more accurate information than existing interfaces.
Dahl, Luke and Robaszkiewicz, Sébastien (2012): For novices playing music together, adding structural constraints leads to better music and may improve user experience. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 85-86. Available online
We investigate the effects of adding structure to musical interactions for novices. A simple instrument allows control of three musical parameters: pitch, timbre, and note density. Two users can play at once, and their actions are visible on a public display. We asked pairs of users to perform duets under two interaction conditions: unstructured, where users are free to play what they like, and structured, where users are directed to different areas of the musical parameter space by time-varying constraints indicated on the display. A control group played two duets without structure, while an experimental group played one duet with structure and a second without. By crowd-sourcing the ranking of recorded duets, we find that structure leads to musically better results. A post-experiment survey showed that the experimental group had a better experience during the second, unstructured duet than during the structured one.
Kitani, Kris, Horita, Kodai and Koike, Hideki (2012): BallCam!: dynamic view synthesis from spinning cameras. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 87-88. Available online
We are interested in generating novel video sequences from a ball's point of view for sports domains. Despite the challenge of extreme camera motion, we show that we can leverage the periodicity of spinning cameras to generate a stabilized ball point-of-view video. We present preliminary results of image stabilization and view synthesis from a single camera being hurled in the air at 600 RPM.
Yamakawa, Shumpei and Nojima, Takuya (2012): A proposal for a MMG-based hand gesture recognition method. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 89-90. Available online
We propose a novel hand-gesture recognition method based on mechanomyograms (MMGs). Skeletal muscles generate sounds specific to their activity; by recording and analyzing these sounds, MMGs provide a means to evaluate that activity. Previous research revealed that specific motions produce specific sounds, enabling human motion to be classified from MMGs. In that research, microphones and accelerometers were often used to record muscle sounds; however, environmental noise and human motion itself easily overwhelm such sensors. In this paper, we propose piezoelectric-based sensing of MMGs to improve robustness against environmental conditions. A preliminary evaluation shows that this method can classify several hand gestures with high accuracy under certain conditions.
Laviole, Jeremy and Hachet, Martin (2012): Spatial augmented reality for physical drawing. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 9-10. Available online
Spatial augmented reality (SAR) makes it possible to project virtual environments into the real world. In this demo, we present our SAR tools dedicated to the creation of physical drawings. These range from simple tools, such as the projection of virtual guidelines for tracing lines and curves, to more advanced techniques that enable stereoscopic drawing through the projection of a 3D scene. The demo shows how computer graphics tools can ease the act of drawing, and how they will enable new kinds of physical drawings.
Aihara, Noriyuki, Sato, Toshiki and Koike, Hideki (2012): Highly deformable interactive 3D surface display. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 91-92. Available online
In this research, we focus on the limited flexibility of display materials as one of the main causes of height constraints in deformable surfaces. We propose a method that not only exploits the material's flexibility but also allows for an increased variety of shapes and corresponding interaction possibilities. Using this method, our display design supports additional expansion via the protrusion of an air-pressure-controlled moldable display surface, using a residual cloth-excess method and a fixed airbag mount.
Fleer, David and Leichsenring, Christian (2012): MISO: a context-sensitive multimodal interface for smart objects based on hand gestures and finger snaps. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 93-94. Available online
We present an unobtrusive multimodal interface for smart objects (MISO) in an everyday indoor environment. MISO uses pointing for object selection and context-sensitive arm gestures for object control. Finger snaps are used to confirm object selections and to aid with gesture segmentation. Audio feedback is provided during the interaction. The use of a Kinect depth camera allows for a compact system and robustness in varying environments and lighting conditions at low cost.
Seko, Keiichi and Fukuchi, Kentaro (2012): A guidance technique for motion tracking with a handheld camera using auditory feedback. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 95-96. Available online
We introduce a novel guidance technique based on auditory feedback for a handheld video camera. Tracking a moving object with a handheld camera is a difficult task, especially when the camera operator follows the target, because it is hard to look through the viewfinder while following the target. The proposed technique provides auditory feedback via headphones, which assists the operator in keeping the target in sight. Two feedback sounds are introduced: three-dimensional (3D) audio and amplitude modulation (AM)-based sonification.
Lee, Joon Hyub, Bae, Seok-Hyung, Jung, Jinyung and Choi, Hayan (2012): Transparent display interaction without binocular parallax. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 97-98. Available online
Binocular parallax is a problem for any interaction system with a transparent display and objects behind it. We propose a quantitative measure, the Binocular Selectability Discriminant (BSD), that allows UI designers to predict users' ability to perform selection tasks in transparent display systems despite binocular parallax. We also propose a technique, Single-Distance Pseudo Transparency (SDPT), that aims to eliminate binocular parallax for on-screen interactions requiring precision. A mock-up study shows the technique's potential and directions for future investigation.
Lasecki, Walter, Lau, Tessa, He, Grant and Bigham, Jeffrey (2012): Crowd-based recognition of web interaction patterns. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 99-100. Available online
Web automation often involves users describing complex tasks to a system, with directives generally limited to low-level constituent actions like "click the search button." This level of description is unnatural and makes it difficult to generalize the task across websites. In this paper, we propose a system for automatically recognizing higher-level interaction patterns from users' completion of tasks, such as "searching for cat videos" or "replying to a post." We present PatFinder, a system that identifies these patterns using the input of crowd workers. We validate the system by generating data for 10 tasks, having 62 crowd workers label them, and automatically extracting 14 interaction patterns. Our results show that the number of patterns grows sublinearly with the number of tasks, suggesting that a small, finite set of patterns may suffice to describe the vast majority of tasks on the web.