Publication statistics

Pub. period: 1995-2012
Pub. count: 66
Number of co-authors: 92



Co-authors

Number of publications with 3 favourite co-authors:

Masahiko Inami: 15
Daisuke Sakamoto: 10
Kentaro Ishii: 6

 

 

Productive colleagues

Takeo Igarashi's 3 most productive colleagues in number of publications:

Elizabeth D. Mynatt: 71
W. Keith Edwards: 62
Terry Winograd: 59
 
 
 




Takeo Igarashi

Has also published under the name of:
"T. Igarashi"

Personal Homepage:
www-ui.is.s.u-tokyo.ac.jp/~takeo/


Publications by Takeo Igarashi (bibliography)

2012
 

Young, James E., Ishii, Kentaro, Igarashi, Takeo and Sharlin, Ehud (2012): Style by demonstration: teaching interactive movement style to robots. In: Proceedings of the 2012 International Conference on Intelligent User Interfaces 2012. pp. 41-50.

The style in which a robot moves, expressed through its gait or locomotion, can convey effective messages to people. For example, a robot could move aggressively in reaction to a person's actions, or alternatively react using a set of careful, submissive movements. Designing, implementing and programming robotic interfaces that react to users' actions with properly styled movements can be a difficult, daunting, and time-consuming technical task. On the other hand, most people can easily perform such stylistic tasks and movements, for example, through acting them out. Following this observation, we propose to enable people to use their existing teaching skills to directly demonstrate to robots, via in-situ acting, a desired style of interaction. In this paper we present an initial style-by-demonstration (SBD) proof-of-concept of our approach, allowing people to teach a robot specific, interactive locomotion styles by providing a demonstration. We present a broomstick-robot interface for directly demonstrating locomotion style to a collocated robot, and a design critique evaluation by experienced programmers that compares our SBD approach to traditional programming methods.

© All rights reserved Young et al. and/or ACM Press

 

Matsumura, Kohei, Sakamoto, Daisuke, Inami, Masahiko and Igarashi, Takeo (2012): Universal earphones: earphones with automatic side and shared use detection. In: Proceedings of the 2012 International Conference on Intelligent User Interfaces 2012. pp. 305-306.

We present universal earphones that use both a proximity sensor and a skin conductance sensor, and we demonstrate several implicit interaction techniques they achieve by automatically detecting the context of use. The universal earphones have two main features. The first is detecting which side of the head each earphone is on, so that the correct audio channel is delivered to either ear; the second is detecting shared use, in which case mixed stereo sound is delivered to both earphones. These features not only free users from having to check the left and right sides of the earphones, but also enable them to share stereo audio with other people.

© All rights reserved Matsumura et al. and/or ACM Press
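A minimal sketch of the routing logic the abstract describes, assuming a proximity reading per earphone (to detect whether it is worn) and a per-wearer skin-conductance signature (to detect shared use); all names and thresholds below are hypothetical, and the paper's automatic left/right channel swapping is omitted:

```python
from dataclasses import dataclass

@dataclass
class EarphoneReading:
    """Hypothetical readings from one earphone's sensors."""
    proximity: float      # 0..1, how close the earphone sits to skin
    conductance_id: int   # skin-conductance signature of the wearer

def route_audio(left: EarphoneReading, right: EarphoneReading,
                worn_threshold: float = 0.5) -> dict:
    """If the earphones sit on two different wearers, deliver a mixed
    stereo signal to both; otherwise deliver the normal channels to
    whichever earphones are worn."""
    left_worn = left.proximity > worn_threshold
    right_worn = right.proximity > worn_threshold
    shared = (left_worn and right_worn
              and left.conductance_id != right.conductance_id)
    if shared:
        return {"left": "L+R mix", "right": "L+R mix"}
    return {"left": "L channel" if left_worn else "off",
            "right": "R channel" if right_worn else "off"}

# Two different wearers detected: both earphones get the mix.
print(route_audio(EarphoneReading(0.9, 1), EarphoneReading(0.8, 2)))
```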

 

Wibowo, Amy, Sakamoto, Daisuke, Mitani, Jun and Igarashi, Takeo (2012): DressUp: a 3D interface for clothing design with a physical mannequin. In: Proceedings of the 6th International Conference on Tangible and Embedded Interaction 2012. pp. 99-102.

This paper introduces DressUp, a computerized system for designing dresses with 3D input using the form of the human body as a guide. It consists of a body-sized physical mannequin, a screen, and tangible prop tools for drawing in 3D on and around the mannequin. As the user draws, he/she modifies or creates pieces of digital cloth, which are displayed on a model of the mannequin on the screen. We explore the capacity of our 3D input tools to create a variety of dresses. We also describe observations gained from users designing actual physical garments with the system.

© All rights reserved Wibowo et al. and/or ACM Press

 

Lau, Manfred, Hirose, Masaki, Ohgawara, Akira, Mitani, Jun and Igarashi, Takeo (2012): Situated modeling: a shape-stamping interface with tangible primitives. In: Proceedings of the 6th International Conference on Tangible and Embedded Interaction 2012. pp. 275-282.

Existing 3D sketching methods typically allow the user to draw in empty space, which is imprecise and lacks tactile feedback. We introduce a shape-stamping interface where users can model with tangible 3D primitive shapes. Each of these shapes represents a copy or a fragment of the construction material. Instead of modeling in empty space, these shapes allow us to use the real-world environment and other existing objects as a tangible guide during 3D modeling. We call this approach Situated Modeling: users can create new real-sized 3D objects directly in 3D space while using the nearby existing objects as the ultimate reference. We also describe a two-handed shape-stamping technique for stamping with tactile feedback. We show a variety of do-it-yourself furniture and household products designed with our system, and perform a user study to compare our method with a related AR-based modeling system.

© All rights reserved Lau et al. and/or ACM Press

 

Kato, Jun, Sakamoto, Daisuke and Igarashi, Takeo (2012): Phybots: a toolkit for making robotic things. In: Proceedings of DIS12 Designing Interactive Systems 2012. pp. 248-257.

There are many toolkits for physical UIs, but most physical UI applications are not locomotive. When programmers want to make things move around in the environment, they face difficulties related to robotics. Toolkits for robot programming, unfortunately, are usually not as accessible as those for building physical UIs. To address this interdisciplinary issue, we propose Phybots, a toolkit that allows researchers and interaction designers to rapidly prototype applications with locomotive robotic things. The contributions of this research are the combination of a hardware setup, software API, its underlying architecture and a graphical runtime debug tool that supports the whole prototyping activity. This paper introduces the toolkit, applications and lessons learned from three user studies.

© All rights reserved Kato et al. and/or ACM Press

 

Allen, Jeffrey, Young, James E., Sakamoto, Daisuke and Igarashi, Takeo (2012): Style by demonstration for interactive robot motion. In: Proceedings of DIS12 Designing Interactive Systems 2012. pp. 592-601.

As robots continue to enter people's everyday spaces, we argue that it will be increasingly important to consider the robots' movement style as an integral component of their interaction design. That is, aspects of the robot's movement which are not directly related to the task at hand (e.g., picking up a ball) can have a strong impact on how people perceive that action (e.g., aggressively or hesitantly). We call these elements the movement style. We believe that perceptions of this kind of style will be highly dependent on the culture, group, or individual, and so people will need to have the ability to customize their robot. Therefore, in this work we use Style by Demonstration, a style-focused variant of the more traditional programming-by-demonstration technique, and present the Puppet Dancer system, an interface for constructing paired and interactive robotic dances. In this paper we detail the Puppet Dancer interface and interaction design, explain our new algorithms for teaching dance by demonstration, and present the results from a formal qualitative study.

© All rights reserved Allen et al. and/or ACM Press

 

Ribeiro, Andre and Igarashi, Takeo (2012): Sketch-editing games: human-machine communication, game theory and applications. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 287-298.

We study uncertainty in graphical-based interaction (with special attention to sketches). We argue that a comprehensive model for the problem must include the interaction participants (and their current beliefs), their possible actions and their past sketches. It is not yet clear how to frame and solve this problem while accounting for all of these elements. We suggest framing the problem as a game and solving it with a game-theoretical solution, which leads to a framework for the design of new two-way, sketch-based user interfaces. In particular, we use the framework to design a game that can progressively learn visual models of objects from user sketches, and use the models in real-world interactions. Instead of an abstract visual criterion, players in this game learn models to optimize interaction (the game's duration). This two-way sketching game addresses problems essential in emerging interfaces (such as learning and how to deal with interpretation errors). We review possible applications in robotic sketch-to-command, hand gesture recognition, media authoring and visual search, and evaluate two. Evaluations demonstrate how players improve performance with repeated play, and the influence of interaction aspects on learning.

© All rights reserved Ribeiro and Igarashi and/or ACM Press

 

Sugiura, Yuta, Inami, Masahiko and Igarashi, Takeo (2012): A thin stretchable interface for tangential force measurement. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 529-536.

We have developed a simple skin-like user interface that can be easily attached to curved as well as flat surfaces and used to measure the tangential force generated by pinching and dragging interactions. The interface consists of elastic fabric, such as a stocking or rubber membrane, and several photoreflectors, each comprising an IR LED and a phototransistor. The sensing method is based on our observation that photoreflectors can measure the ratio of expansion and contraction of a stocking from the changes in transmissivity of IR light passing through it. Since a stocking is thin, stretchable, and nearly transparent, it can be easily attached to various types of objects such as mobile devices, robots, and different parts of the body, as well as to various types of conventional pressure sensors, without altering the original shape of the object. It can also present natural haptic feedback in accordance with the amount of force exerted. A system using several such sensors can determine the direction of a two-dimensional force. A variety of example applications illustrated the utility of this sensing system.

© All rights reserved Sugiura et al. and/or ACM Press
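The two-dimensional force-direction estimate mentioned at the end of the abstract can be sketched as a weighted vector sum over stretch sensors, each measuring expansion along a known axis. This is a sketch under assumed calibration, not the paper's actual pipeline:

```python
import math

def force_direction(readings, axes):
    """Estimate the direction of a 2D tangential force.

    readings: signed expansion per photoreflector
              (positive = stretched, negative = compressed).
    axes:     unit vector (x, y) along which each sensor measures.
    Returns the force angle in degrees via a weighted vector sum.
    """
    fx = sum(r * ax for r, (ax, ay) in zip(readings, axes))
    fy = sum(r * ay for r, (ax, ay) in zip(readings, axes))
    return math.degrees(math.atan2(fy, fx))

# Two orthogonal sensors: strong stretch along +x, slight along +y.
print(force_direction([0.8, 0.2], [(1, 0), (0, 1)]))  # ~14 degrees
```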

 

Mi, Haipeng, Ishii, Kentaro, Ma, Lei, Laokulrat, Natsuda, Inami, Masahiko and Igarashi, Takeo (2012): Pebbles: an interactive configuration tool for indoor robot navigation. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 11-12.

This study presents an interactive configuration tool that assists non-expert users in designing a specific navigation route for a mobile robot in an indoor environment. The user places small active markers, called pebbles, on the floor along the desired route in order to guide the robot to the destination. The active markers establish a navigation network by communicating with each other via IR beacons, and the robot follows the markers to reach the designated goal. During installation, the user receives feedback from LED indicators and voice prompts, and can thus immediately tell whether the navigation route is configured as expected. With this tool a novice user may easily customize a mobile robot for various indoor tasks.

© All rights reserved Mi et al. and/or ACM Press
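The marker-following behavior can be thought of as path finding in a graph whose edges mean "this marker can see that marker's IR beacon". A minimal sketch (the marker ids and link map are invented for illustration):

```python
from collections import deque

def marker_route(ir_links, start, goal):
    """Shortest marker-to-marker route via breadth-first search.

    ir_links maps each marker id to the markers whose IR beacons it
    can see. Returns the full route the robot should follow, or None
    if the goal marker is unreachable.
    """
    queue, came_from = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in ir_links.get(node, ()):
            if nxt not in came_from:
                came_from[nxt] = node
                queue.append(nxt)
    return None

links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "kitchen"]}
print(marker_route(links, "A", "kitchen"))  # ['A', 'B', 'C', 'kitchen']
```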

2011
 

Hashimoto, Sunao, Ostanin, Andrei, Inami, Masahiko and Igarashi, Takeo (2011): Snappy: snapshot-based robot interaction for arranging objects. In: Proceedings of the 6th International Conference on Human Robot Interaction 2011. pp. 411-412.

A photograph is a very useful tool for describing configurations of real-world objects to others. People immediately understand various pieces of information such as "what is the target object" and "where is the target position" by looking at a photograph, even without verbal descriptions. Our goal was to leverage these features of photographs to enrich human-robot interactions. We propose to use photographs as a front-end between a human and a home robot system. We named this method "Snappy". The user takes a photo to remember the target in a real-world situation involving a task and shows it to the system to make it physically execute the task. We developed a prototype system in which the user took a photo of a dish layout on a table and showed it to the system later to then have robots deliver and arrange the dishes in the same way.

© All rights reserved Hashimoto et al. and/or their publisher

 

Yoshizaki, Wataru, Sugiura, Yuta, Chiou, Albert C., Hashimoto, Sunao, Inami, Masahiko, Igarashi, Takeo, Akazawa, Yoshiaki, Kawachi, Katsuaki, Kagami, Satoshi and Mochimaru, Masaaki (2011): An actuated physical puppet as an input device for controlling a digital manikin. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 637-646.

We present an actuated handheld puppet system for controlling the posture of a virtual character. Physical puppet devices have been used in the past to intuitively control character posture. In our research, an actuator is added to each joint of such an input device to provide physical feedback to the user. This enhancement offers many benefits. First, the user can upload pre-defined postures to the device to save time. Second, the system is capable of dynamically adjusting joint stiffness to counteract gravity, while allowing control to be maintained with relatively little force. Third, the system supports natural human body behaviors, such as whole-body reaching and joint coupling. This paper describes the user interface and implementation of the proposed technique and reports the results of expert evaluation. We also conducted two user studies to evaluate the effectiveness of our method.

© All rights reserved Yoshizaki et al. and/or their publisher

 

Liu, Kexi, Sakamoto, Daisuke, Inami, Masahiko and Igarashi, Takeo (2011): Roboshop: multi-layered sketching interface for robot housework assignment and management. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 647-656.

As various home robots come into homes, the need for efficient robot task management tools is arising. Current tools are designed for controlling individual robots independently, so they are not well suited to assigning coordinated action among multiple robots. To address this problem, we developed a management tool for home robots with a graphical editing interface. The user assigns instructions by selecting a tool from a toolbox and sketching on a bird's-eye view of the environment. Layering supports the management of multiple tasks in the same room. This layered graphical representation gives a quick overview of, and access to, rich information tied to the physical environment. This paper describes the prototype system and reports on our evaluation of the system.

© All rights reserved Liu et al. and/or their publisher

 

Sugiura, Yuta, Kakehi, Gota, Withana, Anusha, Lee, Calista, Sakamoto, Daisuke, Sugimoto, Maki, Inami, Masahiko and Igarashi, Takeo (2011): Detecting shape deformation of soft objects using directional photoreflectivity measurement. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 509-516.

We present the FuwaFuwa sensor module, a round, hand-size, wireless device for measuring the shape deformations of soft objects such as cushions and plush toys. It can be embedded in typical soft objects in the household without complex installation procedures and without spoiling the softness of the object because it requires no physical connection. Six LEDs in the module emit IR light in six orthogonal directions, and six corresponding photosensors measure the reflected light energy. One can easily convert almost any soft object into a touch-input device that can detect both touch position and surface displacement by embedding multiple FuwaFuwa sensor modules in the object. A variety of example applications illustrate the utility of the FuwaFuwa sensor module. An evaluation of the proposed deformation measurement technique confirms its effectiveness.

© All rights reserved Sugiura et al. and/or ACM Press
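A rough sketch of how six orthogonal reflectivity readings could be turned into a deformation estimate: weight each sensing direction by the increase in reflected IR energy, since a pressed surface moves closer to the sensor facing it and reflects more light. The baseline values and units are hypothetical:

```python
import numpy as np

# The six orthogonal sensing directions of the (hypothetical) module.
DIRECTIONS = np.array([[ 1, 0, 0], [-1, 0, 0],
                       [ 0, 1, 0], [ 0, -1, 0],
                       [ 0, 0, 1], [ 0, 0, -1]], dtype=float)

def deformation_vector(baseline, current):
    """Estimate where and how much the surrounding foam is pressed:
    each direction is weighted by its increase in reflected IR."""
    delta = np.maximum(np.asarray(current) - np.asarray(baseline), 0.0)
    return delta @ DIRECTIONS

# A touch pressing mostly from +x: sensor 0 sees more reflected light.
print(deformation_vector([10, 10, 10, 10, 10, 10],
                         [25, 10, 12, 10, 10, 10]))  # [15.  2.  0.]
```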

2010
 

Watanabe, Keita, Matsuda, Shota, Yasumura, Michiaki, Inami, Masahiko and Igarashi, Takeo (2010): CastOven: a microwave oven with just-in-time video clips. In: Proceedings of the 2010 International Conference on Ubiquitous Computing 2010. pp. 385-386.

In this paper, we propose a novel microwave oven called CastOven. CastOven is a microwave oven with an LCD display that enables people to enjoy videos while they are waiting for their cooking to finish. Current media content forces us to adjust our schedules to enjoy it: movies in particular take a specific duration to watch, and it is not easy to squeeze in that time in daily life. The system identifies the idle time in daily life and delivers an appropriate amount of media content for the user to enjoy during that idle time.

© All rights reserved Watanabe et al. and/or their publisher

 

Sugiura, Yuta, Sakamoto, Daisuke, Withana, Anusha, Inami, Masahiko and Igarashi, Takeo (2010): Cooking with robots: designing a household system working in open environments. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 2427-2430.

We propose a cooking system that operates in an open environment. The system cooks a meal by pouring various ingredients into a boiling pot on an induction heating cooker and adjusts the heating strength according to the user's instructions. We then describe how the system incorporates robotic- and human-specific elements in a shared workspace so as to achieve a cooperative rudimentary cooking capability. First, we use small mobile robots instead of built-in arms to save space, improve flexibility and increase safety. Second, we use detachable visual markers to allow the user to easily configure the real-world environment. Third, we provide a graphical user interface to display detailed cooking instructions to the user. We hope insights obtained in this experiment will be useful for the design of other household systems in the future.

© All rights reserved Sugiura et al. and/or their publisher

 

Ishii, Kentaro, Ishida, Akihiko, Saul, Greg, Inami, Masahiko and Igarashi, Takeo (2010): Active navigation landmarks for a service robot in a home environment. In: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction 2010. pp. 99-100.

This paper proposes a physical user interface with which a user can teach a robot to navigate a home environment. The user places small devices containing infrared-based communication functionality as landmarks in the environment. The robot follows these landmarks to navigate to a goal landmark. Active landmarks communicate with each other to map their spatial relationships. Our method allows the user to start using the system immediately after placing the landmarks, without installing any global position sensing system or prior mapping by the robot.

© All rights reserved Ishii et al. and/or their publisher

 

Young, James E., Ishii, Kentaro, Igarashi, Takeo and Sharlin, Ehud (2010): Showing robots how to follow people using a broomstick interface. In: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction 2010. pp. 133-134.

Robots are poised to enter our everyday environments such as our homes and offices, contexts that present unique questions such as the style of the robot's actions. Style-oriented characteristics are difficult to define programmatically, a problem that is particularly prominent for a robot's interactive behaviors, those that must react accordingly to dynamic actions of people. In this paper, we present a technique for programming the style of how a robot should follow a person by demonstration, such that non-technical designers and users can directly create the style of following using their existing skill sets. We envision that simple physical interfaces like ours can be used by non-technical people to design the style of a wide range of robotic behaviors.

© All rights reserved Young et al. and/or their publisher

 

Hashimoto, Sunao, Ostanin, Andrei, Inami, Masahiko and Igarashi, Takeo (2010): Photograph-based interaction for teaching object delivery tasks to robots. In: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction 2010. pp. 153-154.

Personal photographs are important media for communication in our daily lives. People take photos to remember things about themselves and show them to others to share the experience. We expect that a photograph can be a useful tool for teaching a task to a robot. We propose a novel human-robot interaction using photographs. The user takes a photo to remember the target in a real-world situation involving a task and shows it to the system to make it physically execute the task. We developed a prototype system in which the user took a photo of a dish arrangement on a table and showed it to the system later to then have a small robot deliver and arrange the dishes in the same way.

© All rights reserved Hashimoto et al. and/or their publisher

 

Kato, Jun, Sakamoto, Daisuke and Igarashi, Takeo (2010): Surfboard: keyboard with microphone as a low-cost interactive surface. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 387-388.

We introduce a technique to detect simple gestures of "surfing" (moving a hand horizontally) on a standard keyboard by analyzing recorded sounds in real-time with a microphone attached close to the keyboard. This technique allows the user to maintain a focus on the screen while surfing on the keyboard. Since this technique uses a standard keyboard without any modification, the user can take full advantage of the input functionality and tactile quality of his favorite keyboard supplemented with our interface.

© All rights reserved Kato et al. and/or their publisher
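A sketch of the kind of real-time analysis this implies: a hand sliding across the keys produces sustained broadband noise, which can be flagged with short-time RMS energy thresholding. Synthetic samples stand in for the microphone, and all thresholds are invented; the paper's actual recognizer may differ:

```python
import numpy as np

def detect_surfing(samples, rate=44100, frame_ms=20,
                   on_thresh=0.02, min_frames=5):
    """Flag sustained broadband noise typical of a hand sliding
    across the keys: RMS energy above a threshold for several
    consecutive short-time frames."""
    frame = int(rate * frame_ms / 1000)
    n = len(samples) // frame
    rms = np.sqrt(np.mean(
        samples[:n * frame].reshape(n, frame) ** 2, axis=1))
    run = best = 0
    for active in rms > on_thresh:
        run = run + 1 if active else 0
        best = max(best, run)
    return best >= min_frames

rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.005, 44100)   # 1 s of near-silence
swipe = rng.normal(0, 0.05, 44100)    # 1 s of sliding noise
print(detect_surfing(quiet), detect_surfing(swipe))  # False True
```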

 

Mistry, Pranav, Ishii, Kentaro, Inami, Masahiko and Igarashi, Takeo (2010): Blinkbot: look at, blink and move. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 397-398.

In this paper we present BlinkBot -- a hands-free input interface to control and command a robot. BlinkBot explores the natural modalities of gaze and blink to direct a robot to move an object from one location to another. The paper also explains the detailed hardware and software implementation of the prototype system.

© All rights reserved Mistry et al. and/or their publisher

 

Shirokura, Takumi, Sakamoto, Daisuke, Sugiura, Yuta, Ono, Tetsuo, Inami, Masahiko and Igarashi, Takeo (2010): RoboJockey: real-time, simultaneous, and continuous creation of robot actions for everyone. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 399-400.

We developed the RoboJockey (Robot Jockey) interface for coordinating robot actions, such as dancing -- similar to a "disc jockey" or "video jockey". The system enables a user to choreograph a dance for a robot to perform using a simple visual language. Users can coordinate humanoid robot actions with a combination of arm and leg movements. Every action is automatically performed in time with the background music and beat. RoboJockey gives end users a new entertainment experience with robots.

© All rights reserved Shirokura et al. and/or their publisher

 

Igarashi, Yuki and Igarashi, Takeo (2010): Holly: A Drawing Editor for Designing Stencils. In IEEE Computer Graphics and Applications, 30 (4) pp. 8-14.

2009
 

Zhao, Shengdong, Nakamura, Koichi, Ishii, Kentaro and Igarashi, Takeo (2009): Magic cards: a paper tag interface for implicit robot control. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 173-182.

Typical Human Robot Interaction (HRI) assumes that the user explicitly interacts with robots. However, explicit control with robots can be unnecessary or even undesirable in certain cases, such as dealing with domestic services (or housework). In this paper, we propose an alternative strategy of interaction: the user implicitly controls a robot by issuing commands on corresponding real world objects and the environment. Robots then discover these commands and complete them in the background. We implemented a paper-tag-based interface to support such implicit robot control in a sensor-augmented home environment. Our initial user studies indicated that the paper-tag-based interface is particularly simple to use and provides users with flexibility in planning and controlling their housework tasks in a simulated home environment.

© All rights reserved Zhao et al. and/or ACM Press

 

Sakamoto, Daisuke, Honda, Koichiro, Inami, Masahiko and Igarashi, Takeo (2009): Sketch and run: a stroke-based interface for home robots. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 197-200.

Numerous robots have been developed, and some of them are already being used in homes, institutions, and workplaces. Despite the development of useful robot functions, the focus so far has not been on the user interfaces of robots. General users of robots find it hard to understand what the robots are doing and what kind of work they can do. This paper presents an interface for commanding home robots using stroke gestures on a computer screen. This interface allows the user to control robots and design their behaviors by sketching the robots' behaviors and actions on a top-down view from ceiling cameras. To convey a feeling of directly controlling the robots, our interface employs the live camera view. In this study, we focused on a house-cleaning task that is typical of home robots, and developed a sketch interface for designing the behaviors of vacuuming robots.

© All rights reserved Sakamoto et al. and/or ACM Press

 

Kato, Jun, Sakamoto, Daisuke, Inami, Masahiko and Igarashi, Takeo (2009): Multi-touch interface for controlling multiple mobile robots. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 3443-3448.

We must give robots some form of command in order to have them do a complex task. An initial instruction is required even if they do their tasks autonomously. We therefore need interfaces for the operation and teaching of robots. Natural languages, joysticks, and other pointing devices are currently used for this purpose. These interfaces, however, have difficulty in operating multiple robots simultaneously. We developed a multi-touch interface with a top-down view from a ceiling camera for controlling multiple mobile robots. The user specifies a vector field that is followed by all robots on the view. This paper describes the user interface and its implementation, and the future work of the project.

© All rights reserved Kato et al. and/or ACM Press
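A minimal sketch of the vector-field idea: the user's stroke (a polyline) induces a field, and each robot heads in the direction of the nearest stroke segment. The field construction here is an assumption for illustration, not necessarily the paper's:

```python
import numpy as np

def field_direction(stroke, pos):
    """Direction a robot at `pos` should head, given a user stroke.

    The stroke, a polyline of 2D points, induces a simple field:
    follow the direction of the segment whose midpoint is nearest.
    """
    stroke = np.asarray(stroke, dtype=float)
    pos = np.asarray(pos, dtype=float)
    segs = stroke[1:] - stroke[:-1]
    mids = (stroke[1:] + stroke[:-1]) / 2
    i = np.argmin(np.linalg.norm(mids - pos, axis=1))
    return segs[i] / np.linalg.norm(segs[i])

stroke = [(0, 0), (1, 0), (2, 1)]
print(field_direction(stroke, (0.4, 0.3)))  # ~[1. 0.]
```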

 

Willis, Karl D. D., Lin, Juncong, Mitani, Jun and Igarashi, Takeo (2009): Spatial sketch: bridging between movement & fabrication. In: Proceedings of the 4th International Conference on Tangible and Embedded Interaction 2009. pp. 5-12.

Spatial Sketch is a three-dimensional (3D) sketch application that bridges between physical movement and the fabrication of objects in the real world via cut planar materials. This paper explores the rationale and details behind the development of the Spatial Sketch application, and presents our observations from user testing and a hands-on lampshade design workshop. Finally we reflect upon the relevance of embodied forms of human-computer interaction for use in digital fabrication.

© All rights reserved Willis et al. and/or their publisher

 

Todo, Hideki, Anjyo, Ken and Igarashi, Takeo (2009): Stylized lighting for cartoon shader. In Journal of Visualization and Computer Animation, 20 (2) pp. 143-152.

2008
 

Kobayashi, Masatomo and Igarashi, Takeo (2008): Ninja cursors: using multiple cursors to assist target acquisition on large screens. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 949-958.

We propose the "ninja cursor" to improve the performance of target acquisition, particularly on large screens. This technique uses multiple distributed cursors to reduce the average distance to targets. Each cursor moves synchronously following mouse movement. We present the design and implementation of the proposed technique, including a method to resolve the ambiguity that results when multiple cursors indicate different targets simultaneously. We also conducted an experiment to assess the performance of the ninja cursor. The results indicate that it can generally reduce movement time. However, the performance is greatly affected by the number of cursors and target density. Based on these results, we discuss how our technique can be put into practical use. In addition to presenting a novel method to improve pointing performance, our study is the first to explore a variable number of cursors for performing pointing tasks.

© All rights reserved Kobayashi and Igarashi and/or ACM Press
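A small sketch of the core mechanics: every cursor is the physical mouse position plus a fixed offset (wrapped to the screen), so all cursors move synchronously, and some rule must resolve the ambiguity when several cursors could acquire targets at once. The nearest-target rule below is a deliberately crude stand-in for the paper's resolution method:

```python
def ninja_positions(mouse, offsets, width, height):
    """All cursor positions for one physical mouse location: each
    cursor is the mouse shifted by a fixed offset, wrapped to the
    screen, so every cursor moves synchronously."""
    mx, my = mouse
    return [((mx + dx) % width, (my + dy) % height)
            for dx, dy in offsets]

def pick_target(cursors, targets, radius=10):
    """Crude ambiguity resolution: take the cursor closest to any
    target within the acquisition radius."""
    best = None
    for cx, cy in cursors:
        for name, (tx, ty) in targets.items():
            d = ((cx - tx) ** 2 + (cy - ty) ** 2) ** 0.5
            if d <= radius and (best is None or d < best[0]):
                best = (d, name)
    return best[1] if best else None

cursors = ninja_positions((100, 100), [(0, 0), (960, 0)], 1920, 1080)
print(cursors)                                     # [(100, 100), (1060, 100)]
print(pick_target(cursors, {"ok": (1063, 102)}))   # 'ok'
```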

 

Ijiri, Takashi, Yokoo, Mihoshi, Kawabata, Saneyuki and Igarashi, Takeo (2008): Surface-based Growth Simulation for Opening Flowers. In: Proceedings of the 2008 Conference on Graphics Interface May 28-30, 2008, Windsor, Ontario, Canada. pp. 227-234.

We propose a biologically motivated method for creating animations of opening flowers. We simulate the development of petals based on the observation that flower opening is mainly caused by cell expansion. We use an elastic triangular mesh to represent a petal and emulate its growth by developing each triangular region. Our simulation process consists of two steps. The system first grows each triangle independently according to user-specified parameters and derives target rest edge lengths and dihedral angles. The system then updates the global shape to satisfy the rest lengths and dihedral angles as much as possible by means of energy minimization. We repeat these two processes to obtain keyframes of the flower opening animation. Our system can generate an animation in about 11.5 minutes. Applications include the creation of graphics animations, designing 3D plant models, and simulation for aiding biological study. In contrast to existing systems that simulate the development of flattened 2D petals, our system simulates the growth of petals as 3D surfaces. We show the feasibility of our method by creating animations of Asiatic lily and Eustoma grandiflorum.

© All rights reserved Ijiri et al. and/or their publisher
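The second step of the simulation, updating the global shape to satisfy target rest lengths by energy minimization, can be sketched with plain gradient descent. Only the edge-length term is shown; the dihedral-angle term that bends the petal in 3D is omitted, and all parameters are illustrative:

```python
import numpy as np

def relax(vertices, edges, rest_lengths, steps=200, lr=0.1):
    """Minimize sum over edges of (|vi - vj| - rest)^2 by descent."""
    v = np.asarray(vertices, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(v)
        for (i, j), rest in zip(edges, rest_lengths):
            d = v[i] - v[j]
            length = np.linalg.norm(d)
            g = 2 * (length - rest) * d / length
            grad[i] += g
            grad[j] -= g
        v -= lr * grad
    return v

# Uniform growth: every edge's rest length becomes 1.5x its original.
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.0)]
edges = [(0, 1), (1, 2), (2, 0)]
orig = [np.linalg.norm(np.subtract(tri[i], tri[j])) for i, j in edges]
v = relax(tri, edges, [1.5 * L for L in orig])
print([round(float(np.linalg.norm(v[i] - v[j])), 3) for i, j in edges])
# each edge settles near 1.5x its original length
```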

 

Nakamura, Toshio and Igarashi, Takeo (2008): An application-independent system for visualizing user operation history. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 23-32.

 

Ijiri, Takashi, Mech, Radomír, Igarashi, Takeo and Miller, Gavin (2008): An Example-based Procedural System for Element Arrangement. In Comput. Graph. Forum, 27 (2) pp. 429-436.

 

Igarashi, Yuki, Igarashi, Takeo and Suzuki, Hiromasa (2008): Knitting a 3D Model. In Comput. Graph. Forum, 27 (7) pp. 1737-1743.

2007
 

Shin, HyoJong and Igarashi, Takeo (2007): Magic canvas: interactive design of a 3-D scene prototype from freehand sketches. In: Proceedings of the 2007 Conference on Graphics Interface 2007. pp. 63-70.

Construction of a 3-D scene consisting of multiple objects can be tedious work. Existing 3-D editing tools require the user to choose an appropriate model in a database first and then carefully place it in the scene at a desired position combining various operations such as translation, rotation, and scaling. To simplify the process, we propose a system that takes simple 2D sketches of models in a scene as input for 3D scene construction. The system then automatically identifies corresponding models in a database and puts them in the appropriate location and posture so that their appearance matches the user's input sketches. The system combines a 3-D model search and a 3-D posture estimation to obtain the result. This system allows the user to construct a prototype of a 3-D scene quickly and intuitively. We conducted a user study to compare our interface with traditional menu-based UI and verified that our system was useful for constructing a 3-D scene prototype, especially for facilitating the exploration of various alternative designs. We expect our system to be useful as a prototyping tool for 3-D scene construction in various application areas such as interior design, communication, education, and entertainment.

© All rights reserved Shin and Igarashi and/or Canadian Information Processing Society

 

Maynes-Aminzade, Dan, Winograd, Terry and Igarashi, Takeo (2007): Eyepatch: prototyping camera-based interaction through examples. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 33-42.

Cameras are a useful source of input for many interactive applications, but computer vision programming is difficult and requires specialized knowledge that is out of reach for many HCI practitioners. In an effort to learn what makes a useful computer vision design tool, we created Eyepatch, a tool for designing camera-based interactions, and evaluated the Eyepatch prototype through deployment to students in an HCI course. This paper describes the lessons we learned about making computer vision more accessible, while retaining enough power and flexibility to be useful in a wide variety of interaction scenarios.

© All rights reserved Maynes-Aminzade et al. and/or ACM Press

 

Watanabe, Nayuko, Washida, Motoi and Igarashi, Takeo (2007): Bubble clusters: an interface for manipulating spatial aggregation of graphical objects. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 173-182.

Spatial layout is frequently used for managing loosely organized information, such as desktop icons and digital ink. To help users organize this type of information efficiently, we propose an interface for manipulating spatial aggregations of objects. The aggregated objects are automatically recognized as a group, and the group structure is visualized as a two-dimensional bubble surface that surrounds the objects. Users can drag, copy, or delete a group by operating on the bubble. Furthermore, to help pick out individual objects in a dense aggregation, the system spreads the objects to avoid overlapping when requested. This paper describes the design of this interface and its implementation. We tested our technique in icon grouping and ink relocation tasks and observed improvements in user performance.

© All rights reserved Watanabe et al. and/or ACM Press
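The automatic group recognition can be sketched as proximity clustering: any two objects closer than a merge distance belong to the same aggregation, computed here with union-find. The bubble surface itself (not shown) would then be drawn around each resulting group; the merge distance is an invented parameter:

```python
def spatial_groups(positions, merge_dist=60.0):
    """Cluster 2D objects into spatial aggregations: objects closer
    than merge_dist are merged into one group via union-find."""
    parent = list(range(len(positions)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(positions):
        for j in range(i + 1, len(positions)):
            xj, yj = positions[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= merge_dist ** 2:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(positions)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

icons = [(0, 0), (40, 10), (70, 20), (500, 500)]
print(spatial_groups(icons))  # [[0, 1, 2], [3]]
```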

 

Kobayashi, Masatomo and Igarashi, Takeo (2007): Boomerang: suspendable drag-and-drop interactions based on a throw-and-catch metaphor. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 187-190.

We present the boomerang technique, which makes it possible to suspend and resume drag-and-drop operations. A throwing gesture while dragging an object suspends the operation, anytime and anywhere. A drag-and-drop interaction, enhanced with our technique, allows users to switch windows, invoke commands, and even drag other objects during a drag-and-drop operation without using the keyboard or menus. We explain how a throwing gesture can suspend drag-and-drop operations, and describe other features of our technique, including grouping, copying, and deleting dragged objects. We conclude by presenting prototype implementations and initial feedback on the proposed technique.

© All rights reserved Kobayashi and Igarashi and/or ACM Press
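A sketch of one way to recognize the suspending gesture: a throw reads as a burst of pointer speed during the drag. The threshold and sampling format are assumptions, not the paper's recognizer:

```python
def is_throw(drag_trail, speed_thresh=1500.0):
    """Detect the throwing gesture that suspends a drag-and-drop.

    drag_trail: list of (t_seconds, x, y) samples during the drag.
    Returns True when pointer speed between consecutive samples
    exceeds the threshold (pixels/second).
    """
    for (t0, x0, y0), (t1, x1, y1) in zip(drag_trail, drag_trail[1:]):
        dt = t1 - t0
        if dt > 0:
            speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
            if speed > speed_thresh:
                return True
    return False

slow = [(0.00, 0, 0), (0.05, 20, 0), (0.10, 40, 0)]     # 400 px/s
flick = [(0.00, 0, 0), (0.05, 20, 0), (0.06, 120, 5)]   # ~10,000 px/s
print(is_throw(slow), is_throw(flick))  # False True
```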

 

 

Nishida, Takeshi and Igarashi, Takeo (2007): Bringing Round-Robin Signature to Computer-Mediated Communication. In: Proceedings of the Tenth European Conference on Computer-Supported Cooperative Work 2007. pp. 219-230.

In computer-mediated group communication, anonymity enables participants to post controversial comments without risking accusations of improper behavior. While this may encourage more open and frank discussion, it diminishes accountability. In addition, anonymous comments are perceived as weaker than non-anonymous comments. We propose a communication protocol that allows a user to send a strong message to the group without having to assume sole individual responsibility. The system posts an anonymous comment, and then calls for supporters. When sufficient numbers of supporters have been gathered, the system reveals the names of all supporters as a round-robin signature. This prevents the originator from being identified. We describe the implementation of this protocol in a text-based chat system, and report our experience operating it at two technical conferences.

© All rights reserved Nishida and Igarashi and/or Springer

 

Kurihara, Kazutaka and Igarashi, Takeo (2007): A Flexible Presentation Tool for Diverse Multi-display Environments. In: Baranauskas, Maria Cecília Calani, Palanque, Philippe A., Abascal, Julio and Barbosa, Simone Diniz Junqueira (eds.) DEGAS 2007 - Proceedings of the 1st International Workshop on Design and Evaluation of e-Government Applications and Services September 11th, 2007, Rio de Janeiro, Brazil. pp. 430-433.

 

Nishida, Takeshi and Igarashi, Takeo (2007): Drag-and-Guess: Drag-and-Drop with Prediction. In: Baranauskas, Maria Cecília Calani, Palanque, Philippe A., Abascal, Julio and Barbosa, Simone Diniz Junqueira (eds.) DEGAS 2007 - Proceedings of the 1st International Workshop on Design and Evaluation of e-Government Applications and Services September 11th, 2007, Rio de Janeiro, Brazil. pp. 461-474.

 

Kurihara, Kazutaka, Goto, Masataka, Ogata, Jun, Matsusaka, Yosuke and Igarashi, Takeo (2007): Presentation sensei: a presentation training system using speech and image processing. In: Proceedings of the 2007 International Conference on Multimodal Interfaces 2007. pp. 358-365.

In this paper we present a presentation training system that observes a presentation rehearsal and provides the speaker with recommendations for improving the delivery of the presentation, such as to speak more slowly and to look at the audience. Our system "Presentation Sensei" is equipped with a microphone and camera to analyze a presentation by combining speech and image processing techniques. Based on the results of the analysis, the system gives the speaker instant feedback with respect to the speaking rate, eye contact with the audience, and timing. It also alerts the speaker when some of these indices exceed predefined warning thresholds. After the presentation, the system generates visual summaries of the analysis results for the speaker's self-examination. Our goal is not to improve the content on a semantic level, but to improve its delivery by reducing inappropriate basic behavior patterns. We asked a few test users to try the system and they found it very useful for improving their presentations. We also compared the system's output with the observations of a human evaluator. The result shows that the system successfully detected some inappropriate behavior. The contribution of this work is to introduce a practical recognition-based human training system and to show its feasibility despite the limitations of state-of-the-art speech and video recognition technologies.

© All rights reserved Kurihara et al. and/or their publisher
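The speaking-rate feedback can be sketched as a sliding-window words-per-minute check against a warning threshold. The window length, threshold, and input format below are invented for illustration:

```python
def speaking_rate_alerts(word_timestamps, window_s=30.0, max_wpm=160.0):
    """Return (time, wpm) moments where the rate over the trailing
    window exceeds the warning threshold.

    word_timestamps: times (seconds) at which recognized words end.
    """
    alerts = []
    for t in word_timestamps:
        recent = [w for w in word_timestamps if t - window_s < w <= t]
        wpm = len(recent) * 60.0 / window_s
        if wpm > max_wpm:
            alerts.append((round(t, 1), round(wpm)))
    return alerts

# 100 words at 5 words/second: well above 160 wpm near the end.
fast = [i * 0.2 for i in range(1, 101)]
print(speaking_rate_alerts(fast)[:3])  # [(16.2, 162), (16.4, 164), (16.6, 166)]
```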

 

Igarashi, Takeo and Zeleznik, Robert C. (2007): Guest Editors' Introduction: Sketch-Based Interaction. In IEEE Computer Graphics and Applications, 27 (1) pp. 26-27.

2006
 

Kurihara, Kazutaka, Goto, Masataka, Ogata, Jun and Igarashi, Takeo (2006): Speech pen: predictive handwriting based on ambient multimodal recognition. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 851-860.

It is tedious to write long passages of text by hand. To make this process more efficient, we propose predictive handwriting that provides input predictions when the user writes by hand. A predictive handwriting system presents possible next words as a list and allows the user to select one to skip manual writing. Since it is not clear whether people are willing to use prediction, we first ran a user study to compare handwriting and selecting from the list. The result shows that, in Japanese, people prefer to select, especially when the expected performance gain from using selection is large. Based on these observations, we designed a multimodal input system, called speech-pen, that assists digital writing during lectures or presentations with background speech and handwriting recognition. The system recognizes speech and handwriting in the background and provides the instructor with predictions for further writing. The speech-pen system also allows the sharing of context information for predictions among the instructor and the audience; the result of the instructor's speech recognition is sent to the audience to support their own note-taking. Our preliminary study shows the effectiveness of this system and the implications for further improvements.

© All rights reserved Kurihara et al. and/or ACM Press

 

Ijiri, Takashi, Owada, Shigeru and Igarashi, Takeo (2006): Seamless Integration of Initial Sketching and Subsequent Detail Editing in Flower Modeling. In Comput. Graph. Forum, 25 (3) pp. 617-624.

2005
 

Kurihara, Kazutaka, Vronay, David and Igarashi, Takeo (2005): Flexible timeline user interface using constraints. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1581-1584.

Authoring tools routinely include a timeline representation to allow the author to specify the sequence of animations and interactions. However, traditional static timelines are best suited for static, linear sequences (such as MIDI sequencers) and do not lend themselves to interactive content. This forces authors to supplement their timelines with scripted actions that are not represented on the timeline. Timelines also force frame-accuracy on the author, which interferes with rapid exploration of different designs. We present a redesign of the timeline in which users can specify the relative ordering and causality of events without specifying exact times or durations. This effectively enables users to "work rough" in time. We then implement a prototype and perform a user study to investigate its efficiency.

© All rights reserved Kurihara et al. and/or ACM Press

 

Nishida, Takeshi and Igarashi, Takeo (2005): Lock-on-Chat: Boosting Anchored Conversation and Its Operation at a Technical Conference. In: Proceedings of IFIP INTERACT05: Human-Computer Interaction 2005. pp. 970-973.

This paper introduces a text-based chat system designed to support conversations anchored to specific locations of shared images, and reports our experience operating it at a technical conference. Our system is unique in that it focuses on supporting conversations scattered among multiple images, while other systems for anchored conversations are designed for deeper discussions within a single document. Our system was used at a technical conference as a space for anchored conversations over presentation slides, and we observed that audiences actively participated in discussions during the presentations. The detailed chat log was also useful for both audiences and presenters.

© All rights reserved Nishida and Igarashi and/or Springer Verlag

 

Okabe, Makoto, Owada, Shigeru and Igarashi, Takeo (2005): Interactive Design of Botanical Trees using Freehand Sketches and Example-based Editing. In Comput. Graph. Forum, 24 (3) pp. 487-496.

 

Owada, Shigeru, Nielsen, Frank and Igarashi, Takeo (2005): Volume catcher. In: Lastra, Anselmo, Olano, Marc, Luebke, David P. and Pfister, Hanspeter (eds.) Proceedings of the 2005 Symposium on Interactive 3D Graphics, SI3D 2005, April 3-6, 2005, Washington, DC, USA 2005. pp. 111-116.

2003
 

Kobayashi, Makoto and Igarashi, Takeo (2003): Considering the direction of cursor movement for efficient traversal of cascading menus. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology November 2-5, 2003, Vancouver, Canada. pp. 91-94.

Cascading menus are commonly seen in most GUI systems. However, people sometimes choose the wrong item by mistake, or become frustrated when submenus pop up unnecessarily. This paper proposes two methods for improving the usability of cascading menus. The first uses the direction of cursor movement to change the menu behavior: horizontal motion opens/closes submenus, while vertical motion changes the highlight within the current menu. This feature can reduce cursor movement errors. The second causes a submenu to pop up at the position where horizontal motion occurs. This is expected to reduce the length of the movement path for menu traversal. A user study showed that our methods reduce menu selection times, shorten search path lengths, and prevent unexpected submenu appearance and disappearance.

© All rights reserved Kobayashi and Igarashi and/or ACM Press
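The first method reduces to classifying each cursor movement by its dominant axis. A minimal sketch (the dead zone and the left-to-right menu orientation are assumptions):

```python
def classify_motion(dx, dy, dead_zone=3):
    """Map one cursor movement to a cascading-menu action:
    horizontal motion opens/closes submenus, vertical motion
    moves the highlight within the current menu."""
    if abs(dx) <= dead_zone and abs(dy) <= dead_zone:
        return "ignore"          # jitter below the dead zone
    if abs(dx) > abs(dy):
        return "open-submenu" if dx > 0 else "close-submenu"
    return "move-highlight-down" if dy > 0 else "move-highlight-up"

for d in [(12, 2), (-9, 1), (2, 14), (1, -1)]:
    print(d, classify_motion(*d))
```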

 

Igarashi, Takeo and Hughes, John F. (2003): Smooth meshes for sketch-based freeform modeling. In: SI3D 2003 2003. pp. 139-142.

2002
 

Igarashi, Takeo and Hughes, John F. (2002): Clothing manipulation. In: Beaudouin-Lafon, Michel (ed.) Proceedings of the 15th annual ACM symposium on User interface software and technology October 27-30, 2002, Paris, France. pp. 91-100.

This paper presents interaction techniques (and the underlying implementations) for putting clothes on a 3D character and manipulating them. The user paints freeform marks on the clothes and corresponding marks on the 3D character; the system then puts the clothes around the body so that corresponding marks match. Internally, the system grows the clothes on the body surface around the marks while maintaining basic cloth constraints via simple relaxation steps. The entire computation takes a few seconds. After that, the user can adjust the placement of the clothes by an enhanced dragging operation. Unlike standard dragging where the user moves a set of vertices in a single direction in 3D space, our dragging operation moves the cloth along the body surface to make possible more flexible operations. The user can apply pushpins to fix certain cloth points during dragging. The techniques are ideal for specifying an initial cloth configuration before applying a more sophisticated cloth simulation.

© All rights reserved Igarashi and Hughes and/or ACM Press

2001
 

Igarashi, Takeo and Hughes, John F. (2001): Voice as sound: using non-verbal voice input for interactive control. In: Marks, Joe and Mynatt, Elizabeth D. (eds.) Proceedings of the 14th annual ACM symposium on User interface software and technology November 11 - 14, 2001, Orlando, Florida. pp. 155-156.

We describe the use of non-verbal features in voice for direct control of interactive applications. Traditional speech recognition interfaces are based on an indirect, conversational model: first the user gives a direction, and then the system performs a certain operation. Our goal is to achieve more direct, immediate interaction, as with a button or joystick, by using lower-level features of voice such as pitch and volume. We are developing several prototype interaction techniques based on this idea, such as "control by continuous voice", "rate-based parameter control by pitch," and "discrete parameter control by tonguing." We have implemented several prototype systems, and they suggest that voice-as-sound techniques can enhance the traditional voice recognition approach.

© All rights reserved Igarashi and Hughes and/or ACM Press

 

Igarashi, Takeo and Hughes, John F. (2001): A suggestive interface for 3D drawing. In: Marks, Joe and Mynatt, Elizabeth D. (eds.) Proceedings of the 14th annual ACM symposium on User interface software and technology November 11 - 14, 2001, Orlando, Florida. pp. 173-181.

This paper introduces a new type of interface for 3D drawings that improves the usability of gestural interfaces and augments typical command-based modeling systems. In our suggestive interface, the user gives hints about a desired operation to the system by highlighting related geometric components in the scene. The system then infers possible operations based on the hints and presents the results of these operations as small thumbnails. The user completes the editing operation simply by clicking on the desired thumbnail. The hinting mechanism lets the user specify geometric relations among graphical components in the scene, and the multiple thumbnail suggestions make it possible to define many operations with relatively few distinct hint patterns. The suggestive interface system is implemented as a set of suggestion engines working in parallel, and is easily extended by adding customized engines. Our prototype 3D drawing system, Chateau, shows that a suggestive interface can effectively support construction of various 3D drawings.

© All rights reserved Igarashi and Hughes and/or ACM Press

 

Igarashi, Takeo and Cosgrove, Dennis (2001): Adaptive unwrapping for interactive texture painting. In: SI3D 2001 2001. pp. 209-216.

2000
 

Edwards, W. Keith, Igarashi, Takeo, LaMarca, Anthony and Mynatt, Elizabeth D. (2000): A Temporal Model for Multi-Level Undo and Redo. In: Ackerman, Mark S. and Edwards, Keith (eds.) Proceedings of the 13th annual ACM symposium on User interface software and technology November 06 - 08, 2000, San Diego, California, United States. pp. 31-40.

 

Igarashi, Takeo and Hinckley, Ken (2000): Speed-Dependent Automatic Zooming for Browsing Large Documents. In: Ackerman, Mark S. and Edwards, Keith (eds.) Proceedings of the 13th annual ACM symposium on User interface software and technology November 06 - 08, 2000, San Diego, California, United States. pp. 139-148.

 

Igarashi, Takeo, Edwards, W. Keith, LaMarca, Anthony and Mynatt, Elizabeth D. (2000): An Architecture for Pen-based Interaction on Electronic Whiteboards. In: Advanced Visual Interfaces 2000 2000. pp. 68-75.

 

Mynatt, Elizabeth D., Igarashi, Takeo, Edwards, W. Keith and LaMarca, Anthony (2000): Designing an Augmented Writing Surface. In IEEE Computer Graphics and Applications, 20 (4) pp. 55-61.

1999
 

Mynatt, Elizabeth D., Edwards, W. Keith, LaMarca, Anthony and Igarashi, Takeo (1999): Flatland: New Dimensions in Office Whiteboards. In: Altom, Mark W. and Williams, Marian G. (eds.) Proceedings of the ACM CHI 99 Human Factors in Computing Systems Conference May 15-20, 1999, Pittsburgh, Pennsylvania. pp. 346-353.

Flatland is an augmented whiteboard interface designed for informal office work. Our research investigates approaches to building an augmented whiteboard in the context of continuous, long term office use. In particular, we pursued three avenues of research based on input from user studies: techniques for the management of space on the board, the ability to flexibly apply behaviors to support varied application semantics, and mechanisms for managing history on the board. Unlike some previously reported whiteboard systems, our design choices have been influenced by a desire to support long-term, informal use in an individual office setting.

© All rights reserved Mynatt et al. and/or ACM Press

 

Honda, Masaaki, Igarashi, Takeo, Tanaka, Hidehiko and Sakai, Shuichi (1999): Integrated Manipulation: Context-Aware Manipulation of 2D Diagrams. In: Zanden, Brad Vander and Marks, Joe (eds.) Proceedings of the 12th annual ACM symposium on User interface software and technology November 07 - 10, 1999, Asheville, North Carolina, United States. pp. 159-160.

Diagram manipulation in conventional CAD systems requires frequent mode switching and explicit placement of the pivot for rotation and scaling. In order to simplify this process, we propose an interaction technique called integrated manipulation, where the user can move, rotate, and scale without mode switching. In addition, the pivot for rotation and scaling automatically snaps to a contact point during a move operation. We performed a user study using our prototype system and a commercial CAD system. The results showed that users could perform a diagram manipulation task much more rapidly using our technique.

© All rights reserved Honda et al. and/or ACM Press

1998
 

Chang, Bay-Wei, Mackinlay, Jock D., Zellweger, Polle T. and Igarashi, Takeo (1998): A Negotiation Architecture for Fluid Documents. In: Mynatt, Elizabeth D. and Jacob, Robert J. K. (eds.) Proceedings of the 11th annual ACM symposium on User interface software and technology November 01 - 04, 1998, San Francisco, California, United States. pp. 123-132.

The information presented in a document often consists of primary content as well as supporting material such as explanatory notes, detailed derivations, illustrations, and the like. We introduce a class of user interface techniques for fluid documents that supports the reader's shift to supporting material while maintaining the context of the primary material. Our approach initially minimizes the intrusion of supporting material by presenting it as a small visual cue near the annotated primary material. When the user expresses interest in the annotation, it expands smoothly to a readable size. At the same time, the primary material makes space for the expanded annotation. The expanded supporting material must be given space to occupy, and it must be made salient with respect to the surrounding primary material. These two aspects, space and salience, are subject to a negotiation between the primary and supporting material. This paper presents the components of our fluid document techniques and describes the negotiation architecture for ensuring that the presentations of both primary and supporting material are honored.

© All rights reserved Chang et al. and/or ACM Press

 

Igarashi, Takeo, Kadobayashi, Rieko, Mase, Kenji and Tanaka, Hidehiko (1998): Path Drawing for 3D Walkthrough. In: Mynatt, Elizabeth D. and Jacob, Robert J. K. (eds.) Proceedings of the 11th annual ACM symposium on User interface software and technology November 01 - 04, 1998, San Francisco, California, United States. pp. 173-174.

This paper presents an interaction technique for walkthrough in virtual 3D spaces, where the user draws the intended path directly on the scene, and the avatar automatically moves along the path. The system calculates the path by projecting the stroke drawn on the screen to the walking surface in the 3D world. Using this technique, the user can specify not only the goal position, but also the route to take and the camera direction at the goal with a single stroke. A prototype system is tested using a display-integrated tablet, and experimental results suggest that the technique can enhance existing walkthrough techniques.

© All rights reserved Igarashi et al. and/or ACM Press
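The projection step, mapping a screen-space stroke onto the walking surface, is a ray-plane intersection per stroke sample. A sketch assuming the walking surface is the plane y = 0 and that world-space ray directions are already available (deriving them from pixels would require the camera intrinsics, omitted here):

```python
import numpy as np

def project_stroke_to_floor(eye, ray_dirs):
    """Project a screen stroke onto the walking surface (plane y = 0).

    eye:      camera position, shape (3,).
    ray_dirs: one world-space ray direction per stroke sample, (n, 3).
    Returns the 3D path the avatar should walk; samples whose rays
    never reach the floor are dropped.
    """
    eye = np.asarray(eye, dtype=float)
    path = []
    for d in np.asarray(ray_dirs, dtype=float):
        if d[1] >= -1e-9:          # ray does not descend to the floor
            continue
        t = -eye[1] / d[1]         # solve (eye + t*d).y == 0
        path.append(eye + t * d)
    return np.array(path)

eye = (0.0, 2.0, 0.0)
rays = [(0.0, -1.0, -1.0), (0.2, -1.0, -1.5)]   # looking down-forward
print(project_stroke_to_floor(eye, rays))
# [[ 0.  0. -2.]
#  [ 0.4 0. -3.]]
```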

 

Kawachiya, Sachiko, Igarashi, Takeo, Matsuoka, Satoshi and Tanaka, Hidehiko (1998): Reduction of Overhead in Drawing Figures with Computer: Detailed Analyses of Drawing Tasks. In: Third Asian Pacific Computer and Human Interaction July 15-17, 1998, Kangawa, Japan. pp. 11-18.

 

Igarashi, Takeo, Mackinlay, Jock D., Chang, Bay-Wei and Zellweger, Polle (1998): Fluid Visualization for Spreadsheet Structures. In: VL 1998 1998. pp. 118-125.

1997
 

Igarashi, Takeo, Matsuoka, Satoshi, Kawachiya, Sachiko and Tanaka, Hidehiko (1997): Interactive Beautification: A Technique for Rapid Geometric Design. In: Robertson, George G. and Schmandt, Chris (eds.) Proceedings of the 10th annual ACM symposium on User interface software and technology October 14 - 17, 1997, Banff, Alberta, Canada. pp. 105-114.

We propose interactive beautification, a technique for rapid geometric design, and introduce the technique and its algorithm with a prototype system, Pegasus. The motivation is to solve a problem with current drawing systems: too many complex commands and unintuitive procedures are needed to satisfy geometric constraints. The interactive beautification system receives the user's free strokes and beautifies them by considering geometric constraints among segments. Strokes are beautified one at a time, preventing the accumulation of recognition errors or catastrophic deformation. Supported geometric constraints include perpendicularity, congruence, symmetry, etc., which were not seen in existing free stroke recognition systems. In addition, the system generates multiple candidates as the result of beautification to solve the problem of ambiguity. Using this technique, the user can rapidly draw precise diagrams satisfying geometric relations without using any editing commands. Interactive beautification is achieved by three sequential processes: 1) inferring underlying geometric constraints based on the spatial relationships among the input stroke and the existing segments, 2) generating multiple candidates by combining the inferred constraints appropriately, and 3) evaluating the candidates to find the most plausible one and remove the inappropriate ones. A user study was performed using the prototype system, a commercial CAD tool, and an OO-based drawing system. The result showed that users can draw required diagrams more rapidly and more precisely using the prototype system.

© All rights reserved Igarashi et al. and/or ACM Press
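A toy version of the constraint-inference and candidate-generation steps: for a freehand segment, infer parallel and perpendicular constraints against existing segments and emit one snapped candidate per satisfied constraint, ranked by how far the drawn stroke must move. The real system supports many more constraint types and a global evaluation:

```python
import math

def beautify(stroke_start, stroke_end, existing_segments, snap_deg=8):
    """Generate beautification candidates for a freehand segment.

    Checks the drawn segment against each existing segment for
    near-parallelism and near-perpendicularity; each satisfied
    constraint yields a candidate end point, sorted by how little
    it rotates the drawn stroke.
    """
    sx, sy = stroke_start
    ex, ey = stroke_end
    length = math.hypot(ex - sx, ey - sy)
    drawn = math.atan2(ey - sy, ex - sx)

    candidates = []
    for (ax, ay), (bx, by) in existing_segments:
        base = math.atan2(by - ay, bx - ax)
        for name, target in (("parallel", base),
                             ("perpendicular", base + math.pi / 2)):
            # Smallest signed angle between two undirected lines.
            diff = (drawn - target + math.pi / 2) % math.pi - math.pi / 2
            if abs(math.degrees(diff)) <= snap_deg:
                snapped = drawn - diff
                end = (sx + length * math.cos(snapped),
                       sy + length * math.sin(snapped))
                candidates.append((abs(diff), name, end))
    return sorted(candidates)

segments = [((0, 0), (10, 0))]                  # one horizontal segment
print(beautify((0, 5), (9.9, 5.7), segments))   # snaps parallel: end y == 5.0
```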

1995
 

Igarashi, Takeo, Matsuoka, Satoshi and Masui, Toshiyuki (1995): Adaptive Recognition of Implicit Structures in Human-Organized Layouts. In: VL 1995 1995. pp. 258-266.

 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/takeo_igarashi.html
