Number of co-authors: 6
Number of publications with 3 favourite co-authors: Masahiro Hamasaki, Hideaki Takeda, Yosuke Matsusaka
Masataka Goto's 3 most productive colleagues by number of publications: Takeo Igarashi (66), Hideaki Takeda (10), Masahiro Hamasaki (8)
Publications by Masataka Goto (bibliography)
Hamasaki, Masahiro, Goto, Masataka and Takeda, Hideaki (2011): Social Infobox: collaborative knowledge construction by social property tagging. In: Proceedings of ACM CSCW11 Conference on Computer-Supported Cooperative Work 2011. pp. 641-644. Available online
We propose a novel style of social tagging for constructing knowledge collaboratively, called Social Property Tagging, and introduce the prototype system Social Infobox. Structured data is useful for computer systems; however, defining the structure of knowledge to represent data semantics is usually a costly and time-consuming task. In general, data structures are constructed by knowledge-engineering experts. Our method aims to construct not only structured data but also the structure of that data collaboratively, through simple user input.
© All rights reserved Hamasaki et al. and/or their publisher
Kurihara, Kazutaka, Goto, Masataka, Ogata, Jun, Matsusaka, Yosuke and Igarashi, Takeo (2007): Presentation sensei: a presentation training system using speech and image processing. In: Massaro, Dominic W., Takeda, Kazuya, Roy, Deb and Potamianos, Alexandros (eds.) Proceedings of the 9th International Conference on Multimodal Interfaces - ICMI 2007 November 12-15, 2007, Nagoya, Aichi, Japan. pp. 358-365. Available online
In this paper we present a presentation training system that observes a presentation rehearsal and provides the speaker with recommendations for improving the delivery of the presentation, such as to speak more slowly and to look at the audience. Our system "Presentation Sensei" is equipped with a microphone and camera to analyze a presentation by combining speech and image processing techniques. Based on the results of the analysis, the system gives the speaker instant feedback with respect to the speaking rate, eye contact with the audience, and timing. It also alerts the speaker when some of these indices exceed predefined warning thresholds. After the presentation, the system generates visual summaries of the analysis results for the speaker's self-examination. Our goal is not to improve the content on a semantic level, but to improve its delivery by reducing inappropriate basic behavior patterns. We asked a few test users to try the system, and they found it very useful for improving their presentations. We also compared the system's output with the observations of a human evaluator. The result shows that the system successfully detected some inappropriate behavior. The contribution of this work is to introduce a practical recognition-based human training system and to show its feasibility despite the limitations of state-of-the-art speech and video recognition technologies.
© All rights reserved Kurihara et al. and/or their publisher
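The feedback loop the Presentation Sensei abstract describes — measuring delivery indices and alerting when they exceed predefined warning thresholds — can be sketched as follows. The index names and threshold values here are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of threshold-based delivery feedback, in the spirit of
# Presentation Sensei. Index names and threshold values are invented for
# illustration; the real system derives them from speech and image processing.

WARNING_THRESHOLDS = {
    "speaking_rate_wpm": 180.0,  # warn if faster than ~180 words/minute
    "filler_ratio": 0.10,        # warn if more than 10% of words are fillers
    "face_away_ratio": 0.50,     # warn if facing away over half the time
}

def check_delivery(indices: dict[str, float]) -> list[str]:
    """Return a warning message for each index that exceeds its threshold."""
    warnings = []
    for name, limit in WARNING_THRESHOLDS.items():
        value = indices.get(name)
        if value is not None and value > limit:
            warnings.append(f"{name}: {value:.2f} exceeds threshold {limit:.2f}")
    return warnings

# A rehearsal snapshot with one index over its limit triggers one alert.
print(check_delivery({"speaking_rate_wpm": 205.0, "filler_ratio": 0.04}))
```

In the paper's design this check runs continuously during the rehearsal for instant feedback, with the per-index time series kept for the post-presentation visual summary.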
Kurihara, Kazutaka, Goto, Masataka, Ogata, Jun and Igarashi, Takeo (2006): Speech pen: predictive handwriting based on ambient multimodal recognition. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 851-860. Available online
It is tedious to write long passages of text by hand. To make this process more efficient, we propose predictive handwriting, which provides input predictions as the user writes by hand. A predictive handwriting system presents possible next words as a list and allows the user to select one to skip manual writing. Since it is not clear whether people are willing to use prediction, we first ran a user study to compare handwriting with selecting from the list. The result shows that, in Japanese, people prefer to select, especially when the expected performance gain from using selection is large. Based on these observations, we designed a multimodal input system, called speech-pen, that assists digital writing during lectures or presentations with background speech and handwriting recognition. The system recognizes speech and handwriting in the background and provides the instructor with predictions for further writing. The speech-pen system also allows the sharing of context information for predictions between the instructor and the audience; the result of the instructor's speech recognition is sent to the audience to support their own note-taking. Our preliminary study shows the effectiveness of this system and the implications for further improvements.
© All rights reserved Kurihara et al. and/or ACM Press
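The core idea of the speech-pen abstract — offering likely next words drawn from background-recognized context so the writer can select instead of write — can be sketched with a toy bigram model. The model and ranking here are an assumed simplification, not the paper's actual recognition pipeline.

```python
# Hypothetical sketch of predictive handwriting in the spirit of speech-pen:
# build a toy bigram model from background-recognized text, then offer the
# most frequent next words as selectable candidates.
from collections import Counter

def build_bigrams(recognized_text: str) -> dict[str, Counter]:
    """Count which word follows which in the recognized speech."""
    words = recognized_text.split()
    model: dict[str, Counter] = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, Counter())[nxt] += 1
    return model

def predict_next(model: dict[str, Counter], current_word: str, k: int = 3) -> list[str]:
    """Return up to k candidate next words, most frequent first."""
    return [w for w, _ in model.get(current_word, Counter()).most_common(k)]

model = build_bigrams("the system recognizes speech and the system shows predictions")
print(predict_next(model, "the"))  # → ['system']
```

A real system would rank candidates with full speech and handwriting recognition lattices rather than raw bigram counts, but the interaction — write a word, get a short list, select to skip writing — is the same.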
Goto, Masataka (2003): SmartMusicKIOSK: music listening station with chorus-search function. In: Proceedings of the 16th annual ACM Symposium on User Interface Software and Technology November 2-5, 2003, Vancouver, Canada. pp. 31-40. Available online
This paper describes a new music-playback interface for trial listening, SmartMusicKIOSK. In music stores, short trial listening of CD music is not usually a passive experience -- customers often search out the chorus or "hook" of a song using the fast-forward button. Listening of this type, however, has not been traditionally supported. This research achieves a function for jumping to the chorus section and other key parts of a song plus a function for visualizing song structure. These functions make it easier for a listener to find desired parts of a song and thereby facilitate an active listening experience. The proposed functions are achieved by an automatic chorus-section detecting method, and the results of implementing them as a listening station have demonstrated their usefulness.
© All rights reserved Goto and/or ACM Press
Page maintainer: The Editorial Team