Number of co-authors: 7
Number of publications with 3 favourite co-authors:
Pierre Dillenbourg: 3
Patrick Jermann: 2
Son Do-Lenh: 2
Frédéric Kaplan's 3 most productive colleagues in number of publications:
Pierre Dillenbourg: 16
Patrick Jermann: 10
Pierre-Yves Oudeyer: 6
Current place of employment: EPFL
Frédéric Kaplan graduated as an engineer of the Ecole Nationale Supérieure des Télécommunications in Paris and received a PhD degree in Artificial Intelligence from the University Paris VI. After ten years of research at the Sony Computer Science Laboratory in Paris, he now leads a new team focusing on interactive furniture and robotic objects at the Ecole Polytechnique Fédérale de Lausanne (EPFL). In his research, he has been exploring technologies that endow objects with a personal history, so that they change as we interact with them and learn from one another, creating an ecosystem in perpetual evolution. With his colleague Pierre-Yves Oudeyer, he developed a new family of theoretical architectures based on the notion of intrinsic motivation systems, which has been used successfully to create curiosity-driven machines. In parallel with his investigations in artificial intelligence, he collaborates with developmental psychologists, ethologists, neuroscientists, designers and architects, and his multidisciplinary research has been published in international scientific journals across these fields. He is the chief editor of the multidisciplinary journal "Frontiers in Neurorobotics", which publishes leading-edge research in the science and technology of embodied autonomous neural systems. He also authored two popular science books in French: "La naissance d'une langue chez les robots" (The Birth of a Language among Robots) (Hermes, 2001) and "Les machines apprivoisées : comprendre les robots de loisir" (Tamed Machines: Understanding Entertainment Robots) (Vuibert, 2005).
Publications by Frédéric Kaplan (bibliography)
Cuendet, Sébastien, Bonnard, Quentin, Kaplan, Frédéric and Dillenbourg, Pierre (2011): Paper interface design for classroom orchestration. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1993-1998.
Designing computer systems for educational purpose is a difficult task. While many of them have been developed in the past, their use in classrooms is still scarce. We make the hypothesis that this is because those systems take into account the needs of individuals and groups, but ignore the requirements inherent in their use in a classroom. In this work, we present a computer system based on a paper and tangible interface that can be used at all three levels of interaction: individual, group, and classroom. We describe the current state of the interface design and why it is appropriate for classroom orchestration, both theoretically and through two examples for teaching geometry.
© All rights reserved Cuendet et al. and/or their publisher
Kaplan, Frédéric and Jermann, Patrick (eds.): First Workshop on Paper Computing, PaperComp 2010, September 25, 2010, Copenhagen, Denmark.
Kaplan, Frédéric and Jermann, Patrick (2010): PaperComp 2010: first international workshop on paper computing. In: Proceedings of the 2010 International Conference on Ubiquitous Computing 2010. pp. 507-510.
Paper is not dead. Despite the progress of e-ink screens, smartphones and tablet interfaces, printed paper stays a convenient, versatile and familiar support for reading and writing. Books, magazines and other printed materials can now be connected to the digital world, enriched with additional content and even transformed into interactive interfaces. Conversely, some of the screen-based interfaces we currently use to interact with digital data could benefit from being paper-based or make use of specially designed material as light and flexible as paper. Far from a paperless world, printed documents could become ubiquitous interfaces in our everyday interaction with digital information. This is the dawn of paper computing.
© All rights reserved Kaplan and Jermann and/or their publisher
Do-Lenh, Son, Kaplan, Frédéric, Sharma, Akshit and Dillenbourg, Pierre (2009): Multi-finger interactions with papers on augmented tabletops. In: Villar, Nicolas, Izadi, Shahram, Fraser, Mike and Benford, Steve (eds.) TEI 2009 - Proceedings of the 3rd International Conference on Tangible and Embedded Interaction February 16-18, 2009, Cambridge, UK. pp. 267-274.
Do-Lenh, Son, Kaplan, Frédéric and Dillenbourg, Pierre (2009): Paper-based concept map: the effects of tabletop on an expressive collaborative learning task. In: Proceedings of the HCI09 Conference on People and Computers XXIII 2009. pp. 149-158.
Augmented tabletops have recently attracted considerable attention in the literature. However, little has been known about the effects that these interfaces have on learning tasks. In this paper, we report on the results of an empirical study that explores the usage of tabletop systems in an expressive collaborative learning task. In particular, we focus on measuring the difference in learning outcomes at individual and group levels between students using two interfaces: traditional computer and augmented tabletop with tangible input. No significant effects of the interface on individual learning gain were found. However, groups using traditional computer learned significantly more from their partners than those using tabletop interface. Further analysis showed an interaction effect of the condition and the group heterogeneity on learning outcomes. We also present our qualitative findings in terms of how group interactions and strategy differ in the two conditions.
© All rights reserved Do-Lenh et al. and/or their publisher
Kaplan, Frédéric (2009): Are gesture-based interfaces the future of human computer interaction?. In: Proceedings of the 2009 International Conference on Multimodal Interfaces 2009. pp. 239-240.
The historical evolution of human machine interfaces shows a continuous tendency towards more physical interactions with computers. Nevertheless, the mouse and keyboard paradigm is still the dominant one and it is not yet clear whether there is among recent innovative interaction techniques any real challenger to this supremacy. To discuss the future of gesture-based interfaces, I shall build on my own experience in conceiving and launching QB1, probably the first computer delivered with no mouse or keyboard but equipped with a depth-perceiving camera enabling interaction with gestures. The ambition of this talk is to define more precisely how gestures change the way we can interact with computers, discuss how to design robust interfaces adapted to this new medium and review what kind of applications benefit the most from this type of interaction. Through a series of examples, we will see that it is important to consider gestures not as a way of emulating a mouse pointer at a distance or as elements of a "vocabulary" of commands, but as a new interaction paradigm where the interface components are organized in the user's physical space. This is a shift of reference frame, from a metaphorical virtual space (e.g. the desktop) where the user controls a representation of himself (e.g. the mouse pointer) to a truly user-centered augmented reality interface where the user directly touches and manipulates interface components positioned around his body. To achieve this kind of interactivity, depth-perceiving cameras can be relevantly associated with robotic techniques and machine vision algorithms to create a "halo" of interactivity that can literally follow the user while he moves in a room. In return, this new kind of intimacy with a computer interface paves the ways for innovative machine learning approaches to context understanding. A computer like QB1 knows more about its user than any other personal computer so far. 
Gesture-based interaction is not a means of replacing the mouse with cooler or more intuitive ways of interacting; it leads to a fundamentally different approach to the design of human-computer interfaces.
© All rights reserved Kaplan and/or his/her publisher
Oudeyer, Pierre-Yves and Kaplan, Frédéric (2007): Intrinsic Motivation Systems for Autonomous Mental Development. In IEEE Transactions on Evolutionary Computation, 11 (2) pp. 265-286.
Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology.
Keywords: active learning, autonomy, behavior, complexity, curiosity, sensorimotor development, cognitive development, developmental trajectory, epigenetic robotics, intrinsic motivation, learning, reinforcement learning, values.
© All rights reserved Oudeyer and Kaplan and/or their publisher
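The core idea of the Intelligent Adaptive Curiosity mechanism described in the abstract above can be sketched in miniature: the robot partitions its sensorimotor space into regions, tracks prediction error per region, and preferentially samples the region where error is decreasing fastest (maximal learning progress). The sketch below is a hypothetical simplification under assumed details — the region class, the sliding-window size, and the epsilon-greedy choice are illustrative assumptions, not the paper's actual implementation.

```python
import random

class CuriosityRegion:
    """Tracks prediction errors for one sensorimotor region (illustrative simplification)."""

    def __init__(self):
        self.errors = []

    def record(self, error):
        """Store the prediction error observed after acting in this region."""
        self.errors.append(error)

    def learning_progress(self, window=5):
        """Learning progress = recent decrease in mean prediction error.

        Positive when the robot is getting better at predicting this region;
        near zero for regions that are already mastered or unlearnable noise.
        """
        if len(self.errors) < 2 * window:
            return 0.0
        older = sum(self.errors[-2 * window:-window]) / window
        recent = sum(self.errors[-window:]) / window
        return older - recent

def choose_region(regions, epsilon=0.1):
    """Pick the region with maximal learning progress (epsilon-greedy exploration)."""
    if random.random() < epsilon:
        return random.randrange(len(regions))
    return max(range(len(regions)), key=lambda i: regions[i].learning_progress())
```

Because progress (not raw error) drives selection, the robot avoids both trivially predictable regions (low error, zero progress) and pure noise (high error, zero progress), reproducing the "neither too predictable nor too unpredictable" behaviour the paper describes.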
Changes to this page (author)
05 Jul 2011: Modified
20 Apr 2011: Modified
03 Nov 2010: Modified
02 Nov 2010: Modified
11 Oct 2010: Added
01 Jun 2009: Modified
21 Nov 2007: Added
06 Nov 2007: Added
Page maintainer: The Editorial Team