Publication statistics

Pub. period: 2000-2012
Pub. count: 28
Number of co-authors: 34



Co-authors

Number of publications with 3 favourite co-authors:

Saul Greenberg: 8
Mario Costa Sousa: 6
James E. Young: 5

 

 

Productive colleagues

Ehud Sharlin's 3 most productive colleagues in number of publications:

Saul Greenberg: 140
Takeo Igarashi: 66
Mark Green: 50
 
 
 

Ehud Sharlin

Personal Homepage:
pages.cpsc.ucalgary.ca/~ehud/


Publications by Ehud Sharlin (bibliography)

2012
 

Young, James, Ishii, Kentaro, Igarashi, Takeo and Sharlin, Ehud (2012): Style by demonstration: teaching interactive movement style to robots. In: Proceedings of the 2012 International Conference on Intelligent User Interfaces 2012. pp. 41-50.

The style in which a robot moves, expressed through its gait or locomotion, can convey effective messages to people. For example, a robot could move aggressively in reaction to a person's actions, or alternatively react using a set of careful, submissive movements. Designing, implementing and programming robotic interfaces that react to users' actions with properly styled movements can be a difficult, daunting, and time-consuming technical task. On the other hand, most people can easily perform such stylistic tasks and movements, for example, by acting them out. Following this observation, we propose to enable people to use their existing teaching skills to directly demonstrate to robots, via in-situ acting, a desired style of interaction. In this paper we present an initial style-by-demonstration (SBD) proof of concept of our approach, allowing people to teach a robot specific, interactive locomotion styles by providing a demonstration. We present a broomstick-robot interface for directly demonstrating locomotion style to a collocated robot, and a design critique evaluation by experienced programmers that compares our SBD approach to traditional programming methods.

© All rights reserved Young et al. and/or ACM Press

 

Lapides, Paul, Sultanum, Nicole, Sharlin, Ehud and Sousa, Mario Costa (2012): Seamless mixed reality tracking in tabletop reservoir engineering interaction. In: Proceedings of the 2012 International Conference on Advanced Visual Interfaces 2012. pp. 725-728.

In this paper we present a novel mixed reality tracking system for collaborative tabletop applications that uses decorative markers and embedded application markers to create a continuous and seamless tracking space for mobile devices. Users can view and interact with mixed reality datasets on their mobile device, such as a tablet or smartphone, from distances both far and very near to the tabletop. We implement the tracking system in the context of a collaborative reservoir engineering tool that brings together many experts who need a private workspace to interact with unique datasets, which is supported by our system.

© All rights reserved Lapides et al. and/or ACM Press

2011
 

Sultanum, Nicole, Somanath, Sowmya, Sharlin, Ehud and Sousa, Mario Costa (2011): "Point it, split it, peel it, view it": techniques for interactive reservoir visualization on tabletops. In: Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011. pp. 192-201.

Reservoir engineers rely on virtual representations of oil reservoirs to make crucial decisions relating, for example, to the modeling and prediction of fluid behavior, or to the optimal locations for drilling wells. They are therefore in constant pursuit of better virtual representations of reservoir models, improved user awareness of their embedded data, and more intuitive ways to explore them, all ultimately leading to more informed decision making. Tabletops have great potential for providing powerful interactive representations to reservoir engineers, as well as for enhancing the flexibility, immediacy and overall capabilities of their analysis, consequently bringing more confidence into the decision-making process. In this paper, we present a collection of 3D reservoir visualization techniques on tabletop interfaces applied to the domain of reservoir engineering, and argue that these provide greater insight into reservoir models. We support our claims with findings from a qualitative user study conducted with 12 reservoir engineers, which gave us insight into our techniques, as well as a discussion on the potential of tabletop-based visualization solutions for the domain of reservoir engineering.

© All rights reserved Sultanum et al. and/or ACM Press

2010
 

Harris, John and Sharlin, Ehud (2010): Exploring emotive actuation and its role in human-robot interaction. In: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction 2010. pp. 95-96.

In this paper, we present our research efforts in exploring the role of motion and actuation in human-robot interaction. We define Emotive Actuation, and briefly discuss its function and importance in social robotic interaction. We propose a methodology for exploring Emotive Actuation in HRI, and present a robotic testbed we designed for this purpose. We conclude with informal results of a preliminary design critique we performed using our testbed.

© All rights reserved Harris and Sharlin and/or their publisher

 

Saulnier, Paul, Sharlin, Ehud and Greenberg, Saul (2010): Exploring interruption in HRI using wizard of oz. In: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction 2010. pp. 125-126.

We are interested in exploring how robots controlled using Wizard of Oz (WoO) should interrupt humans in various social settings. While there is considerable work on interruption and interruptibility in HCI, little has been done to explore how these concepts map to robotic interaction. As part of our efforts to investigate interruption and interruptibility in HRI, we used a WoO-based methodology to investigate robot behaviours in a simple interruption scenario. In this report we contribute a design critique that discusses this methodology and common concerns that could be generalized to other social HRI experiments, as well as reflections on our future interruption HRI research.

© All rights reserved Saulnier et al. and/or their publisher

 

Young, James E., Ishii, Kentaro, Igarashi, Takeo and Sharlin, Ehud (2010): Showing robots how to follow people using a broomstick interface. In: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction 2010. pp. 133-134.

Robots are poised to enter our everyday environments such as our homes and offices, contexts that present unique questions such as the style of the robot's actions. Style-oriented characteristics are difficult to define programmatically, a problem that is particularly prominent for a robot's interactive behaviors, those that must react accordingly to dynamic actions of people. In this paper, we present a technique for programming the style of how a robot should follow a person by demonstration, such that non-technical designers and users can directly create the style of following using their existing skill sets. We envision that simple physical interfaces like ours can be used by non-technical people to design the style of a wide range of robotic behaviors.

© All rights reserved Young et al. and/or their publisher

 

Ng, Wai Shan (Florence) and Sharlin, Ehud (2010): Tweeting halo: clothing that tweets. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 447-448.

People often like to express their unique personalities, interests, and opinions. This poster explores new ways that allow users to express their feelings in both physical and virtual settings. With our Tweeting Halo, we demonstrate how a wearable lightweight projector can be used for self-expression, very much like a hairstyle, makeup or a T-shirt imprint. Our current prototype allows a user to post a message physically above their head and virtually on Twitter at the same time. We also explore simple ways that allow physical followers of the Tweeting Halo user to easily become virtual followers by simply taking a snapshot of the projected tweet with a mobile device such as a camera phone. In this extended abstract we present our current prototype, and the results of a design critique we performed using it.

© All rights reserved Ng and Sharlin and/or their publisher

2009
 

Guo, Cheng, Young, James Everett and Sharlin, Ehud (2009): Touch and toys: new techniques for interaction with a remote group of robots. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 491-500.

Interaction with a remote team of robots in real time is a difficult human-robot interaction (HRI) problem, exacerbated by the complications of unpredictable real-world environments, with solutions often resorting to a larger-than-desirable ratio of operators to robots. We present two innovative interfaces that allow a single operator to interact with a group of remote robots. Using a tabletop computer, the user can configure and manipulate groups of robots directly, either by using their fingers (touch) or by manipulating a set of physical toys (tangible user interfaces). We recruited participants for a user study that required them to interact with a small group of remote robots in simple tasks, and present our findings as a set of design considerations.

© All rights reserved Guo et al. and/or ACM Press

 

Saulnier, Paul, Sharlin, Ehud and Greenberg, Saul (2009): Using bio-electrical signals to influence the social behaviours of domesticated robots. In: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction 2009. pp. 263-264.

Several emerging computer devices read bio-electrical signals (e.g., electro-corticographic signals, skin biopotential or facial muscle tension) and translate them into computer-understandable input. We investigated how one low-cost commercially-available device could be used to control a domestic robot. First, we used the device to issue direct motion commands; while we could control the robot somewhat, it proved difficult to do reliably. Second, we interpreted one class of signals as suggestive of emotional stress, and used that as an emotional parameter to influence (but not directly control) robot behaviour. In this case, the robot would react to human stress by staying out of the person's way. Our work suggests that affecting behaviour may be a reasonable way to leverage such devices.

© All rights reserved Saulnier et al. and/or ACM Press
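The stress-to-behaviour mapping described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, baseline, and scaling constants are purely hypothetical, and a real device would require per-user calibration.

```python
def stress_level(raw_signal, baseline, scale):
    """Map a raw bio-electrical reading to a 0..1 stress estimate.

    Illustrative assumption: stress grows linearly above a calibrated
    baseline, clamped to the [0, 1] range.
    """
    return max(0.0, min(1.0, (raw_signal - baseline) / scale))


def keep_away_distance(raw_signal, baseline=10.0, scale=40.0,
                       min_dist=0.5, max_dist=3.0):
    # Stress influences (rather than directly controls) the robot:
    # higher stress makes it keep a larger distance from the person.
    s = stress_level(raw_signal, baseline, scale)
    return min_dist + s * (max_dist - min_dist)
```

A reading at the baseline leaves the robot at its minimum comfort distance; a saturated reading pushes it to the maximum, matching the paper's "stay out of the person's way" behaviour.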

 

Marquardt, Nicolai, Young, James, Sharlin, Ehud and Greenberg, Saul (2009): Situated messages for asynchronous human-robot interaction. In: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction 2009. pp. 301-302.

An ongoing issue in human-robot interaction (HRI) is how people and robots communicate with one another. While there is considerable work in real-time human-robot communication, fairly little has been done in the asynchronous realm. Our approach, which we call situated messages, lets humans and robots asynchronously exchange information by placing physical tokens -- each representing a simple message -- in meaningful physical locations of their shared environment. Using knowledge of the robot's routines, a person can place a message token at a location, typically one relevant to redirecting the robot's behavior there. When the robot passes near that location, it detects the message and reacts accordingly. Similarly, robots can themselves place tokens at specific locations for people to read. Situated messages thus leverage embodied interaction, where token placement exploits the everyday practices and routines of both people and robots. We describe our working prototype, introduce application scenarios, explore message categories and usage patterns, and suggest future directions.

© All rights reserved Marquardt et al. and/or ACM Press
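The token-at-a-location exchange can be sketched as a tiny lookup structure. This is only a toy model of the idea, not the prototype from the paper; the class and function names are assumptions for illustration.

```python
class SituatedMessageSpace:
    """Toy model: physical locations mapped to message tokens."""

    def __init__(self):
        self._tokens = {}

    def place(self, location, message):
        # A person (or robot) leaves a token at a meaningful location.
        self._tokens[location] = message

    def read(self, location):
        # Reading a token consumes it, like picking it up off the floor.
        return self._tokens.pop(location, None)


def robot_step(space, location, default="patrol"):
    # When the robot passes a location, a token there overrides
    # its default routine for that spot.
    msg = space.read(location)
    return msg if msg is not None else default
```

For example, placing a "skip" token in the kitchen redirects the robot's behaviour once; after the token is consumed, the robot resumes its routine.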

 

Lapides, Paul, Sharlin, Ehud and Greenberg, Saul (2009): HomeWindow: an augmented reality domestic monitor. In: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction 2009. pp. 323-324.

Computation is increasingly prevalent in the home: it serves as a way to control the home itself, or it is part of the many digital appliances within it. The question is: how can home inhabitants effectively understand and control the digital home? Our solution lets a person examine and control their home surroundings through a mobile display that serves as a 'magic lens', where the detail shown varies with proximity. In particular, HomeWindow is an augmented reality system that superimposes an interactive graphical interface atop the physical yet digital artifacts in the home. One can get an overview of a room's computational state by looking through the display: the basic state of each digital hot spot is shown atop its physical counterpart. As one approaches a particular digital spot, more detailed information as well as a control interface is shown using a semantic zoom. Our current implementation works with two home devices. First, people can examine and remotely control the status of mobile domestic robots. Second, people can discover the power consumption of household appliances, where each appliance is surrounded by a colorful aura that reflects its current and historical energy use.

© All rights reserved Lapides et al. and/or ACM Press

 

Marquardt, Nicolai, Nacenta, Miguel A., Young, James E., Carpendale, Sheelagh, Greenberg, Saul and Sharlin, Ehud (2009): The Haptic Tabletop Puck: Tactile Feedback for Interactive Tabletops. In: Proceedings of Interactive Tabletops and Surfaces, Tabletop 2009, Banff, Canada. .

 

Marquardt, Nicolai, Nacenta, Miguel A., Young, James E., Carpendale, Sheelagh, Greenberg, Saul and Sharlin, Ehud (2009): The Haptic Tabletop Puck: tactile feedback for interactive tabletops. In: Proceedings of the 2009 ACM International Conference on Interactive Tabletops and Surfaces 2009. pp. 85-92.

In everyday life, our interactions with objects on real tables include how our fingertips feel those objects. In comparison, current digital interactive tables present a uniform touch surface that feels the same, regardless of what it presents visually. In this paper, we explore how tactile interaction can be used with digital tabletop surfaces. We present a simple and inexpensive device -- the Haptic Tabletop Puck -- that incorporates dynamic, interactive haptics into tabletop interaction. We created several applications that explore tactile feedback in the area of haptic information visualization, haptic graphical interfaces, and computer supported collaboration. In particular, we focus on how a person may interact with the friction, height, texture and malleability of digital objects.

© All rights reserved Marquardt et al. and/or their publisher

 

Marquardt, Nicolai, Nacenta, Miguel A., Young, James E., Carpendale, Sheelagh, Greenberg, Saul and Sharlin, Ehud (2009): The Haptic Tabletop Puck: the video. In: Proceedings of the 2009 ACM International Conference on Interactive Tabletops and Surfaces 2009. p. D2.

In everyday life, our interactions with objects on real tables include how our fingertips feel those objects. In comparison, current digital interactive tables present a uniform touch surface that feels the same, regardless of what it presents visually. In this video, we demonstrate how tactile interaction can be used with digital tabletop surfaces. We present a simple and inexpensive device -- the Haptic Tabletop Puck -- that incorporates dynamic, interactive haptics into tabletop interaction. We created several applications that explore tactile feedback in the area of haptic information visualization, haptic graphical interfaces, and computer supported collaboration. In particular, we focus on how a person may interact with the friction, height, texture and malleability of digital objects.

© All rights reserved Marquardt et al. and/or their publisher

 

Sharlin, Ehud, Watson, Benjamin, Sutphen, Steve, Liu, Lili, Lederer, Robert and Frazer, John (2009): A tangible user interface for assessing cognitive mapping ability. In International Journal of Human-Computer Studies, 67 (3) pp. 269-278.

Wayfinding, the ability to recall the environment and navigate through it, is an essential cognitive skill relied upon almost every day in a person's life. A crucial component of wayfinding is the construction of cognitive maps, mental representations of the environments through which a person travels. Age, disease or injury can severely affect cognitive mapping, making assessment of this basic survival skill particularly important to clinicians and therapists. Cognitive mapping has also been the focus of decades of basic research by cognitive psychologists. Both communities have evolved a number of techniques for assessing cognitive mapping ability. We present the Cognitive Map Probe (CMP), a new computerized tool for assessment of cognitive mapping ability that increases consistency and promises improvements in flexibility, accessibility, sensitivity and control. The CMP uses a tangible user interface that affords spatial manipulation. We describe the design of the CMP and, in extensive experimental testing, find that it is sensitive to factors known to affect cognitive mapping performance.

© All rights reserved Sharlin et al. and/or Academic Press

2008
 

Guo, Cheng and Sharlin, Ehud (2008): Exploring the use of tangible user interfaces for human-robot interaction: a comparative study. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 121-130.

In this paper we suggest the use of tangible user interfaces (TUIs) for human-robot interaction (HRI) applications. We discuss the potential benefits of this approach while focusing on low-level-of-autonomy tasks. We present an experimental robotic interaction test bed to support our investigation. We use the test bed to explore two HRI-related task-sets: robotic navigation control and robotic posture control. We discuss the implementation of these two task-sets using an AIBO robot dog. Both tasks were mapped to two different robotic control interfaces: a keypad interface, which resembles the interaction approach currently common in HRI, and a gesture input mechanism based on Nintendo Wii game controllers. We discuss the interfaces' implementation and conclude with a detailed user study evaluating these different HRI techniques in the two robotic task-sets.

© All rights reserved Guo and Sharlin and/or ACM Press

 

Watts, Cody and Sharlin, Ehud (2008): Photogeist: an augmented reality photography game. In: Inakage, Masa and Cheok, Adrian David (eds.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2008 December 3-5, 2008, Yokohama, Japan. pp. 288-291.

 

Watts, Cody, Sharlin, Ehud and Woytiuk, Peter (2008): Exploring interpersonal touch in computer games. In: Inakage, Masa and Cheok, Adrian David (eds.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2008 December 3-5, 2008, Yokohama, Japan. p. 423.

 

Xin, Min, Sharlin, Ehud and Sousa, Mario Costa (2008): Napkin sketch: handheld mixed reality 3D sketching. In: Feiner, Steven K., Thalmann, Daniel, Guitton, Pascal, Fröhlich, Bernd, Kruijff, Ernst and Hachet, Martin (eds.) VRST 2008 - Proceedings of the ACM Symposium on Virtual Reality Software and Technology October 27-29, 2008, Bordeaux, France. pp. 223-226.

 

Lapides, Paul, Sharlin, Ehud and Sousa, Mario Costa (2008): Three dimensional tangible user interface for controlling a robotic team. In: Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction 2008. pp. 343-350.

We describe a new method for controlling a group of robots in three-dimensional (3D) space using a tangible user interface called the 3D Tractus. Our interface maps the task space into an interactive 3D space, allowing a single user to intuitively monitor and control a group of robots. We present the use of the interface in controlling a group of virtual software bots and a physical Sony AIBO robot dog in a simulated Explosive Ordnance Disposal (EOD) environment involving a bomb hidden inside of a building. We also describe a comparative user study in which participants were asked to use both the 3D physical interface and a traditional 2D graphical user interface, in order to demonstrate the benefits and drawbacks of each approach for HRI tasks.

© All rights reserved Lapides et al. and/or ACM Press

2007
 

Xin, Min, Sharlin, Ehud, Sousa, Mario Costa, Greenberg, Saul and Samavati, Faramarz (2007): Purple crayon: from sketches to interactive environment. In: Inakage, Masa, Lee, Newton, Tscheligi, Manfred, Bernhaupt, Regina and Natkin, Stéphane (eds.) Proceedings of the International Conference on Advances in Computer Entertainment Technology - ACE 2007 June 13-15, 2007, Salzburg, Austria. pp. 208-211.

 

Watts, Cody and Sharlin, Ehud (2007): Save 'Em: physical gameplay using augmented reality techniques. In: Proceedings of the 2007 Conference on Future Play 2007. pp. 160-165.

We present Save 'Em, an augmented reality-based computer game designed to explore the challenge of making computer games more immersive and engaging by moving gameplay to the physical environment. As in the classic computer game, Lemmings, Save 'Em is based on maneuvering a group of slow-witted characters called Dudes through a treacherous maze. Using augmented reality techniques, Save 'Em places virtual game entities directly within the player's physical environment; gameplay takes place on a real game board rather than on a computer screen, and the Dudes' fate is tied directly to the player's physical actions. In this paper we discuss our Save 'Em game implementation and use our current findings to explain how moving game interaction from the virtual domain into the physical world using augmented reality can affect both gameplay and the players' overall experience.

© All rights reserved Watts and Sharlin and/or ACM Press

 

Young, James E., Xin, Min and Sharlin, Ehud (2007): Robot expressionism through cartooning. In: Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction 2007. pp. 309-316.

We present a new technique for human-robot interaction called robot expressionism through cartooning. We suggest that robots utilise cartoon-art techniques such as simplified and exaggerated facial expressions, stylised text, and icons for intuitive social interaction with humans. We discuss practical mixed reality solutions that allow robots to augment themselves or their surroundings with cartoon art content. Our effort is part of what we call robot expressionism, a conceptual approach to the design and analysis of robotic interfaces that focuses on providing intuitive insight into robotic states as well as the artistic quality of interaction. Our paper discusses a variety of ways that allow robots to use cartoon art and details a test bed design, implementation, and exploratory evaluation. We describe our test bed, Jeeves, which uses a Roomba, an iRobot vacuum cleaner robot, and a mixed-reality system as a platform for rapid prototyping of cartoon-art interfaces. Finally, we present a set of interaction content scenarios which use the Jeeves prototype: trash Roomba, the recycle police, and clean tracks, as well as initial exploratory evaluation of our approach.

© All rights reserved Young et al. and/or ACM Press

2006
 

Lapides, Paul, Sharlin, Ehud, Sousa, Mario Costa and Streit, Lisa (2006): The 3D Tractus: A Three-Dimensional Drawing Board. In: First IEEE International Workshop on Horizontal Interactive Human-Computer Systems Tabletop 2006 5-7 January, 2006, Adelaide, Australia. pp. 169-176.

2005
 

Asano, Takeshi, Sharlin, Ehud, Kitamura, Yoshifumi, Takashima, Kazuki and Kishino, Fumio (2005): Predictive interaction using the delphian desktop. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology 2005. pp. 133-141.

This paper details the design and evaluation of the Delphian Desktop, a mechanism for online spatial prediction of cursor movements in a Windows-Icons-Menus-Pointers (WIMP) environment. Interaction with WIMP-based interfaces often becomes a spatially challenging task when the physical interaction mediators are the common mouse and a high-resolution, physically large display screen. These spatial challenges are especially evident in overly crowded Windows desktops. The Delphian Desktop integrates simple yet effective predictive spatial tracking and selection paradigms into ordinary WIMP environments in order to simplify and ease pointing tasks. Predictions are calculated by tracking cursor movements and estimating spatial intentions using a computationally inexpensive online algorithm based on estimating the movement direction and peak velocity. In testing, the Delphian Desktop effectively shortened pointing time to faraway icons, and reduced the overall physical distance the mouse (and user hand) had to mechanically traverse.

© All rights reserved Asano et al. and/or ACM Press
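The direction-plus-peak-velocity prediction described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's calibrated model: the function name, the linear distance-from-peak-speed assumption, and the constant k are all hypothetical placeholders.

```python
import math


def predict_endpoint(samples, k=0.11):
    """Sketch of online endpoint prediction from partial cursor movement.

    samples: list of (t, x, y) cursor readings observed so far.
    Illustrative assumption: the total movement distance grows roughly
    linearly with the peak speed, scaled by the made-up constant k.
    """
    # Per-interval speeds from consecutive samples
    speeds = [
        math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
        for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:])
    ]
    peak_speed = max(speeds)
    # Movement direction estimated from the first to the latest sample
    (_, sx, sy), (_, ex, ey) = samples[0], samples[-1]
    length = math.hypot(ex - sx, ey - sy) or 1.0
    ux, uy = (ex - sx) / length, (ey - sy) / length
    # Predicted target lies along that direction at the predicted distance
    distance = k * peak_speed
    return sx + ux * distance, sy + uy * distance
```

Once the predicted endpoint is known, a system like the Delphian Desktop can warp or accelerate the cursor toward it, shortening pointing time to faraway targets.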

2004
 

Sharlin, Ehud, Watson, Benjamin, Kitamura, Yoshifumi, Kishino, Fumio and Itoh, Yuichi (2004): On tangible user interfaces, humans and spatiality. In Personal and Ubiquitous Computing, 8 (5) pp. 338-346.

2002
 

Sharlin, Ehud, Itoh, Yuichi, Watson, Benjamin, Kitamura, Yoshifumi, Sutphen, Steve and Liu, Lili (2002): Cognitive cubes: a tangible user interface for cognitive assessment. In: Terveen, Loren (ed.) Proceedings of the ACM CHI 2002 Conference on Human Factors in Computing Systems Conference April 20-25, 2002, Minneapolis, Minnesota. pp. 347-354.

2000
 

Sharlin, Ehud, Figueroa, Pablo, Green, Mark and Watson, Benjamin (2000): A Wireless, Inexpensive Optical Tracker for the CAVE(tm). In: VR 2000 2000. pp. 271-.

 
 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/ehud_sharlin.html
