38. Human-Robot Interaction

by Kerstin Dautenhahn

This chapter introduces and critically reflects upon some key challenges and open issues in Human-Robot Interaction (HRI) research. The chapter emphasizes that in order to tackle these challenges, both the user-centred and the robotics-centred aspects of HRI need to be addressed. The synthetic nature of HRI is highlighted and discussed in the context of methodological issues. Different experimental paradigms in HRI are described and compared. Furthermore, I will argue that due to the artificiality of robots, we need to be careful in making assumptions about the 'naturalness' of HRI, and question the widespread assumption that humanoid robots should be the ultimate goal in designing successful HRI. In addition to building robots for the purpose of providing services for and on behalf of people, a different direction in HRI is introduced, namely to use robots as social mediators between people. Examples of HRI research illustrate these ideas.

38.1 Background

Human-Robot Interaction (HRI) is a relatively young discipline that has attracted a lot of attention over the past few years due to the increasing availability of complex robots and people's exposure to such robots in their daily lives, e.g. as robotic toys or, to some extent, as household appliances (robotic vacuum cleaners or lawn mowers). Also, robots are increasingly being developed for real-world application areas, such as robots in rehabilitation or eldercare, or robots used in robot-assisted therapy and other assistive or educational applications.

This article is not meant to be a review of HRI per se; please consult e.g. Goodrich and Schultz (2007) or Dautenhahn (2007a) for such surveys and for discussions of the history and origins of this field. Instead, I would like to discuss a few key issues within the domain of HRI that often lead to misunderstandings or misinterpretations of research in this domain. The chapter will not delve into technical details but will focus on interdisciplinary aspects of this research domain, in order to inspire innovative new research that goes beyond the traditional boundaries of established disciplines.

Researchers may have different motivations for joining the field of HRI. Some may be roboticists working on advanced robotic systems with possible real-world applications, e.g. service robots that should assist people in their homes or at work; they may join the field in order to find out how to handle situations where these robots need to interact with people, so as to increase the robots' efficiency. Others may be psychologists or ethologists who take a human-centred perspective on HRI; they may use robots as tools to understand fundamental issues of how humans interact socially and communicate with each other and with interactive artifacts. Artificial Intelligence and Cognitive Science researchers may join the field with the motivation to understand and develop complex intelligent systems, using robots as embodied instantiations and testbeds.

Last but not least, a number of people are interested in studying the interaction of people and robots: how people perceive different types and behaviours of robots, how they perceive social cues or different robot embodiments, etc. The means to carry out this work is usually 'user studies'. Such work often has little technical content; e.g. it may use commercially available, fully programmed robots, or research prototypes showing few behaviours or being controlled remotely (via the Wizard-of-Oz approach, whereby a human operator, unknown to the participants, controls the robot), in order to create very constrained and controlled experimental conditions. Such research focuses strongly on humans' reactions and attitudes towards robots, and typically entails large-scale evaluations trying to find statistically significant results. Unfortunately this area of 'user studies', which is methodologically heavily influenced by experimental psychology and human-computer interaction (HCI) research, is often narrowly equated with the field of "HRI". "Shall we focus on the AI and technical development of the robot, or shall we do HRI?" is not an uncommon remark in research discussions. This tendency to equate HRI with 'user studies' is in my view very unfortunate, and it may in the long run sideline HRI and transform the field into a niche domain. HRI as a research domain is a synthetic science, and it should tackle the whole range of challenges, from the technical and AI-related to the psychological, social and behavioural.

38.2 HRI - a synthetic, not a natural science

HRI is a field that emerged during the early 1990s and has been characterized as follows:

"Human-Robot Interaction (HRI) is a field of study dedicated to understanding, designing, and evaluating robotic systems for use by or with humans" (Goodrich and Schultz, 2007, p. 204).

What is Human-Robot Interaction (HRI) and what does it try to achieve?

"The HRI problem is to understand and shape the interactions between one or more humans and one or more robots" (Goodrich and Schultz, 2007, p. 216).

The characterization of the fundamental HRI problem given above focuses on the issues of understanding what happens between robots and people, and how these interactions can be shaped, i.e. influenced, improved towards a certain goal etc.

The above view implicitly assumes a reference point of what is meant by "robot". The term is often traced back to the Czech word robota (forced labour), and its first usage is attributed to Karel Capek's play R.U.R.: Rossum's Universal Robots (1920). However, the term "robot" is far from clearly defined. Many technical definitions are available concerning its motor, sensory and cognitive functionalities, but little is specified about the robot's appearance, behaviour and interaction with people. As it happens, if a non-researcher interacts with a robot that he or she has never encountered before, then what matters is how the robot looks, what it does, and how it interacts and communicates with the person. The 'user' in such a context will not care much about the cognitive architecture that has been implemented, the programming language that has been used, or the details of the mechanical design.

Behaviours and appearances of robots have changed dramatically since the early 1990s, and they continue to change, with new robots appearing on the market and other robots becoming obsolete. The design range of robot appearances is huge, from mechanoid (mechanical-looking) to zoomorphic (animal-looking) to humanoid (human-like) machines, with android robots at the extreme end of human-likeness. Equally large is the design space of robot behaviour and cognitive abilities. Most robots are unique designs; their hardware and often their software may be incompatible with other robots or even with previous versions of the same robot. Thus, robots are generally discrete, isolated systems: they have not evolved in the way natural species have, and they have not adapted to their environments over evolutionary time. When biological species evolve, new generations are connected to previous generations in non-trivial ways; in fact, one needs to know the evolutionary history of a species in order to fully appreciate its morphology, biology, behaviour and other features. Robots are designed by people, and are programmed by people. Even robots that learn have been programmed how and when to learn. Evolutionary approaches to robots' embodiment and control (Nolfi and Floreano, 2000; Harvey et al., 2005) and developmental approaches to a robot's social and cognitive abilities (Lungarella et al., 2003; Asada et al., 2009; Cangelosi et al., 2010; Vernon et al., 2011; Nehaniv et al., 2013) may one day create a different situation, but at present, robots used in HRI are human-designed systems. This is very different from ethology, experimental psychology etc., which study biological systems. To give an example, in 1948 Edward C. Tolman wrote his famous article "Cognitive Maps in Rats and Men". Still today his work is among the key cited articles in research on navigation and cognitive maps in humans and other animals.
Rats and people are still the same two species; they have not transformed into completely different organisms since 1948, and results gained then can still be compared with results obtained today. In contrast, the robots that were available in the early 1990s and today's robots do not share a common evolutionary history; they are simply very different robotic 'species'.

Thus, what we mean by 'robot' today will be very different from what we mean by 'robot' in a hundred years' time. The concept of the robot is a moving target; we constantly reinvent what we consider to be a 'robot'. Studying interactions with robots and gaining general insights into HRI applicable across different platforms is therefore a big challenge. Focusing only on the 'H' in HRI, i.e. the human perspective of 'user studies', misses the important 'R', the robot component: the technological and robotics characteristics of the robot. Only a deep investigation of both aspects will eventually illuminate the elusive 'I', the interaction that emerges when we put people and interactive robots in a shared context. In my perspective, the key challenge and characterization of HRI can be phrased as follows:

"HRI is the science of studying people's behaviour and attitudes towards robots in relationship to the physical, technological and interactive features of the robots, with the goal to develop robots that facilitate the emergence of human-robot interactions that are at the same time efficient (according to the original requirements of their envisaged area of use), but are also acceptable to people, and meet the social and emotional needs of their individual users as well as respecting human values".

38.3 HRI - methodological issues

As discussed in the previous section, the concept of 'robot' is a moving target. Thus, unlike the biological sciences, HRI research suffers from being unable to compare results directly between studies using different types of robots. Ideally, one would like to carry out every HRI experiment with a multitude of robots and corresponding behaviours, which is practically impossible.

Let us consider a thought experiment and assume our research question is to investigate how a cylindrically shaped mobile robot should approach a seated person, and how the robot's behaviour and appearance influence people's reactions. The robot will be programmed to carry a bottle of water, approach the person from a certain distance, stop at a certain distance in the vicinity of the person, orient its front (or head) towards the person and say "Would you like a drink?". Video cameras record people's reactions to the robot, and after the experiment they complete a questionnaire on their views and experiences of the experiment. Note, there is in fact no bi-directional interaction involved; the person is mainly passive. The scenario has been simplified this way to be able to test different conditions. We only consider three values for each category, i.e. no continuous values. Despite these gross simplifications, as indicated in Table 38.1 below, we will end up with 3^7 = 2187 combinations and possible experimental conditions to which we may expose participants. For each condition we need a number, X, of participants, in order to satisfy statistical constraints. Each session, if kept to a very minimal scenario, will take at least 15 minutes, plus another 15 minutes for the introduction, debriefing, questionnaires/interviews, as well as signing of consent forms etc. Note, more meaningful HRI scenarios, e.g. those we conduct in our Robot House described below, typically involve scheduling one full hour for each participant per session. Since people's opinions of and behaviours towards robots are likely to change in long-term interactions, each person should be exposed to the same condition 5 times, which gives 10935 different sessions.
Also, the participants need to be chosen carefully; ideally one would also consider possible age and gender differences, as well as personality characteristics and other individual differences, which means repeating the experiment with different groups of participants. Regardless of whether we expose one participant to all conditions, or choose different participants for each condition, getting sufficient data for meaningful statistical analysis will clearly be impractical. We end up with about 328050 * X minutes required for the experiment (10935 sessions of 30 minutes each, with X participants per condition), not counting situations where the experiment has to be interrupted due to a system failure, rescheduling of appointments for participants etc. Clearly, running such an experiment is infeasible, and not desirable: given that only minimal conditions are addressed, results from this experiment would necessarily be very limited, and the effort certainly not worthwhile.

Feature                         Possible values (three per feature)
Height                          2m / 1m / 50cm
Speed                           fast / medium / slow
Voice                           human-like / robot-like / none
Colour of body                  red / blue / white
Approach distance to person     close / medium / far
Approach direction to person    frontal / side / side-back
Head                            human-like features / mechanical head / no head
...

Table 38.1: HRI thought experiment.
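The arithmetic of this thought experiment can be reproduced with a short calculation (a sketch only; the 30-minute session length and the five repetitions are the assumptions stated above):

```python
# Combinatorics of the HRI thought experiment: seven features,
# three values each, five repetitions per condition.
features = 7                 # height, speed, voice, colour, distance, direction, head
values_per_feature = 3
conditions = values_per_feature ** features        # 3^7 = 2187

repetitions = 5              # each person sees the same condition five times
sessions = conditions * repetitions                # 10935

minutes_per_session = 30     # 15 min trial + 15 min briefing/debriefing etc.
total_minutes = sessions * minutes_per_session     # 328050, per participant group

print(conditions, sessions, total_minutes)
# With X participants per condition, the total time scales to total_minutes * X.
```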

Given these constraints, a typical HRI experiment simplifies to an even greater extent. The above study could limit itself to a short and a tall robot and two different approach distances, resulting in four experimental conditions. The results would indicate how robot height influences people's preferred approach distances, but only in a very limited sense, since all other features would have to be held constant, i.e. the robot's appearance (apart from height), speed, voice, colour, approach direction, head feature, etc. would be chosen once and then kept constant for the whole experiment. Thus, any results from our hypothetical experiment would not allow us to extrapolate easily to other robot designs and behaviours, or to other user groups. Robots are designed artifacts, and they are a moving target; what we consider a typical 'robot' today will probably be very different from what people in 200 years consider a 'robot'. So will the results we have gained over the past 15 or 20 years still be applicable to tomorrow's robots?
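The reduced design can be enumerated directly (a sketch; the two levels chosen for each factor here are illustrative, not prescribed by the study):

```python
from itertools import product

# Reduced 2x2 design: only robot height and approach distance vary;
# all other features (voice, colour, head, ...) are held constant.
heights = ["short (50cm)", "tall (2m)"]     # illustrative levels
distances = ["close", "far"]

conditions = list(product(heights, distances))
for i, (h, d) in enumerate(conditions, start=1):
    print(f"Condition {i}: height={h}, approach distance={d}")

# Four conditions instead of 2187: tractable, but the results only
# generalize to robots sharing every feature that was held constant.
```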

As I have pointed out previously (Dautenhahn, 2007b), HRI is often compared to other experimental sciences, such as ethology and, in particular, experimental or even clinical psychology. Indeed, quantitative methods used in these domains often provide valuable guidelines and sets of established research methods for designing and evaluating HRI experiments, typically focusing on quantitative, statistical methods requiring large-scale experiments, i.e. involving large sample sizes and typically one or more control conditions. Due to the nature of this work, the studies are typically short-term, exposing participants to a particular condition only once or a few times. Textbooks on research methods in experimental psychology can provide guidelines for newcomers to the field. However, there is an inherent danger if such approaches are taken as the gold standard for HRI research, i.e. if every HRI study is measured against them. This is unfortunate, since many other methodological approaches exist that provide different, but equally valuable, insights into human-robot interaction. Such qualitative methods may include in-depth, long-term case studies where individual participants are exposed to robots over an extensive period of time. The purpose of such studies is focused more on the actual meaning of the interaction, the experience of the participants, any behavioural changes that may occur, and changes in participants' attitudes towards the robots or the interaction. Such approaches often lack control conditions but analyse interactions in great detail over a longer period of time. Other approaches, e.g. conversation-analytic methods (Dickerson et al., 2013; Rossano et al., 2013), may analyse in depth the detailed nature of the interactions and how interaction partners respond and attend to each other and coordinate their actions.

In the field of assistive technology and rehabilitation robotics, where researchers develop robotic systems for specific user groups, control conditions with different user groups are usually not required: if one develops systems to assist or rehabilitate people with motor impairments after a stroke, designs aids to help visually impaired people, or develops robotic technology to help children with autism learn about social behaviour and communication, contrasting their use of a robotic system with how healthy/neurotypical people may use the same system does not make much sense. We already know about the specific impairments of our user groups, and the purpose of such work is not to highlight again how they differ from healthy/neurotypical people. Also, the diversity of responses within the target user group is often of interest. Thus, in this domain, control groups only make sense if those systems are meant to be used by different target user groups, so that comparative studies can highlight how each of them would use and could benefit (or not) from such a system. However, most assistive and rehabilitative systems are designed especially for people with special needs, in which case control conditions with different user groups are not necessarily useful.

Note, an important part of control conditions in assistive technology is to test different systems, or different versions of the same system, in different experimental conditions. Such comparisons are important since they (a) provide data to further improve the system, and (b) can highlight the added value of an assistive system compared to conventional systems or approaches. For example, Werry and Dautenhahn (2007) showed that an interactive, mobile robot engages children with autism better than a non-robotic conventional toy.

A physician or physiotherapist may use robotic technology in order to find out about the nature of a particular medical condition or impairment, e.g. to find out about the nature of motor impairment after stroke, and may use an assessment robot to be tested with both healthy people and stroke patients. Similarly, a psychologist may study the nature of autism by using robotic artefacts, comparing, e.g., how children respond to social cues, speech or tactile interaction. Such artifacts would be tools in research on the nature of the disorder or disability, rather than an assistive tool built to assist the patients, which would also have to take into consideration the patients' individual differences, likes, dislikes and preferences in the context of using the tool.

Developing complex robots for human-robot interaction requires a substantial amount of resources in terms of researchers, equipment, know-how and funding, and it is not uncommon for the development of such a robot to take years until it is fully functioning. Examples of this are the robot 'butler' Care-O-bot® 3 (Parlitz et al., 2008; Reiser et al., 2013, cf. Fig. 1), whose first prototype was developed as part of the EU FP6 project COGNIRON (2004-2008), or the iCub robot (Metta et al., 2010, Fig. 2), developed from 2004-2008 as part of the 5.5-year FP6 project Robotcub. Both robots are still under development and are upgraded regularly. The iCub was developed as a research platform for developmental and cognitive robotics by a large consortium comprising several European partners developing the hardware and software of the robot. Another example is the IROMEC platform that was developed from 2006-2009 as part of the FP6 project IROMEC, Fig. 3. The robot has been developed as a social mediator for children with special needs who can learn through play. Results of the IROMEC project include not only the robotic platform, but also a framework for developing scenarios for robot-assisted play (Robins et al., 2010), and a set of 12 detailed play scenarios that the Robot-Assisted Therapy (RAT) community can use according to specific developmental and educational objectives for each child (Robins et al., 2012). In the IROMEC project a dedicated user-centred design approach was taken (Marti and Bannon, 2009; Robins et al., 2010); however, the project ended before a second design cycle could modify the platform based on trials with the targeted end-users. Such modifications would have been highly desirable, since interactions between users and new technology typically illuminate issues that were not considered initially.
In the case of the iCub, the robot was initially developed as a research platform for new cognitive systems robotics, so no concrete end users were envisaged. In the case of the Care-O-bot® 3, professional designers were involved in order to derive a 'friendly' design (Parlitz et al., 2008).

Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).


Figure 38.1 A-B: The Care-O-bot® 3 robot in the UH Robot House, investigating robot assistance for elderly users as part of the ACCOMPANY project (2011, ongoing). See a video (http://www.youtube.com/watch?v=qp47BPw__9M). The Robot House is based off-campus in a residential area, and is a more naturalistic environment for the study of home assistance robots than laboratory settings, cf. Figure 6. Bringing HRI into natural environments poses many challenges but also opportunities (e.g. Sabanovic et al. 2006; Kanda et al., 2007; Huttenrauch et al. 2009; Kidd and Breazeal, 2008; Kanda et al. 2010; Dautenhahn, 2007; Woods et al., 2007; Walters et al., 2008).

Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).

Figure 38.2: The iCub (2013) humanoid open-source platform, developed as part of the Robotcub project (2013).

Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).

Figure 38.3: The IROMEC robot which was developed as part of the IROMEC project (2013).

Thus, designing robots for HRI 'properly', i.e. involving users in the design and ensuring that the robot to be developed fulfils its targeted roles and functions and provides a positive user experience, remains a difficult task (Marti and Bannon, 2009). A number of methods are therefore used to gain input and feedback from users before the completion of a fully functioning robot prototype, see Fig. 4. Fig. 5 provides a conceptual comparison of these different prototyping approaches and experimental paradigms.

Courtesy of Kerstin Dautenhahn. Copyright status: Unknown (pending investigation).

Figure 38.4: Modified from Dautenhahn (2007b), sketching a typical development time line of HRI robots and showing different experimental paradigms. The dark arrows indicate that for those periods the particular experimental method is more useful than during other periods. Note, there are typically several iterations in the development process (not shown in the diagram), since systems may be improved after feedback from user studies with the complete prototype. Also, several releases of different systems may result, based on feedback from deployed robots after a first release to the user/scientific community.

Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).

Figure 38.5: Conceptual Comparison of Different Experimental Paradigms discussed in this chapter. TR (Theatrical Robot), VHRI (Video-based HRI), THRI (Theatre-based HRI), SISHIR (Situated Interactive Simulated HRI), Live HRI. Resource efficiency means that experiments need to yield relevant results quickly and cheaply (in terms of effort, equipment required, person months etc.). Outcome-relative fidelity means that outcomes of the study must be sufficiently trustworthy and accurate to support potentially costly design decisions taken based on the results (Derbinsky et al. 2013).

Even before a robot prototype exists, mock-up models might be used in order to support the initial phase of planning and specification of the system (see, e.g., Bartneck and Jun, 2004). Once a system's main hardware and basic control software have been developed, and safety standards are met, first interaction studies with participants may begin.

The above-mentioned Wizard-of-Oz technique (WoZ) is a popular evaluation technique that originated in HCI (Gould et al., 1983; Dahlback et al., 1993; Maulsby et al., 1993) and is now widely used in HRI research (Green et al., 2004; Koay et al., 2008; Kim et al., 2012). In order to carry out WoZ studies, a prototype version must be available that can be remotely controlled, unknown to the participants. Thus, WoZ is often used in cases where the robot's hardware has been completed but the robot's sensory, motor or cognitive abilities are still limited. However, having one or two researchers remotely controlling the robot's movements and/or speech can be cognitively demanding, and impractical in situations where the goal is that the robot should eventually operate autonomously. For example, in a care, therapy or educational context, remotely controlling a robot requires another researcher and/or care staff member to be available (cf. Kim et al., 2013). WoZ can be used for full teleoperation or for partial control, e.g. to simulate the high-level decision-making process of the robot. See Fig. 6 for an example of an HRI experiment using WoZ.
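As a purely illustrative sketch of the WoZ idea (the primitive names and the logging scheme below are hypothetical, not taken from any particular system), the setup can be thought of as a thin dispatch layer: the participant-facing robot only executes low-level primitives, while the hidden operator selects and triggers them:

```python
# Minimal Wizard-of-Oz sketch: a hidden operator triggers robot primitives;
# the robot itself performs no autonomous decision-making.
class WizardOfOzController:
    # The primitive set is hypothetical, chosen for illustration only.
    PRIMITIVES = {"approach", "stop", "say", "orient_towards"}

    def __init__(self):
        self.log = []  # record every issued command for later analysis

    def issue(self, primitive, **params):
        """Called from the hidden operator's interface, not by the robot."""
        if primitive not in self.PRIMITIVES:
            raise ValueError(f"unknown primitive: {primitive}")
        self.log.append((primitive, params))
        return primitive, params

# Example: the operator hand-scripts the drink-offering scenario.
woz = WizardOfOzController()
woz.issue("approach", distance_m=1.0)
woz.issue("orient_towards", target="person")
woz.issue("say", text="Would you like a drink?")
print(len(woz.log))  # three logged commands
```

Keeping a command log like this is what later allows the "wizarded" behaviour to be analysed, and eventually replaced by an autonomous decision-making module.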

Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).


Figure 38.6: a) Two researchers controlling movement and speech of a robot used in a (simulated) home companion environment (b). 28 subjects interacted with the robot in physical assistance tasks (c), and they also had to negotiate space with the robot (d), e) layout of experimental area for WoZ study. The study was performed in 2004 as part of the EU project COGNIRON. Dautenhahn (2007a), Woods et al. (2007), Koay et al. (2006) provide some results from these human-robot interaction studies using a WoZ approach.

Once WoZ experiments are technically feasible, video-based methods can be applied, whereby typically groups of participants are shown videos of the robots interacting with people and their environments. The VHRI (Video-based HRI) methodology has been used successfully in a variety of HRI studies (Walters et al., 2011; Severinson-Eklund, 2011; Koay et al., 2007, 2011; Syrdal et al., 2010; Lohse et al., 2008; Syrdal et al., 2008). Previous studies compared live HRI and video-based HRI and found comparable results in a setting where a robot approached a person (Woods et al., 2006a, b). However, in the scenarios used for the comparative study there was little dynamic interaction and coordination between the robot's and the person's behaviour. It can be expected that the higher the contingency and coordination between human and robot in the interaction, the less well VHRI approximates the live interaction experience (cf. Figure 7).

Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).

Figure 38.7: Illustration of decrease of suitability of the Video HRI method with increasing contingency of the interaction (e.g. verbal or non-verbal coordination among the robot and the human in interaction).

Another prototyping method that has provided promising results is the Theatrical Robot (TR) method, which can be used in instances where a robot is not yet available but live human-robot interaction studies are desirable; see, for example, Fig. 8. The Theatrical Robot is a person (a professional such as an actor or mime artist) dressed up as a robot and behaving according to a specific, pre-scripted robotic behaviour repertoire. Thus, the Theatrical Robot can serve as a life-sized, embodied, simulated robot that can simulate human-like behaviour and cognition. Robins et al. (2004) used this method successfully in studies which tried to find out how children with autism react to life-sized robots, and how this reaction depends on whether the robot looks like a person or like a robot. The small group of four children studied showed strong initial preferences for the Theatrical Robot in its robotic appearance, compared to the Theatrical Robot showing the same (robotic) behaviour repertoire but dressed as a human being; see example results in Figure 8. Note, in both conditions the 'robot' was trained not to respond to the children. In the Robins et al. (2004) study a mime artist was used in order to ensure that the TR could precisely and reliably control his behaviour during the trials.

The Theatrical Robot paradigm allows us to conduct user studies from a very early phase of planning of the robotic system. Once working prototypes exist, the TR method is less likely to be useful, since studies can then be run with a 'real' system. However, the TR can also be used as a valuable method in its own right, for investigating how people react to other people depending on their appearance, or how people would react to a robot that looks and behaves very human-like. Building robots that truly look and behave like human beings is still a future goal; although android robots can simulate human appearance, they lack human-like movements, behaviour and cognition (MacDorman and Ishiguro, 2006). Thus, the TR can shortcut the extensive development process and allow us to make predictions about how people may react to highly human-like robots.

Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).


Figure 38.8: Using the Theatrical Robot paradigm in a study investigating the responses of children with autism to a human-sized robot dressed either as a robot (plain appearance, a) or as a human (human appearance), showing identical behaviour in both conditions. Panels b, c, d show the responses of three children in both experimental conditions. Panel e shows example results on the children's gaze behaviour towards the TR.

In addition to prototyping robots and human-robot interaction, a key problem in many HRI studies is the prototyping of scenarios. For example, in the area of developing home companion robots, researchers study the use of robots for different types of assistance: physical, cognitive and social. This may include helping elderly users at home with physical tasks (e.g. fetch-and-carry), reminding users of appointments, events, or the need to take medicine (the robot as a cognitive prosthetic), or social tasks (encouraging people to socialize, e.g. call a friend or family member or visit a neighbour). Implementing such scenarios again presents a huge developmental effort, in particular when the robot's behaviour should be autonomous rather than fully scripted, and should adapt to users' individual preferences and their daily life schedule. One way to prototype a scenario is to combine a WoZ method with robotic theatre performance in front of an audience. The Theatre-based HRI method (THRI) has provided valuable insights into users' perceptions of scenarios involving e.g. home companion robots (Syrdal et al., 2011; Chatley et al., 2010). Theatre and drama have been used in Human-Computer Interaction to explore issues around the use of future technologies (see e.g. Iacucci and Kuutti, 2002; Newell et al., 2006). In the context of HRI, THRI consists of a performance of actors on stage interacting with robots that are WoZ controlled, or semi-autonomously controlled. Subsequent discussions with the audience, and/or questionnaires and interviews, are then used to study the audience's perception of the scenarios and the displayed technology. Discussions between the audience and the actors on stage (in character) are typically mediated by a facilitator. This method can reach larger audiences than individual HRI studies would, and can thus be very useful for prototyping scenarios.

Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).


Figure 38.9 A-B-C: a) The SISHRI methodological approach (Derbinsky et al., 2013): situated, real-time HRI with a simulated robot to prototype scenarios. b) Example of the simulated interaction shown on the tablet used by the participant. On the left, the homepage of the web application developed for rapid scenario prototyping is shown. This demo version implements three actions (Drawer, GoTo and ToDo): Drawer gives the user the possibility of opening and closing the robot's drawer; GoTo simulates the time the robot would take to travel from one position to another (picture on the right); ToDo was introduced to expand the functionality of the prototype: its activities relate to the user, rather than the robot, and can be logged in the system (e.g. drinking, eating, etc.). On the right, the GoTo functionality is shown. In this example, the user can send the robot from the kitchen (the robot's current position) to any other place selected from a list (kitchen, couch, desk, drawer). In the picture, the user has chosen the kitchen.

Recently, a new resource-efficient method for scenario prototyping has been proposed; a proof-of-concept implementation is described in Derbinsky et al. (2013). Here, an individual user, with the help of a handheld device, goes 'through the motions' of robot home assistance scenarios without an actual physical robot. The tablet computer simulates the robot's actions as embedded in a smart environment. The advantage of this method is that the situatedness of the interaction is maintained, i.e. the user interacts in a real environment, in real time, with a simulated robot. The method, which can be termed SISHRI (Situated Interactive Simulated HRI), maintains the temporal and spatial aspects and the logical order of action sequences in the scenario, but omits the robot. It allows testing of the acceptability and general user experience of complex scenarios, e.g. home assistance scenarios, without requiring a robot. The system responds based on activities recognized via the sensor network and on the input from the user via the user interface. The method is likely to be most useful for prototyping complex scenarios before an advanced working prototype is available (see Fig. 9).
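To make the idea concrete, the following is a minimal sketch (not the actual Derbinsky et al. (2013) web application; all class, method and timing names are assumptions for illustration) of how the three demo actions from Fig. 9 could be simulated without a physical robot:

```python
import time

class SimulatedRobot:
    """Toy SISHRI-style robot stand-in: the device simulates the robot's
    actions (Drawer, GoTo, ToDo) while the user acts in the real home."""

    # Assumed travel times (seconds) between locations, standing in for
    # the real robot's navigation time.
    TRAVEL_SECONDS = {("kitchen", "couch"): 8,
                      ("kitchen", "desk"): 5,
                      ("kitchen", "drawer"): 3}

    def __init__(self, location="kitchen"):
        self.location = location
        self.drawer_open = False
        self.todo_log = []  # user activities (e.g. 'drinking', 'eating')

    def drawer(self, open_it: bool) -> str:
        # Drawer action: open or close the robot's drawer.
        self.drawer_open = open_it
        return "drawer open" if open_it else "drawer closed"

    def go_to(self, target: str, simulate_delay: bool = False) -> str:
        # GoTo action: optionally wait to mimic the robot's travel time,
        # preserving the temporal structure of the scenario.
        seconds = self.TRAVEL_SECONDS.get((self.location, target), 0)
        if simulate_delay:
            time.sleep(seconds)
        self.location = target
        return f"arrived at {target} after ~{seconds}s"

    def todo(self, activity: str) -> str:
        # ToDo action: log a user activity rather than a robot action.
        self.todo_log.append(activity)
        return f"logged: {activity}"
```

A scenario script would then call these methods in the order prescribed by the home assistance scenario, keeping the logical sequence of actions intact while the robot itself is absent.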

Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).


Figure 38.10: Illustrating the design space of robots. Shape and functionality depend on a robot's application and use. a) KASPAR, the minimally expressive robot developed at the University of Hertfordshire, used for HRI studies including robot-assisted therapy for children with autism; b) Roomba (iRobot), a vacuum cleaning robot shown operating in the University of Hertfordshire Robot House; c) Autom, the weight loss coach (credit: Intuitive Automata); d) the Pleo robot (Ugobe), designed as a 'care-receiving' robot encouraging people to develop a relationship with it; e) the Robosapien toy robot (WowWee); f) design space, niche space and resources; see main text for explanation.

The development of any particular HRI study and the methodologies used need to consider the three key constraints shown in Fig. 10. The Robot Design Space comprises all the different possible designs in terms of robot behaviour and appearance. The Niche Space consists of the requirements on the robot and the human-robot interaction as relevant to particular scenarios and application areas. The resources (in terms of time, funding, availability of participants, etc.) need to be considered when selecting any particular method for HRI studies. Exhaustively exploring the design space is infeasible, so design decisions need to be made carefully.

38.4 HRI - About (not) romanticizing robots

The present reality of robotics research is that robots are far from showing any truly human-like abilities in terms of physical activities, cognition or social abilities (in terms of flexibility, "scaling up" of abilities, "common sense", graceful degradation of competencies, etc.). Nevertheless, in the robotics and HRI literature they are often portrayed as "friends", "partners", "co-workers", etc., all of which are genuinely human terms. These terms are rarely used in an operational sense, and few definitions exist; most often they are used without further reflection. Previously, I proposed a more formal definition of companion robots, i.e. "A robot companion in a home environment needs to 'do the right things', i.e. it has to be useful and perform tasks around the house, but it also has to 'do the things right', i.e. in a manner that is believable and acceptable to humans" (Dautenhahn, 2007a, p. 683).

In contrast to the companion paradigm, where the robot's key function is to take care of the human's needs, in the caretaker paradigm it is the person's duty to take care of the 'immature' robot. In that same article I also argued that, due to evolutionarily determined cognitive limits, we may be constrained in how many "friends" we can make. When humans form relationships with people, this entails emotional, psychological and physiological investment. We would tend to make a similar investment towards robots, which do not reciprocate it. A robot will 'care' about us as much or as little as its programmers want it to. Robots are not people; they are machines. Biological organisms, but not robots, are sentient beings: they are alive, they have an evolutionary and developmental history, and they have life experiences that shape their behaviour and their relationships with the environment. In contrast, machines are neither alive nor sentient; they can express emotions and pretend to 'bond' with you, but these are simulations, not the real experiences that humans share. The 'emotions' of a humanoid robot may look human-like, but the robot does not feel anything, and its expressions are not based on any experiential understanding. A humanoid robot which looks deeply into your eyes and mutters "I love you" is running a programme. We may enjoy this interaction, in the way we enjoy role play or immersing ourselves in imaginary worlds, but one needs to be clear about the inherently mechanical nature of the interaction. As Sherry Turkle has pointed out, robots as 'relational artifacts' that are designed to encourage people to develop a relationship with them can lead to misunderstandings concerning the authenticity of the interaction (Turkle, 2007).
If children grow up with a robot companion as their main friend, one they interact with for several hours each day, they will learn that they can simply switch it off or lock it in a cupboard whenever it annoys or challenges them. What concept of friendship will these children develop? Will they develop separate categories, e.g. 'friendship with a robot', 'friendship with pets' and 'friendship with people'? Will they apply the same moral and ethical concerns to robots, animals and people? Or will their notion of friendship, shaped by interactions with robots, spill over to the biological world? Similar issues are discussed in terms of children's possible addiction to computer games and game characters, and the extent to which these may have a negative impact on their social and moral development. Will people who grow up with a social robot view it as a 'different kind', regardless of its human or animal likeness? Will social robots become new ontological categories (cf. Kahn et al., 2004; Melson et al., 2009)? At present such questions cannot be answered; they will require long-term studies into how people interact with robots over years or decades, and such results are difficult to obtain and may be ethically undesirable. However, robotic pets for children and robotic assistants for adults are becoming more and more widespread, so we may get answers to these questions in the future. The answers are unlikely to be 'black and white', similar to the question of whether computer games are beneficial for children's cognitive, academic and social development, where answers remain inconclusive (Griffiths, 2002; Kierkegaard, 2008; Dye et al., 2009; Anderson et al., 2010; Jackson et al., 2011).

Humans have been fascinated by autonomous machines throughout history, so the fascination with robots, what they are and what they can be, will stay with us for a long time to come. However, it is advisable to have the discussion on the nature of robots based on facts and evidence, and informed predictions, rather than pursuing a romanticizing fiction.

38.5 HRI - there is no such thing as 'natural interaction'

A widespread assumption within the field of HRI is that 'good' interaction with a robot must reflect natural (human-human) interaction and communication as closely as possible in order to ease people's interpretation of the robot's behaviour. Indeed, people's face-to-face interactions are highly dynamic and multi-modal, involving a variety of gestures, language (content as well as prosody), body posture, facial expressions, eye gaze and, in some contexts, tactile interactions. This has led to intensive research into how robots can produce and understand gestures, how they can understand when they are being spoken to and respond correspondingly, and how robots can use body posture, eye gaze and other cues to regulate the interaction; cognitive architectures are being developed to provide robots with natural social behaviour and communicative skills (e.g. Yamaoka et al., 2007; Shimada and Kanda, 2012; Salem, 2012; Mutlu et al., 2012). The ultimate goal inherent in such work is to create human-like robots, which look and behave in a human-like manner. While we discuss below in more detail that the goal of human-like robots needs to be reflected upon critically, the fundamental assumption of the existence of 'natural' human behaviour is also problematic. What is natural behaviour to begin with? Is a person behaving naturally in his own home, when playing with his children, talking to his parents, going to a job interview, meeting colleagues, giving a presentation at a conference? The same person behaves differently in different contexts and at different times during their lifetime. Were our hunter-gatherer ancestors behaving naturally when trying to avoid big predators and finding shelter? If 'natural' is meant to be 'biologically realistic' then the argument makes sense: a 'natural gesture' would then be a gesture using a biological motion profile and an arm that faithfully models human arm morphology.
Similarly, a natural smile would then try to emulate the complexity of human facial muscles and emotional expressions. However, when moving up from the level of movements and actions to social behaviour, the term 'natural' is less meaningful. To give an example, how polite should a robot be? Humans show different behaviour and use different expressions when attending a formal work dinner than when having a family dinner at home. As humans, we may have many different personal and professional roles in life, e.g. daughter/son, sibling, grandmother, uncle, spouse, employee, employer, committee member, volunteer, etc. We behave slightly differently in all these circumstances, from the way we dress, speak and behave, to what we say and how we say it, to our style of interaction and the manner in which we use tactile interaction. We can seamlessly switch between these different roles, which are just different aspects of 'who we are', expressions of our self or our 'centre of narrative gravity', as it has been phrased by Daniel Dennett. People can deal with such different situations since we continuously re-construct the narratives of our (social) world (Dennett, 1989/91; see also Turner, 1996).

"Our fundamental tactic of self-protection, self-control, and self-definition is not building dams or spinning webs, but telling stories - and more particularly concocting and controlling the story we tell others - and ourselves - about who we are.

These strings or streams of narrative issue forth as if from a single source - not just in the obvious physical sense of flowing from just one mouth, or one pencil or pen, but in a more subtle sense: their effect on any audience or readers is to encourage them to (try to) posit a unified agent whose words they are, about whom they are: in short, to posit what I call a center of narrative gravity." (Dennett, 1989/91)

Thus, for humans, behaving 'naturally' is more than having a given or learnt behaviour repertoire and making rational decisions in any one situation on how to behave. We 'create' these behaviours, reconstructing them while taking into consideration the particular context, interaction histories, etc.; we create behaviour consistent with our 'narrative self'. For humans, such behaviour can be called 'natural'.

What is 'natural' behaviour for robots? Where is their notion of 'self', their 'centre of narrative gravity'? Today's robots are machines; they may have complex 'experiences', but these experiences are no different from those of other complex machines. We can program them to behave differently in different contexts, but from their perspective it does not make any difference whether they behave one way or the other. They are typically not able to relate perceptions of themselves and their environment to a narrative core; they are not re-creating, but rather recalling, experience. Robots do not have a genuine evolutionary history; their bodies and their behaviour (including gestures, etc.) have not evolved over evolutionary timescales as an adaptive response to challenges in the environment. For example, the shape of our human arms and hands has very good 'reasons': it goes back to the design of the forelimbs of our vertebrate ancestors, used first for swimming, then, in tetrapods, for walking and climbing; later, bipedal postures freed the hands to grasp and manipulate objects, to use tools, and to communicate via gestures. The design of our arms and hands is not accidental, and it is not 'perfect' either. But our arms and hands embody an evolutionary history of adaptation to different environmental constraints. In contrast, there is no 'natural gesture' for a robot, in the same way as there is no 'natural' face or arm for a robot.

To conclude, there appears to be little ground for stating that a particular behaviour X is natural for a robot Y. Any behaviour of a robot will be natural or artificial solely depending on how the humans interacting with the robot perceive it. Thus, naturalness of robot behaviour is in the eye of the beholder, i.e. the human interacting with or watching the robot; it is not a property of the robot's behaviour itself.

38.6 HRI - new roles

As more and more robotic systems are used 'in the wild' (Sabanovic et al., 2006; Salter et al., 2010), researchers have discussed different roles for such robots.

Previously, I proposed different roles of robots in human society (Dautenhahn, 2003), including:

  • a machine operating without human contact;
  • a tool in the hands of a human operator;
  • a peer as a member of a human—inhabited environment;
  • a robot as a persuasive machine influencing people's views and/or behaviour (e.g. in a therapeutic context);
  • a robot as a social mediator mediating interactions between people;
  • a robot as a model social actor.

Dautenhahn et al. (2005) investigated people's opinions on viewing robots as friends, assistants or butlers. Others have discussed similar roles of robots and humans, e.g. humans can assume the role of a supervisor, an operator, a mechanic, a peer, or a bystander (Scholtz, 2003). Goodrich and Schultz (2007) have proposed roles for a robot as a mentor for humans, or as an information consumer whereby a human uses information provided by a robot. Other roles that have been discussed recently are robots as team members in collaborative tasks (Breazeal et al., 2004), robots as learners (Thomaz and Breazeal, 2008; Calinon et al., 2010; Lohan et al., 2011), and robots as cross-trainers in HRI teaching contexts (Nikolaidis and Shah, 2013). Teaching robots movements, skills and language in interaction and/or by demonstration is a very active area of research (e.g. Argall et al., 2009; Thomaz and Cakmak, 2009; Konidaris et al., 2012; Lyon et al., 2012; Nehaniv et al., 2013); however, how to teach in natural, unstructured and highly dynamic environments remains a challenge. For humans and some other biological species, social learning is a powerful tool for learning about the world and each other, for teaching and for developing culture, and it remains a very interesting challenge for future generations of robots learning in human-inhabited environments (Nehaniv and Dautenhahn, 2007). Ultimately, robots that can learn flexibly and efficiently, acquiring socially appropriate behaviours that enhance their own skills and performance while remaining acceptable to the humans interacting with them, will have to develop suitable levels of social intelligence (Dautenhahn, 1994, 1995, 2007a).

38.7 Robots as Service Providers

A lot of research on intelligent, autonomous robots has focused on how robots could provide services (assistive or otherwise) that people originally performed. Robots have replaced many workers on factory assembly lines, and more recently robots have been discussed e.g. in the context of providing care solutions for elderly people in countries with rapidly changing demographics (see Fig. 11). In many scenarios, robots are meant to work alongside people and to take over some tasks that humans previously performed.

Recently, a number of projects worldwide have investigated the use of robots in elder-care in order to allow users to live independently in their homes for as long as possible; see e.g. Heylen et al. (2012), Huijnen et al. (2011). Such research poses many technological, ethical and user-related challenges; for examples of such research projects see Fig. 6 for HRI research on home companions in the COGNIRON project (2004-2008), Fig. 12 for research in the LIREC project (2008-2012), and Fig. 1 for social and empathic home assistance in a smart home as part of the above-mentioned ACCOMPANY project. Many such projects use a smart home environment, e.g. the University of Hertfordshire Robot House, which is equipped with dozens of sensors. Success in this research domain will depend on acceptability, not only by the primary users of such systems (elderly people) but also by other users (family, friends, neighbours), including formal and informal carers. Thus, taking the 'human component' into consideration is important for such projects. See Amirabdollahian et al. (2013) for a more detailed discussion of the objectives and approaches taken in the ACCOMPANY project.


Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).

Figure 38.11: Population projections for the 27 Member States, showing an increase of people aged 65 and above from 17.57% to 29.54%, with a decrease of people aged between 15-64 from 67.01% to 57.42%. Diagram taken with permission from Amirabdollahian et al. (2013).

Note that the domain of robots for elder-care poses many ethical challenges (see e.g. Sharkey and Sharkey, 2011, 2012), and the investigation of these issues is indeed one of the aims of the ACCOMPANY project. In the following I would like to provide some personal thoughts on some of these matters. Robots are often envisaged as providing company and social contact, stimulation and motivation, and also as facilitating communication among e.g. residents in a care home; see the many years of studies with the seal robot PARO (Wada and Shibata, 2007; Shibata et al., 2012). Indeed, care staff often have very little time for social contact (typically in the range of a few minutes per day per person). So care providers may show great interest in using robots for social company, and elderly people might welcome such robots as a means to combat loneliness. However, as I have argued above, interactions with robots are inherently mechanical in nature; robots do not reciprocate love and affection, they can only simulate them. Thus, human beings are and will remain the best experts at providing social contact and company, and at experiencing and expressing empathy, affection and mutual understanding. Robots might instead be designed to take over the more practical tasks that dominate the work day of care staff, e.g. cleaning, feeding and washing, potentially freeing up staff to provide social contact through genuine, meaningful interactions. Unfortunately, it is technically highly challenging to build robots that can actually perform such tasks, although it is an active area of research (cf. the RI-MAN robot and Yamazaki et al., 2012), while it is well within our reach to build robots that provide some basic version of company and social interaction; 'relational artifacts', in the sense of Turkle et al. (2006), already exist today.
If one day robots are able to provide both the social and the non-social aspects of care, will human care staff become obsolete due to the need to cut costs in elder-care? Or will robots be used to do the routine work, freeing the time of human carers to engage with elderly residents in meaningful and emotionally satisfying ways? The latter option would not only be more successful in providing efficient and at the same time humane care, it would also acknowledge our biological roots, emotional needs and evolutionary history: as a species, our social skills are the one domain where we typically possess our greatest expertise, while our 'technical/mechanical' expertise can be replaced more easily by machines.

Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).


Figure 38.12: The Sunflower robot developed by Dr. Kheng Lee Koay at the University of Hertfordshire. Based on a Pioneer mobile platform (left), a socially interactive and expressive robot was developed for the study of assistance scenarios for a robot companion in a home context.

An example of a robot designed specifically for home assistance is the Sunflower robot illustrated in Figure 12. It consists of a mobile base, a touch-screen user interface and diffuse LED display panels providing expressive multi-coloured light signals to the user. Other expressive behaviours include sound, base movement, and movements of the robot's neck. The non-verbal expressive behaviours were inspired by the expressive behaviour that dogs display in human-dog interaction in scenarios similar to those used in the Robot House, in collaboration with Prof. Ádám Miklósi's group at ELTE in Hungary. The robot possesses some human-like features (a head, arms) but its overall design is non-humanoid. This design follows our previous research results showing that mechanoid (mechanical-looking) robots are well accepted by users with different individual preferences. The robot's expressive behaviour (light, sound, movements) was inspired by how dogs interact with their owners (Syrdal et al., 2010; Koay et al., 2013). Figure 12 shows: a) an early Sunflower prototype; b, c) Sunflower; d, e) HRI home assistance scenarios with an early Sunflower prototype in comparison to dog-owner interaction in a comparable scenario (Syrdal et al., 2010). For different expressions of Sunflower see (the picture gallery) and (a video).

38.8 Robots as Social Mediators

Above we discussed the role of robots as service providers, companions and 'helpers'. A complementary view of robots is to consider their role as social mediators — machines that help people to connect with each other. Such robots are not meant to replace or complement humans and their work; instead, their key role is helping people to engage with others. One area where robotic social mediators have been investigated is the domain of robot-assisted therapy (RAT) for children with autism.

Autism is a lifelong developmental disorder characterized by impairments in communication, social interaction, and imagination and fantasy (often referred to as the triad of impairments; Wing, 1996), as well as restricted interests and stereotypical behaviours. Autism is a spectrum disorder, and there are large individual differences in how autism manifests itself in a particular child (for diagnostic criteria see DSM-IV, 2000). The exact causes of autism are still under investigation, and at present no cure exists. A variety of therapeutic approaches exist, and using robots or other computer technology could complement these existing approaches. The prevalence rate for autism spectrum disorders is often reported as around 1 in 100, but statistical data vary.

While in 1979 Weir and Emanuel had encouraging results with one child with autism using a button box to control a LOGO Turtle from a distance, the use of interactive, social robots as therapeutic tools was first introduced by the present author (Dautenhahn, 1999) as part of the Aurora project (1998, ongoing). Very early in this work the concept of a social mediator for children with autism was investigated, with the aim of encouraging interaction between children with autism and other people. The use of robots for therapeutic or diagnostic applications has grown rapidly over the past few years; see recent review articles, which show the breadth of this research field and the number of active research groups (Diehl et al., 2012; Scassellati et al., 2012), compared to an earlier review (Dautenhahn and Werry, 2004).

In the earliest work on robots as social mediators for children with autism, Werry et al. (2001) and the present author (Dautenhahn, 2003) gave examples of trials with pairs of children who started interacting with each other in a scenario where they had to share an autonomous, mobile robot that they could play with. Work with the humanoid robot Robota (Billard et al., 2006) later showed that the robot could encourage children with autism to interact with each other, as well as with a co-present experimenter (Robins et al., 2004; Robins et al., 2005a). Note that the role of a robotic social mediator is not to replace, but to facilitate, human contact (Robins et al., 2005a,b, 2006). Similarly, recent work with the minimally expressive humanoid robot KASPAR discusses the robot's role as a salient object that mediates and encourages interaction between the children and co-present adults (Robins et al., 2009; Iacono et al., 2011). Figures 13 to 16 give examples of trials conducted by Dr. Ben Robins where robots have been used as social mediators.

A key future challenge for robots as social mediators is to investigate how robots can adapt in real time to different users. Francois et al. (2009) provide a proof-of-concept study showing how an AIBO robot can adapt to the different interaction styles of children with autism playing with it; see also a recent article by Bekele et al. (2013).
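As an illustration of the general idea (this is a toy sketch in the spirit of such adaptive play, not the actual Francois et al. (2009) algorithm; the class name, thresholds and smoothing factor are all assumptions), a robot could maintain a running estimate of a child's responsiveness and shift its interaction style accordingly:

```python
class AdaptivePlayStyle:
    """Toy adaptation loop: track how often the child responds to the
    robot's prompts and switch between a gentle and a proactive style."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha        # smoothing factor for the running estimate
        self.engagement = 0.5     # estimated response rate, in [0, 1]

    def observe(self, child_responded: bool) -> None:
        # Exponential moving average over the child's recent responses.
        x = 1.0 if child_responded else 0.0
        self.engagement = (1 - self.alpha) * self.engagement + self.alpha * x

    def style(self) -> str:
        # An engaged child can handle a more proactive robot; a withdrawn
        # child gets a gentler, less demanding interaction style.
        return "proactive" if self.engagement >= 0.5 else "gentle"
```

In a real system, 'responses' would come from sensing (touch, gaze, proximity) rather than a boolean flag, and the style switch would modulate the robot's concrete behaviours (timing, movement amplitude, sound).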

Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).


Figure 38.13 A-B: KASPAR as a social mediator for children with autism. Two boys playing an imitation game, one child controls the robot's expressions, the other child has to imitate KASPAR, then the children switch roles.

Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).


Figure 38.14: Two children with autism enjoying the imitation game with KASPAR. One child uses a remote control to make KASPAR produce gestures and body postures; the role of the second child is to imitate KASPAR. After a while the roles are switched.

Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).

Figure 38.15 A-B: Sharing with another person (an adult on the left, another child on the right) while playing games with KASPAR.

Courtesy of Kerstin Dautenhahn. Copyright: CC-Att-SA (Creative Commons Attribution-ShareAlike 3.0 Unported).

Figure 38.16 A-B-C-D: Two children with autism enjoying a collaborative game with Robota. The robot is remotely controlled by the experimenter. Robota will only move and adopt a certain posture if both children simultaneously adopt this posture. Findings showed that this provided a strong incentive for the children to coordinate their movements.

The second example area for robots being used as social mediators concerns remote human-human interaction.

While touch sensors have been widely used in robotics research, e.g. to allow robots to avoid collisions or to pick up objects, the social dimension of human-robot touch has only recently attracted attention. Humans are born social and tactile creatures. Seeking out contact with the world, including the social world, is key to learning about oneself, the environment, others, and the relationships we have with the world. Through tactile interaction we develop cognitive, social and emotional skills, and attachment to others. Tactile interaction is the most basic form of human communication (Hertenstein et al., 2006). Studies have shown the devastating effects that deprivation of touch in early childhood can have (e.g. Davis, 1999).

Social robots are usually equipped with tactile sensors in order to encourage play and to allow the robot to respond to human touch, e.g. AIBO (Sony), Pleo (Ugobe), PARO (Shibata et al., 2012). Using tactile HRI to support human-human communication over distance illustrates the role a robot could play in mediating human contact (Mueller et al., 2005; Lee et al., 2008; Teh et al., 2008; Papadopoulos et al., 2012a,b).

To illustrate this research direction, Fotios Papadopoulos has investigated how autonomous AIBO robots (Sony) could mediate distant communication between two people engaging in online game activities and interaction scenarios. Here, the long-term goal is to develop robots as social mediators that can assist human-human communication in remote interaction scenarios, in order to support, for example, friends and family members who are temporarily or long-term prevented from face-to-face interaction. One study used a communication system named AiBone, which involved video communication and interaction with and through an AIBO robot, and compared it with a setting that involved no robots and used standard computer interfaces instead (Papadopoulos et al., 2012a). The experiment involved twenty pairs of participants who communicated using video conference software. Findings showed that participants expressed more social cues when using the robot, and shared more of their game experiences with each other. However, results also showed that in terms of efficiency in performing the task (navigating a maze), users performed better without the robot. These results point to a careful balance and trade-off between the efficiency of interaction and communication modes, and their social relevance in terms of mediating human-human contact and supporting relationships.

A second experiment used a less competitive, collaborative game called AIBOStory. Using this remote interactive story-telling system, participants could collaboratively create and share common stories through an integrated, autonomous robot companion acting as a social mediator between two remotely located people. Following an initial pilot study, the main experiment studied long-term interactions of 10 pairs of participants using AIBOStory, and compared the results with a condition not involving any physical robot. Results suggest user preferences towards the robot mode, thus supporting the notion that physical robots in the role of social mediators, affording touch-based human-robot interaction and embedded in a remote human-human communication scenario, may improve communication and interaction between people (Papadopoulos et al., 2012b).
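As a purely illustrative sketch of the mediator architecture behind such systems (the class, event and response names below are hypothetical, not taken from the AiBone or AIBOStory implementations), touch events sensed on one robot can be forwarded over an abstract transport and replayed as expressive responses on the remote partner's robot:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TouchEvent:
    sensor: str      # e.g. "head", "back"
    pressure: float  # normalised 0.0-1.0

class RemoteTouchMediator:
    """Toy sketch of a robotic social mediator for remote communication:
    local touch is forwarded to a peer, where it is mapped to an
    expressive response on the partner's robot."""

    def __init__(self, send: Callable[[TouchEvent], None]):
        self.send = send  # transport abstracted away (network, message queue, ...)

    def on_local_touch(self, event: TouchEvent):
        # A person strokes the local robot; forward the event to the peer.
        self.send(event)

    @staticmethod
    def on_remote_event(event: TouchEvent) -> str:
        # Map the partner's touch to an expressive response on this side.
        if event.pressure > 0.5:
            return f"strong_response:{event.sensor}"
        return f"gentle_response:{event.sensor}"
```

The essential design point is that the robot is not the addressee of the communication but a channel for it: each person's physical interaction with their local robot becomes socially meaningful behaviour on the remote side.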

The use of robots as social mediators differs from the approach of considering robots as 'permanent' tools or companions: a mediator is no longer needed once mediation has been successful. For example, a child who has learnt all they can from a robotic mediator will no longer need the robot; a couple separated for a few months will no longer need remote communication technology once they are reunited. Thus, the ultimate goal of a robotic mediator would be to disappear eventually, after the 'job' has been done.

38.9 HRI - there is a place for non-humanoid robots

It is often taken as 'given' (i.e. not reflected upon) that the ultimate goal for designers of robots for human-inhabited environments is to develop humanoid robots, i.e. robots with a human-like shape, two legs, two arms, a head, and social behaviour and communication abilities similar to those of human beings. Different arguments are often provided, some technical, others non-technical:

  • humanoid robots would be able to operate machines and work in environments that were originally designed for humans, e.g. a humanoid robot would be able to open our washing machine and use our tool box. This is in contrast to robots that require a pre-engineered environment.
  • in many applications robots are meant to be used in tasks that require human-like body shapes, e.g. arms to manipulate objects, legs to walk over uneven terrain etc.
  • the assumption that humanoid robots would have greater acceptability to people, that they might 'blend in' better, and that people would prefer to interact with them. It is argued that people would be able to more easily predict and respond to the robot's behaviour due to their familiarity with human motion and behaviour, and predictability may contribute to safety.
  • the assumption that such robots would be better at fulfilling human-like tasks, e.g. operating machinery and functioning in an environment designed for people, or carrying out human-like roles, e.g. a companion robot assisting people in their homes, or in a hospital or care home.

Likewise, in the domain of life-like agents, e.g. virtual characters, a similar tendency towards human-like agents can be found. Previously, I described this tendency as the 'life-like agent hypothesis' (Dautenhahn, 1999):

"Artificial social agents (robotic or software) which are supposed to interact with humans are most successfully designed by imitating life, i.e. making the agents mimic as closely as possible animals, in particular humans. This comprises both 'shallow' approaches focusing on the presentation and believability of the agents, as well as 'deep' architectures which attempt to model faithfully animal cognition and intelligence. Such life-like agents are desirable since

  1. The agents are supposed to act on behalf of or in collaboration with humans; they adopt roles and fulfill tasks normally done by humans, thus they require human forms of (social) intelligence.
  2. Users prefer to interact ideally with other humans and less ideally with human-like agents. Thus, life-like agents can naturally be integrated in human work and entertainment environment, e.g. as assistants or pets.
  3. Life-like agents can serve as models for the scientific investigation of animal behaviour and animal minds".
    (Dautenhahn, 1999)

Argument (3) presented above easily translates to robotic agents and companions, since these may be used to study human and animal behaviour, cognition and development (MacDorman and Ishiguro, 2006). Clearly, humanoid robotics is an exciting area of research, not only for researchers interested in the technological aspects but also, importantly, for those interested in developing robots with human-like cognition; the goal would be to develop advanced robots, or to use the robots as tools for the study of human cognition and development (cf. the iCub, which exemplifies this work, e.g. Metta et al., 2010; Lyon et al., 2012). When trying to achieve human-like cognition, it is best to choose a humanoid platform, due to the constraints and interdependencies of animal minds and bodies (Pfeifer, 2007). Precursors of this work can be found in Adaptive Behaviour and Artificial Life research using robots as models to understand biological systems (e.g. Webb, 2001; Ijspeert et al., 2005).

However, arguments (1) and (2) are problematic, for the following reasons:

Firstly, while humans have a natural tendency to anthropomorphize the world and to engage even with non-animate objects (such as robots) in a social manner (e.g. Reeves and Nass, 1996; Duffy, 2003), a humanoid shape often evokes expectations concerning the robot's abilities: human-like hands and fingers suggest that the robot is able to manipulate objects in the same way humans can, a head with eyes suggests that the robot has advanced sensory abilities such as vision, and a robot that produces speech is expected also to understand when spoken to. More generally, a human-like form and human-like behaviour are associated with human-level intelligence and general knowledge, as well as human-like social, communicative and empathic understanding. Due to limitations both in robotics technology and in our understanding of how to create human-like levels of intelligence and cognition, people interacting with a robot quickly realize its limitations, which can cause frustration and disappointment.

Secondly, if a non-humanoid shape can fulfill the robot's envisaged function, then this may be the most efficient as well as the most acceptable form. For example, the autonomous vacuum cleaning robot Roomba (iRobot) has been well accepted by users as an autonomous, but clearly non-humanoid robot. Some users may attribute personality to it, but the functional shape of the robot clearly signifies its robotic nature, and indeed few owners have been shown to treat the robot as a social being (Sung et al., 2007, 2008). Thus, rather than trying to use a humanoid robot operating a vacuum cleaner in a human-like manner (which is very hard to implement), an alternative efficient and acceptable solution has been found. Similarly, the ironing robot built by Siemens (Dressman) does not try to replicate the way humans iron a shirt but finds an alternative, technologically simpler solution.

Building humanoids which operate and behave in a human-like manner is technologically highly challenging and costly in terms of the time and effort required, and it is unclear when such human-likeness may be achieved (if ever). But even if such robots were around, would we want them to replace, e.g., the Roomba? The current tendency to focus on humanoid robots in HRI and robotics may be driven by scientific curiosity, but it is advisable to consider the whole design space of robots, and how a robot's design may be very suitable for particular tasks or application areas. Non-humanoid, often special-purpose machines, such as the Roomba, may provide cheap and robust solutions to real-life needs, i.e. to get the floor cleaned, in particular for tasks that involve little human contact. For tasks that do involve a significant amount of human-robot interaction, some humanoid characteristics may add to the robot's acceptance and success as an interactive machine, and may thus be better justified. Note, the design space of robots is huge, and 'humanoid' does not necessarily mean 'resembling a human as closely as possible'. A humanoid robot such as Autom (2013), designed as a weight-loss coach, has clearly human-like but very simplified features, more reminiscent of a cartoon design. At the other end of the spectrum, towards human-like appearance, we find the androids developed by Hiroshi Ishiguro and his team (http://www.geminoid.jp/en/index.html), or David Hanson's robots (2013). However, in android technology the limitations in producing human-like motor control, cognition and interactive skills are clearly visible. Androids have been proposed, though, as tools to investigate human cognition (MacDorman and Ishiguro, 2006).

Thus, social robots do not necessarily need to 'be like us'; they do not need to behave or look like us, but they need to do their jobs well, integrate into our human culture and provide an acceptable, enjoyable and safe interaction experience.

38.10 HRI - Being safe

Human safety is a key requirement for robots performing useful tasks alongside humans in a home environment, an office, etc. In such circumstances, the solutions widely used for robot safety in industry (e.g. BARA, 2012) are either not acceptable (e.g. warning sounds, flashing lights) or may not be feasible in the particular environment (e.g. enclosures, safety guards). Safety in human-robot interaction, in its most basic form, means avoiding any physical harm to a human being, e.g. due to collisions with a robot or part of a robot. New developments in robots' technical features (e.g. reliability, control, sensors) and materials (soft, lightweight etc.) can contribute to human-robot safety (Pervez and Ryu, 2008). In situations where physical human-robot interaction is involved, different strategies can be adopted and metrics developed; compare the review in De Santis et al. (2008), which identifies approaches to human-robot safety ranging from design, sensors, software, planning and biomimetics to control solutions. Research in this domain concerns many different aspects, e.g. the analysis and design of safety aspects, the design of safe robots via specific mechanical and actuator systems or by exploiting new materials, the design of low- and medium-level controllers for safe compliance via direct force compliance, and the development of high-level cognition, control and decision-making (Herrmann and Melhuish, 2010).

However, even non-harmful interactions may not be perceived as comfortable (e.g. a robot invading a user's personal space by approaching too close). Thus, we can consider objective parameters of physical safety, as well as subjective parameters of perceived safety. The latter is likely to change in long-term interactions when a user gets used to interactions with the robot and understands better its functionalities and limitations, which allows the user to make better predictions about the robot's behaviour. Little research has investigated the use of social cues to enhance the safety of human-robot interactions. Research on safety in human-robot interaction usually focuses on technical requirements for safety, rather than addressing possible human behavioural and social mechanisms. However, humans are able to deal with other people even in potentially dangerous situations (e.g. when on a collision path while walking along a hallway) by utilizing a number of communicative verbal as well as non-verbal coordination mechanisms. There are two main aspects to the use of social cues for enhancing safety with robots: 1) The robot can express social cues and show behaviour which intuitively informs the user that a potentially hazardous action by the robot is imminent or under way. In this case it would be up to the person to take the initiative to modify his/her behaviour to ensure safe interaction with the robot. 2) Alternatively, the robot can actively monitor the user's activities (and/or use information from its interaction history with the user to make predictions about the user's behaviour and activities), and modify its own actions accordingly to avoid unsafe interactions. In the latter case the robot takes the initiative and tries to regulate the interactions with the user in a safe manner. 
Point 2) above places significantly greater technical demands on the robot's control and sensor systems, but both approaches have the potential to facilitate the safe operation of a robot in a human-oriented environment. A combination of both approaches, i.e. human and robot both being 'safety-aware' and collaboratively trying to avoid unsafe situations by mutually attending and adapting to each other's current or predicted actions, would be the more 'natural' solution, similar to how people coordinate their actions. However, it would require sophisticated perceptual and predictive abilities of the robot, in dynamic and naturalistic environments with complex tasks.
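As a minimal illustration of strategy 2), where the robot monitors the user and modifies its own actions, consider a simple speed-scaling rule in the spirit of the 'speed and separation monitoring' concept from industrial robot safety. The function and its parameter values are illustrative assumptions, not a validated safety controller:

```python
def safe_speed(distance_m, v_max=1.0, stop_dist=0.5, slow_dist=2.0):
    """Toy speed-and-separation rule: move at full speed (v_max, in m/s)
    when the nearest person is beyond slow_dist, scale speed down linearly
    as they approach, and stop entirely inside the protective stop_dist."""
    if distance_m <= stop_dist:
        return 0.0                     # person too close: halt
    if distance_m >= slow_dist:
        return v_max                   # person far away: full speed
    # Linear ramp between the stop and slow-down distances.
    return v_max * (distance_m - stop_dist) / (slow_dist - stop_dist)
```

A deployed system would combine such reactive rules with reliable person tracking and, ideally, prediction of the user's future movements, as discussed above.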

Note, humanoid robots are not necessarily safer than other robots as implied in the following statement:

"They can move around our buildings, they can increasingly use the same tools as us, and perhaps most importantly they have the potential to move in a way that naturally makes sense to us - which makes them safer to be around." (http://www.therobotstudio.com/humanoid-robots.html)

While the above statement may appear intuitive to non-roboticists, a human-like shape does not necessarily help in predicting the behaviour of a robot. When encountering a human we can make fairly good predictions of their maximum speed or strength, even when meeting an athlete. If we make the same predictions for human-like robots we may be fundamentally wrong, and engage in behaviour that may result in, e.g., injuries to people. Underestimating the weight of a robot or the behaviour of an industrial-strength manipulator arm is clearly not safe, regardless of how human-like they may appear. Thus, safe human-robot interaction needs to be studied carefully. In many cases a non-humanoid machine, about which people have few prior expectations, will make people act in an instinctively cautious manner around it, similar to the caution people apply when encountering unknown and potentially dangerous situations. Thus, for 'first encounters', or application areas where people will meet a particular robot only briefly, non-humanoid machines may have advantages over humanoid robots. Non-humanoid robots lower the expectations in terms of the skills people attribute to them, and they may elicit cautious behaviour in people, who will carefully assess the robot's abilities and how one can safely interact with it, rather than assuming that it 'naturally' has human-like abilities and is safe to interact with.

38.11 Conclusion. HRI - what robots are today

Social robots are a special kind of (embodied) interactive artifact (see Kahn et al., 2004; Melson et al., 2009) that may afford new types of interactions with people, and new roles that they may adopt in society may emerge (see Dautenhahn, 2003). People's relationships with such robots will cover a range from "funny toy" to "long-term companion". Future robots may look and behave very differently from how they do today, and we might develop relationships with them and invent usages for them that we cannot envisage at present. Human culture is changing, too, and people's attitudes towards social robots are likely to change the more prevalent and complex robots become. Elder-care robots that are currently under investigation will probably only be mass deployed when today's young people have reached retirement age: a generation used to electronic devices, the Internet and World Wide Web, gadgets and social networking on an unprecedented scale. They won't be 'naive users'. But even today's participants in HRI studies are not "naive" in a strict sense: they come with particular attitudes towards technology in general, and often towards robots in particular, even when they have never encountered one face-to-face. People tend to anthropomorphize the world around them, and they react socially even to non-humanoid-looking technology. People are also social animals, and they interpret and interact with the animate and inanimate world around them in social terms (Dautenhahn, 2007). They may respond to robots with some biological reactions typically shown towards humans, but this reaction may be influenced by top-down mechanisms of their beliefs about the system (Shen et al., 2011). Future machines may capitalize on these bottom-up (biological) and top-down (psychological) processes, and we may create machines that people develop special relationships with.
HRI is a moving target, and so, as HRI researchers, we need to keep moving, too—being flexible and open-minded about the very foundations of our domain and the nature of robots, and being open-minded towards creative solutions to robot design and methodological challenges. Social robots of the future might be different creatures, complex synthetic entities, but they may have unexpected properties and they may even surprise us and make us behave in surprising ways. As a research community we work towards a new science of HRI that can shape these developments for the benefit of us as individuals and our society.

38.12 Acknowledgements

I would like to thank Joe Saunders, Michael L. Walters and Chrystopher Nehaniv for helpful comments on the manuscript. I would also like to thank the excellent research team in the Adaptive Systems research group at the University of Hertfordshire who created some of the research work cited in this article which has greatly shaped and changed my ideas on social robots and human-robot interaction over the past 13 years.

38.13 References

  • ACCOMPANY project. URL: http://accompanyproject.eu/. Last accessed 16 April 2013.
  • F. Amirabdollahian, R. op den Akker, S. Bedaf, R. Bormann, H. Draper, G. J. Gelderblom, C. Gutierrez Ruiz, D. Hewson, I. Iacono, K. L. Koay, B. Krose, P. Marti, H. Prevot-Huille, U. Reiser, J. Saunders, T. Sorell, K. Dautenhahn (2013) Acceptable robotiCs COMPanions for AgeiNg Years - Multidimensional Aspects of Human-System Interactions. Proc. 6th International Conference on Human System Interaction (HSI 2013), Sopot, Poland, 6-8 June 2013.
  • Anderson, C. A., Shibuya, A., Ihori, N., Swing, E. L., Bushman, B.J., Sakamoto, A., Rothstein, H.R., M. Saleem (2010). Violent video game effects on aggression, empathy, and prosocial behavior in Eastern and Western countries. Psychological Bulletin 136: 151-173.
  • B. D. Argall, S. Chernova, M. Veloso, and B. Browning (2009) A Survey of Robot Learning from Demonstration. Robotics and Autonomous Systems 57(5): 469-483.
  • M. Asada, K. Hosoda, Y. Kuniyoshi, H. Ishiguro, T. Inui, Y. Yoshikawa, M. Ogino, C. Yoshida (2009) Cognitive Developmental Robotics: A Survey. IEEE Transactions on Autonomous Mental Development 1(1): 12-34.
  • Aurora project. URL: http://www.aurora-project.com/. Last accessed 16 April 2013.
  • C. Bartneck, H Jun (2004) Rapid Prototyping for Interactive Robots. Proceedings of the 8th Conference on Intelligent Autonomous Systems (IAS-8), Amsterdam, pp. 136-145.
  • E. T. Bekele, U. Lahiri, A. R. Swanson, J. A. Crittendon, Z. E. Warren, N. Sarkar (2013) A Step Towards Developing Adaptive Robot-Mediated Intervention Architecture (ARIA) for Children With Autism. IEEE Transactions on Neural Systems and Rehabilitation Engineering 22(2): 289-299.
  • A. Billard, B. Robins, K. Dautenhahn, J. Nadel (2006) Building Robota, a Mini-Humanoid Robot for the Rehabilitation of Children with Autism. RESNA Assistive Technology Journal 19(1).
  • C. Breazeal, A. Brooks, J. Gray, G. Hoffman, C. Kidd, H. Lee, J. Lieberman, A. Lockerd, and D. Chilongo (2004) Tutelage and Collaboration for Humanoid Robots. International Journal of Humanoid Robots 1(2): 315-348.
  • BARA (The British Automation and Robot Association). A practical guide to machine safety application, legislation and standards. URL: http://www.bara.org.uk/info/info_safety.html. Accessed December, 2012.
  • S. Calinon, F. Dhalluin, E. Sauser, D. Caldwell, A. Billard (2010) Learning and reproduction of gestures by imitation: An approach based on Hidden Markov Model and Gaussian Mixture Regression. IEEE Robotics and Automation Magazine 17(2): 44-54.
  • A. Cangelosi, G. Metta, G. Sagerer, S. Nolfi, C. L. Nehaniv, K. Fischer, J. Tani, T. Belpaeme, G. Sandini, F. Nori, L. Fadiga, B. Wrede, K. Rohlfing, E. Tuci, K. Dautenhahn, J. Saunders, and A. Zeschel (2010) Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics. IEEE Transactions on Autonomous Mental Development 2(3): 167-195.
  • A. R. Chatley, K. Dautenhahn, M. L. Walters, D. S. Syrdal, and Bruce Christianson (2010) Theatre as a Discussion Tool in Human-Robot Interaction Experiments - A Pilot Study. Proceedings The Third International Conference on Advances in Computer-Human Interactions (ACHI 2010), February 10-16, 2010 - St. Maarten, Netherlands Antilles, IEEE Press, pp. 73 - 78.
  • R. Cuijpers, M. Bruna, J. Ham, E. Torta (2011) Attitude towards Robots Depends on Interaction But Not on Anticipatory Behaviour. In B. Mutlu, C. Bartneck, J. Ham, V. Evers & T. Kanda (Eds.), Social Robotics Lecture Notes in Computer Science (Vol. 7072, pp. 163-172): Springer Berlin / Heidelberg.
  • COGNIRON. URL: http://www.cogniron.org/final/Home.php. Last accessed 11 April 2013.
  • N. Dahlbäck, A. Jönsson, L. Ahrenberg (1993) Wizard of Oz studies—why and how. In Proc. First Int. Conf. on Intelligent User Interfaces, Orlando, Florida, USA, pp. 193–200.
  • K. Dautenhahn (1994) Trying to Imitate - a Step Towards Releasing Robots from Social Isolation, Proceedings: From Perception to Action Conference (Lausanne, Switzerland, September 7–9, 1994), editors: P. Gaussier and J.-D. Nicoud, IEEE Computer Society Press, pp 290–301, 1994.
  • K. Dautenhahn (1995) Getting to know each other - artificial social intelligence for autonomous robots. Robotics and Autonomous Systems 16: 333-356.
  • K. Dautenhahn (2003) Roles and Functions of Robots in Human Society - Implications from Research in Autism Therapy. Robotica 21(4): 443-452.
  • K. Dautenhahn, I. Werry (2004) Towards Interactive Robots in Autism Therapy: Background, Motivation and Challenges. Pragmatics and Cognition 12(1): 1-35.
  • K. Dautenhahn (2007a) Socially intelligent robots: dimensions of human-robot interaction. Philosophical Transactions of the Royal Society B: Biological Sciences 362(1480): 679-704.
  • K. Dautenhahn (2007b) Methodology and Themes of Human-Robot Interaction: A Growing Research Field. International Journal of Advanced Robotic Systems 4(1): 103-108.
  • K. Dautenhahn, S. Woods, C. Kaouri, M. L. Walters, K. L. Koay, I. Werry (2005) What is a Robot Companion - Friend, Assistant or Butler? Proc. IROS 2005, IEEE IRS/RSJ International Conference on Intelligent Robots and Systems, August 2-6, 2005, Edmonton, Alberta Canada, pp. 1488-1493.
  • K. Dautenhahn (1999) Robots as Social Actors: AURORA and The Case of Autism. Proc. CT99, The Third International Cognitive Technology Conference, August, San Francisco.
  • P. K. Davis (1999) The Power of Touch – The Basis for Survival, Health, Intimacy, and Emotional Well-Being. Carlsbad, CA: Hay House Inc.
  • D. C. Dennett (1989/91). The origins of selves. Cogito, 3, 163-73, Autumn 1989. Reprinted in Daniel Kolak and R. Martin, eds., (1991), Self & Identity: Contemporary Philosophical Issues, Macmillan.
  • N. Derbinsky, W. C. Ho, I. Duque, J. Saunders, K. Dautenhahn (2013) Resource-Efficient Methods for Feasibility Studies of Scenarios for Long-Term HRI Studies, Proc. ACHI 2013, The Sixth International conference on Advances in Computer-Human Interactions, 24 February - 1 March 2013, Nice, France.
  • A. De Santis, B. Siciliano, A. De Luca, A. Bicchi (2008) An atlas of physical human-robot interaction. Mechanisms and Machine Theory 43: 253-270.
  • Dressman, Siemens. URL: http://www.youtube.com/watch?v=2CxWEMScmPE. Youtube video last accessed 18 April 2013.
  • P. Dickerson, B. Robins, K. Dautenhahn (2013) Where the action is: A conversation analytic perspective on interaction between a humanoid robot, a co-present adult and a child with an ASD. Interaction Studies 14(2), special issue on Asymmetry and adaptation in social interaction: A micro-analytic perspective, Eds. I. Nomikou, K. Pitsch, K. K. Rohlfing.
  • J. J. Diehl, L. M. Schmitt, M. Villano, C. R. Crowell (2012) The clinical use of robots for individuals with Autism Spectrum Disorders: A critical review. Research in Autism Spectrum Disorders 6: 249-262.
  • B. R. Duffy (2003) Anthropomorphism and the social robot. Robotics and Autonomous Systems 42: 177-190.
  • DSM-IV-TR (2000) The Diagnostic and Statistical Manual of Mental Disorders, American Psychiatric Association.
  • M. W. G. Dye, C. S. Green, D. Bavelier (2009) Increasing speed of processing with action video games. Current Directions in Psychological Science 18: 321-326.
  • D. François, D. Polani, K. Dautenhahn (2008) Towards Socially Adaptive Robots: A Novel Method for Real Time Recognition of Human-Robot Interaction Styles. Proc. Humanoids 2008, December 1-3, 2008, Daejeon, Korea, pp. 353-359.
  • M. A. Goodrich and A. C. Schultz (2007) Human-Robot Interaction: A Survey. Foundations and Trends in Human-Computer Interaction 1(3): 203-275
  • J. D. Gould, J. Conti, T. Hovanyecz (1983) Composing letters with a simulated listening typewriter. Communications of the ACM 26: 295–308.
  • A. Green, H. Hüttenrauch, K. Severinson Eklundh (2004) Applying the Wizard of Oz Framework to Cooperative Service discovery and Configuration. Proc. IEEE RO-MAN 2004, IEEE Press, 575-580.
  • M. Griffiths (2002) The educational benefits of video games. Education and Health 20(3): 47-51.
  • Hanson Robotic Inc (2013) URL: http://www.hansonrobotics.com/. Last accessed 16 April 2013.
  • G. M. P. O’Hare, B. R. Duffy, J. F. Bradley, A. N. Martin (2003) Agent Chameleons: Moving Minds from Robots to Digital Information Spaces. Proceedings of Autonomous Minirobots for Research and Edutainment, pp. 18–21.
  • I. Harvey, E. A. Di Paolo, R. Wood, M. Quinn, E. Tuci (2005) Evolutionary Robotics: A New Scientific Tool for Studying Cognition. Artificial Life 11(1-2): 79-98.
  • J. Heinzmann, A. Zelinsky (2003) Quantitative safety guarantees for physical human-robot interaction. The International Journal of Robotics Research 22(7-8): 479-504
  • G. Herrmann, C. Melhuish (2010) Towards safety in human-robot interaction. International Journal of Social robotics 2: 217-219.
  • M. J. Hertenstein, J. M. Verkamp, A. M. Kerestes, R. M. Holmes (2006) The communicative functions of touch in humans, non-human primates, and rats: A review and synthesis of the empirical research. Genetic, Social and General Psychology Monographs 132(1):5-94.
  • D. Heylen, B. van Dijk, A. Nijholt (2012) Robotic rabbit companions: Amusing or a nuisance? Journal of Multimodal User Interfaces 5: 53-59.
  • C. Huijnen, A. Badii, H. van den Heuvel, P. Caleb-Solly, D. Thiemert (2011) “Maybe It Becomes a Buddy, But Do Not Call It a Robot” – Seamless Cooperation between Companion Robotics and Smart Homes. In D. Keyson, M. Maher, N. Streitz, A. Cheok, J. Augusto, R. Wichert, G. Englebienne, H. Aghajan & B. Kröse (Eds.), Ambient Intelligence Lecture Notes in Computer Science (Vol. 7040, pp. 324-329): Springer Berlin / Heidelberg.
  • H. Hüttenrauch, E. A. Topp, K. Severinson Eklundh (2009) The art of gate-crashing – Bringing HRI into users’ homes. Interaction Studies 10(3): 274-297.
  • I. Iacono, H. Lehmann, P. Marti, B. Robins, K. Dautenhahn (2011) Robots as social mediators for children with autism - A preliminary analysis comparing two different robotic platforms. IEEE ICDL-EPIROB 2011, First Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics, 24-27 August 2011, Frankfurt am Main, Germany.
  • G. Iacucci, K. Kuutti (2002) Everyday Life as a Stage in Creating and Performing Scenarios for Wireless Devices. Personal and Ubiquitous Computing 6: 299-306.
  • iCub. URL: http://www.icub.org/. Last accessed 16 April 2013.
  • A. J. Ijspeert, A. Crespi, J.-M. Cabelguen (2005) Simulation and Robotics Studies of Salamander Locomotion. Applying Neurobiological Principles to the Control of Locomotion in Robots. Neuroinformatics 3(3): 171-196.
  • Intuitive Automata: URL: http://www.intuitiveautomata.com/. Last accessed 11 April 2013.
  • IROMEC. URL: http://www.iromec.org/. Last accessed 11 April 2013.
  • L. A. Jackson, A. von Eye, H. E. Fitzgerald, E. A. Witt, Y. Zhao (2011) Internet use, videogame playing and cell phone use as predictors of children's body mass index (BMI), body weight, academic performance, and social and overall self-esteem. Computers in Human Behavior 27(1): 599-604.
  • P. H. Kahn, Jr., B. Friedman, D. R. Perez-Granados, N. G. Freier (2004) Robotic pets in the lives of preschool children. In CHI '04 Extended Abstracts on Human Factors in Computing Systems (CHI EA '04). ACM, New York, NY, USA, pp. 1449-1452.
  • T. Kanda, R. Sato, N. Saiwaki, H. Ishiguro (2007) A two-month Field Trial in an Elementary School for Long-term Human-robot Interaction. IEEE Transactions on Robotics 23(5): 962-971.
  • T. Kanda, M. Shiomi, Z. Miyashita, H. Ishiguro, N. Hagita (2010) A Communication Robot in a Shopping Mall. IEEE Transactions on Robotics 26(5): 897-913.
  • C. D. Kidd, C. Breazeal (2008) Robots at home: Understanding long-term human-robot interaction. Proceedings 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, Sept 22-26, pp. 3230-3235.
  • P. Kierkegaard (2008) Video games and aggression. International Journal of Liability and Scientific Enquiry 1(4): 411-417.
  • E. S. Kim, L. D. Berkovits, E. P. Bernier, D. Leyzberg, F. Shic, R. Paul, B. Scassellati (2012) Social robots as embedded reinforcers of social behavior in children with autism. Journal of Autism and Developmental Disorders, DOI 10.1007/s10803-012-1645-2
  • K. L. Koay, G. Lakatos, D. S. Syrdal, M. Gacsi, B. Bereczky, K. Dautenhahn, A. Miklosi, M. L. Walters (2013) Hey! There is someone at your door. A Hearing Robot using Visual Communication Signals of Hearing Dogs to Communicate Intent. IEEE ALIFE 2013 (The 2013 IEEE Symposium on Artificial Life), part of the IEEE Symposium Series on Computational Intelligence 2013, 16-19 April, Singapore.
  • K. L. Koay, D. S. Syrdal, K. Dautenhahn, K. Arent, L. Malek, B. Kreczmer (2011) Companion Migration – Initial Participants’ Feedback from a Video-Based Prototyping Study. In X. Wang (Ed.), Mixed Reality and Human-Robot Interaction, Springer, Series "Intelligent Systems, Control and Automation: Science and Engineering", Vol. 47, pp. 133-151.
  • K. L. Koay, D. S. Syrdal, M. L. Walters, K. Dautenhahn (2009) Five Weeks in the Robot House – Exploratory Human-Robot Interaction Trials in a Domestic Setting. Proc. ACHI 2009, The Second International Conference on Advances in Computer-Human Interactions, February 1-6, 2009, Cancun, Mexico, pp. 219-226.
  • K. L. Koay, D. S. Syrdal, M. L. Walters, K. Dautenhahn (2009) A User Study on Visualization of Agent Migration between Two Companion Robots. Proc. 13th International Conference on Human-Computer Interaction (HCII 2009), 19-24 July 2009, San Diego, CA, USA.
  • K. L. Koay, M. L. Walters, S. N. Woods, K. Dautenhahn (2006) Empirical Results from Using a Comfort Level Device in Human-Robot Interaction Studies. Proc. ACM International Conference on Human-Robot Interaction (HRI06), Salt Lake City, Utah, USA, pp. 194 - 201.
  • G. Konidaris, S. Kuindersma, R. Grupen, A. Barto (2012) Robot learning from demonstration by constructing skill trees. The International Journal of Robotics Research 31(3): 360-375.
  • J. K. Lee, R. L. Toscano, W. D. Stiehl, C. Breazeal (2008) The Design of a Semi-Autonomous Robot Avatar for Family Communication and Education. Proc. 17th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 166-173.
  • LIREC project. URL: http://lirec.eu/project. Last accessed 16 April 2013.
  • K. S. Lohan, K. Pitsch, K. J. Rohlfing, K. Fischer, J. Saunders, H. Lehmann, C. L. Nehaniv, B. Wrede (2011) Contingency allows the robot to spot the tutor and to learn from interaction. Proc. 2011 IEEE International Conference Development and Learning (ICDL), pp. 1-8.
  • M. Lohse, M. Hanheide, B. Wrede, M. L. Walters, K. L. Koay, D. S. Syrdal, A. Green, H. Hüttenrauch, K. Dautenhahn, G. Sagerer, K. Severinson-Eklundh (2008) Evaluating extrovert and introvert behaviour of a domestic robot – a video study. Proc. IEEE RO-MAN 2008, 1-3 August 2008, Technische Universitat Munchen, Munich, Germany, pp. 488-493.
  • M. Lungarella, G. Metta, R. Pfeifer, G. Sandini (2003) Developmental robotics: A Survey. Connection Science 15: 151-190.
  • C. Lyon, C. L. Nehaniv, J. Saunders (2012) Interactive Language Learning by Robots: The Transition from Babbling to Word Forms. PLoS ONE 7(6): e38236. doi:10.1371/journal.pone.0038236
  • K. F. MacDorman, H. Ishiguro (2006). The uncanny advantage of using androids in social and cognitive science research. Interaction Studies 7(3): 297–337.
  • P. Marti, L. J. Bannon (2009) Exploring User-Centred Design in practice: Some caveats. Knowledge, Technology & Policy 22(1): 7-15.
  • D. Maulsby, S. Greenberg, R. Mander (1993) Prototyping an intelligent agent through Wizard of Oz. In Proc. ACM SIGCHI Conf. on Human Factors in Computing Systems, pp. 277–284. Amsterdam, The Netherlands: ACM Press.
  • G. F. Melson, P. H. Kahn Jr., A. Beck, B. Friedman, T. Roberts, E. Garrett, B. T. Gill (2009) Children’s behavior toward and understanding of robotic and living dogs. Journal of Applied Developmental Psychology 30: 92-102.
  • G. Metta, L. Natale, F. Nori, G. Sandini, D. Vernon, L. Fadiga, C. von Hofsten, K. Rosander, J. Santos-Victor, A. Bernardino, L. Montesano (2010) The iCub Humanoid Robot: An Open-Systems Platform for Research in Cognitive Development. Neural Networks, special issue on Social Cognition: From Babies to Robots, 23(8-9): 1125–1134.
  • F. Mueller, F. Vetere, M. Gibbs, J. Kjeldskov, S. Pedell, S. Howard (2005) Hug over a distance. Proc. Conference on Human Factors in Computing Systems, United States, 2-7 April 2005, pp. 1673-1676.
  • B. Mutlu, T. Kanda, J. Forlizzi, J. Hodgins, H. Ishiguro (2012) Conversational gaze mechanisms for humanlike robots. ACM Transactions on Interactive Intelligent Systems 1(2), 33 pages.
  • C. L. Nehaniv, F. Förster, J. Saunders, F. Broz, E. Antonova, H. Kose, C. Lyon, H. Lehmann, Y. Sato, and K. Dautenhahn (2013) Interaction and Experience in Enactive Intelligence and Humanoid Robotics. Proc. IEEE Symposium on Artificial Life (IEEE ALIFE), IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2013) - Singapore, 15-19 April 2013.
  • C. L. Nehaniv, K. Dautenhahn, Eds. (2007) Imitation and Social Learning in Robots, Humans and Animals: Behavioural, Social and Communicative Dimensions. Cambridge University Press.
  • A. F. Newell, A. Carmichael, M. Morgan, and A. Dickinson (2006) The use of theatre in requirements gathering and usability studies. Interacting With Computers 18(5): 996-1011.
  • S. Nikolaidis, J. Shah (2013) Human-Robot Cross-Training: Computational Formulation, Modeling and Evaluation of a Human Team Training Strategy. Proc. IEEE/ACM International Conference on Human-Robot Interaction, pp. 33-40.
  • S. Nolfi, D. Floreano (2000) Evolutionary robotics: The biology, intelligence and technology of self-organizing machines. Cambridge, MA: MIT Press.
  • F. Papadopoulos, K. Dautenhahn, W. C. Ho (2012a) Exploring the use of robots as social mediators in a remote human-human collaborative communication experiment. Paladyn – Journal of Behavioral Robotics 3(1): 1-10.
  • F. Papadopoulos (2012b) Socially Interactive Robots as Mediators in Human-Human Remote Communication. PhD thesis, University of Hertfordshire. URL: https://uhra.herts.ac.uk/dspace/handle/2299/9151
  • C. Parlitz, K. Dautenhahn, P. Klein, J. Seifert, M. Haegele (2008) Care-o-Bot 3 - Rationale for Human-Robot Interaction Design. Proc. 39th International Symposium on Robotics (ISR 2008), 15-17 October, 2008, Seoul, South Korea.
  • A. Pervez, J. Ryu (2008) Safe Physical Human Robot Interaction – Past, Present and Future. Journal of Mechanical Science and Technology 22(3): 469-483.
  • R. Pfeifer (2007) How the body shapes the way we think: a new view of intelligence. Cambridge, MA: MIT Press.
  • B. Reeves, C. Nass (1996) The media equation: How people treat computers, television, and new media like real people and places. Center for the Study of Language and Information – Lecture Notes.
  • U. Reiser, T. Jacobs, G. Arbeiter, C. Parlitz, K. Dautenhahn (2013) Care-O-bot® 3 – Vision of a Robot Butler. In R. Trappl (Ed.), Your Virtual Butler - The Making-of. Springer, Lecture Notes in Artificial Intelligence Vol. 7407, pp. 97-116.
  • B. Robins, K. Dautenhahn, E. Ferrari, G. Kronreif, B. Prazak-Aram, P. Marti, I. Iacono, G. J. Gelderblom, T. Bernd, F. Caprino, E. Laudanna (2012) Scenarios of robot-assisted play for children with cognitive and physical disabilities. Interaction Studies 13(2): 189-234.
  • B. Robins, E. Ferrari, K. Dautenhahn, G. Kronreif, B. Prazak, G. J. Gelderblom, F. Caprino, E. Laudanna, P. Marti (2010) Human-centred design methods: Developing Scenarios for Robot Assisted Play Informed by User Panels and Field Trials. International Journal of Human-Computer Studies 68: 873-898.
  • B. Robins, K. Dautenhahn, P. Dickerson (2009) From Isolation to Communication: A Case Study Evaluation of Robot Assisted Play for Children with Autism with a Minimally Expressive Humanoid Robot. Proc. The Second International Conference on Advances in Computer-Human Interactions (ACHI 09), February 1-7, 2009, Cancun, Mexico. IEEE Computer Society Press, pp. 205-211.
  • B. Robins, K. Dautenhahn (2006). The role of the experimenter in HRI research – a case study evaluation of children with autism interacting with a robotic toy. Proc. 15th IEEE Int. Workshop on Robot and Human Interactive Communication (RO-MAN), pp. 646–651.
  • B. Robins, K. Dautenhahn, R. te Boekhorst, and A. Billard (2005a) Robotic Assistants in Therapy and Education of Children with Autism: Can a Small Humanoid Robot Help Encourage Social Interaction Skills? Special issue "Design for a more inclusive world" of the international journal Universal Access in the Information Society (UAIS) 4(2): 105 – 120.
  • B. Robins, K. Dautenhahn, J. Dubowski (2005b) Robots as isolators or mediators for children with autism? A cautionary tale. Proc. AISB'05 Symposium on Robot Companions Hard Problems and Open Challenges in Human-Robot Interaction, 14-15 April 2005, University of Hertfordshire, UK, pp. 82-88.
  • B. Robins, K. Dautenhahn, J. Dubowski (2004) Investigating Autistic Children's Attitudes Towards Strangers with the Theatrical Robot - A New Experimental Paradigm in Human-Robot Interaction Studies, Proc. IEEE RO-MAN 2004, 13th IEEE International Workshop on Robot and Human Interactive Communication September 20-22, 2004 Kurashiki, Okayama Japan, IEEE Press, pp. 557-562.
  • RobotCub. URL: http://www.robotcub.org/. Last accessed 11 April 2013.
  • F. Rossano (2013) Sequence organization and timing of bonobo mother-infant interactions. Interaction Studies 14(2), special issue on Asymmetry and adaptation in social interaction: A micro-analytic perspective, Eds. I. Nomikou, K. Pitsch, K. K. Rohlfing.
  • S. Sabanovic, M.P. Michalowski, R. Simmons (2006) Robots in the Wild: Observing human-robot social interaction outside the lab. Proceedings of AMC 2006, March 2006, pp. 576-581.
  • M. Salem, S. Kopp, I. Wachsmuth, K. Rohlfing, F. Joublin (2012) Generation and Evaluation of Communicative Robot Gesture. International Journal of Social Robotics 4(2): 201-217.
  • T. Salter, F. Michaud, H. Larouche (2010) How wild is wild? A taxonomy to categorize the wildness of child-robot interaction. International Journal of Social Robotics 2(4):405-415.
  • B. Scassellati, H. Admoni, M. J. Matarić (2012) Robots for Use in Autism Research. Annual Review of Biomedical Engineering 14: 275-294.
  • J. Scholtz (2003) Theory and Evaluation of Human Robot Interactions. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS'03) - Track 5 - Volume 5 (HICSS '03), Vol. 5. IEEE Computer Society, Washington, DC, USA.
  • A. Sharkey, N. Sharkey (2011) Children, the Elderly, and Interactive Robots. IEEE Robotics and Automation Magazine 18(1): 32-38.
  • A. Sharkey, N. Sharkey. (2012) Granny and the robots: ethical issues in robot care for the elderly. Ethics and Information Technology 14(1): 27-40.
  • Q. Shen, H. Kose-Bagci, J. Saunders, K. Dautenhahn (2011) The Impact of Participants’ Beliefs on Motor Interference and Motor Coordination in Human-Humanoid Interaction. IEEE Transactions on Autonomous Mental Development (TAMD) 3(1): 6-16.
  • T. Shibata, Y. Kawaguchi, K. Wada (2012) Investigation on People Living with Seal Robot at Home - Analysis of Owners' Gender Differences and Pet Ownership Experience. International Journal of Social Robotics 4(1): 53-63.
  • M. Shimada, T. Kanda (2012) What is the appropriate speech rate for a communication robot? Interaction Studies 13(3): 408-435.
  • J.-Y. Sung, L. Guo, R. E. Grinter, H. I. Christensen (2007) “My Roomba Is Rambo”: Intimate Home Appliances. Proc. 9th International Conference on Ubiquitous Computing (UbiComp '07), Springer-Verlag, Berlin, Heidelberg, pp. 145-162.
  • J.-Y. Sung, R. E. Grinter, H. I. Christensen, L. Guo (2008) Housewives or technophiles?: Understanding domestic robot owners. Proc. 3rd ACM/IEEE International Conference on Human-Robot Interaction (HRI 2008), pp. 129-136.
  • D. S. Syrdal, K. Dautenhahn, M. L. Walters, K. L. Koay, N. Otero (2011) The Theatre methodology for facilitating discussion in human-robot interaction on information disclosure in a home environment. Proceedings RO-MAN 2011, 20th IEEE International Symposium on Robot and Human Interactive Communication, Atlanta, Georgia, USA - 31 July - 3 August 2011, pp. 479 – 484.
  • D. S. Syrdal, K. L. Koay, M. Gacsi, M. L. Walters, K. Dautenhahn (2010) Video Prototyping of Dog-Inspired Non-verbal Affective Communication for an Appearance Constrained Robot. Proceedings IEEE RO-MAN 2010, 19th IEEE International Symposium in Robot and Human Interactive Communication, Sep. 12 - 15th, 2010, Viareggio, Italy, IEEE Press, pp. 632-637.
  • D. S. Syrdal, N. Otero, K. Dautenhahn (2008) Video Prototyping in Human-Robot Interaction: Results from a Qualitative Study. In Proceedings of the European Conference on Cognitive Ergonomics 2008, pp. 132-140. Eurographics Portuguese Chapter: Lisboa.
  • J. K. S. Teh, A. D. Cheok, R. L. Peiris, Y. Choi, V. Thuong, S. Lai (2008) Huggy Pajama: a mobile parent and child hugging communication system. Proc. 7th International Conference on Interaction Design and Children (IDC '08), ACM, New York, NY, USA, pp. 250-257.
  • A. L. Thomaz, C. Breazeal (2008) Teachable robots: Understanding human teaching behavior to build more effective robot learners. Artificial Intelligence 172(6-7): 716-737.
  • A. L. Thomaz, M. Cakmak (2009) Learning about objects with human teachers. Proc. International Conference on Human-Robot Interaction (HRI), pp. 15-22.
  • E. C. Tolman (1948) Cognitive maps in rats and men. The Psychological Review 55(4): 189-208.
  • S. Turkle (2007) Authenticity in the age of digital companions. Interaction Studies 8(3): 501-517.
  • S. Turkle, W. Taggart, C. D. Kidd, O. Dasté (2006) Relational artifacts with children and elders: the complexities of cybercompanionship. Connection Science 18(4): 347-361.
  • M. Turner (1996) The Literary Mind: The origins of thought and language. Oxford University Press.
  • D. Vernon, C. von Hofsten, and L. Fadiga (2011) A Roadmap for Cognitive Development in Humanoid Robots. Cognitive Systems Monographs (COSMOS), Vol. 11, Springer, ISBN 978-3-642-16903-8.
  • K. Wada, T. Shibata (2007) Living with seal robots in a care house—Evaluations of social and physiological influences. Proc. IEEE/RSJ Int. Conf. IROS, 2006, pp. 4940–4945.
  • M. L. Walters, M. Lohse, M. Hanheide, B. Wrede, K. L. Koay, D. S. Syrdal, A. Green, H. Hüttenrauch, K. Dautenhahn, G. Sagerer, K. Severinson-Eklundh (2011) Evaluating the behaviour of domestic robots using video-based studies. Advanced Robotics 25(18): 2233-2254.
  • M. L. Walters, D. S. Syrdal, K. Dautenhahn, R. te Boekhorst, K. L. Koay (2008) Avoiding the uncanny valley: robot appearance, personality and consistency of behavior in an attention-seeking home scenario for a robot companion. Autonomous Robots 24(2): 159-178.
  • B. Webb (2001) Can robots make good models of biological behaviour? Target article for Behavioural and Brain Sciences 24(6): 1033-1050.
  • I. Werry, K. Dautenhahn (2007) Human-Robot Interaction as a Model for Autism Therapy: An Experimental Study with Children with Autism. In Modeling Biology: Structures, Behaviors, Evolution. Manfred Laubichler and Gerd B. Müller eds., Vienna Series in Theoretical Biology, MIT Press, pp. 283-299.
  • I. Werry, K. Dautenhahn, B. Ogden, W. Harwin (2001) Can Social Interaction Skills Be Taught by a Social Agent? The Role of a Robotic Mediator in Autism Therapy. Proc. CT2001, The Fourth International Conference on Cognitive Technology: INSTRUMENTS OF MIND (CT2001), Monday 6th - Thursday 9th August, 2001 at University of Warwick, United Kingdom, Springer Verlag, Lecture Notes in Computer Science, subseries Lecture Notes in Artificial Intelligence.
  • S. Weir, R. Emanuel (1976) Using LOGO to catalyse communication in an autistic child. Technical Report, DAI Research Report No 15, University of Edinburgh.
  • L. Wing (1996) The Autistic Spectrum. London: Constable Press.
  • S. N. Woods, K. Dautenhahn, C. Kaouri, R. te Boekhorst, K. L. Koay, M. L. Walters (2007) Are Robots Like People? - Relationships between Participant and Robot Personality Traits in Human-Robot Interaction Studies. Interaction Studies 8(2): 281-305.
  • S. Woods, M. L. Walters, K. L. Koay, K. Dautenhahn (2006a) Comparing Human Robot Interaction Scenarios Using Live and Video Based Methods: Towards a Novel Methodological Approach. Proc. AMC'06, The 9th International Workshop on Advanced Motion Control, March 27-29, Istanbul, pp. 750-755.
  • S. Woods, M. L. Walters, K. L. Koay, K. Dautenhahn (2006b) Methodological Issues in HRI: A Comparison of Live and Video-Based Methods in Robot to Human Approach Direction Trials. Proc. The 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN06), University of Hertfordshire, 6-8 September, Hatfield, UK, pp. 51-58.
  • F. Yamaoka, T. Kanda, H. Ishiguro, N. Hagita (2007) How contingent should a lifelike robot be? The Relationship between Contingency and Complexity. Connection Science 19(2): 143-162.

38.14 References

Amirabdollahian, F., Akker, R.op den, Bedaf, S., Bormann, R., Draper, H.R, Gelderblom, G. J., Ruiz, C. Gutierrez,Hewson, D., Iacono, I., Koay, K. L., Krose, B., Marti, P., Prevot-Huille, H., Reiser, U., Sorell, T. and Dautenhahn, K. (2013): Acceptable robotiCs COMPanions for AgeiNg Years - Multidimensional Aspects of Human-System Interactions. In: Proceeding of the 6th International Conference on Human System Interaction HSI´2013 June 6-8, 2013, Sopot, Poland.

Anderson, Craig A., Ihori, Nobuko, Bushman, Brad J., Rothstein, Hannah R., Shibuya, Akiko, Swing, Edward L.,Sakamoto, Akira and Saleem, Muniba (2010): Violent video game effects on aggression, empathy, and prosocial behaviour in Eastern and Western countries. In Psychological Bulletin, 136 (2) pp. 151-173

Argall, Brenna D., Chernova, Sonia, Veloso, Manuela and Browning, Brett (2009): A Survey of Robot Learning from Demonstration. In Robotics and Autonomous Systems, 57 (5) pp. 469-483

Asada, Minoru, Hosoda, Koh, Kuniyoshi, Yasuo, Ishiguro, Hiroshi, Inui, Toshio, Yoshikawa, Yuichiro, Ogino, Masaki and Yoshida, Chisato (2009): Cognitive Developmental Robotics: A Survey. In IEEE Transactions on Autonomous Mental Development, 1 (1) pp. 12-34

Association, American Psychiatric (2000): The Diagnostic and Statistical Manual of Mental Disorders. Arlington, USA, American Psychiatric Publishing

Association, The British Automation and Robot (2013). A practical guide to machine safety application, legislation and standards. Retrieved 30 May 2013 from Bara.org.uk: http://www.bara.org.uk/info/safety/A_Practical_Gui...

Aurora project. URL: http://www.aurora-project.com/. Last accessed 16 April 2013.

BARA (The British Automation and Robot Association). A practical guide to machine safety application, legislation and standards. URL: http://www.bara.org.uk/info/info_safety.html. Accessed December, 2012.

Bartneck, Christoph and Hu, Jun (2004): Rapid Prototyping for Interactive Robots. In: Proceedings of the 8th Conference on Intelligent Autonomous Systems IAS-8 2004, Amsterdam, Netherlands. pp. 136-145

Bekele, Esubalew T., Lahiri, Utama, Swanson, Amy R., Crittendon, Julie A., Warren, Zachary E. and Sarkar, Nilanjan (2013): A Step Towards Developing Adaptive Robot-Mediated Intervention Architecture (ARIA) for Children With Autism. In IEEE Transactions on Neural Systems and Rehabilitation Engineering, 22 (2) pp. 289-299

Billard, Aude, Robins, Ben, Nadel, Jacqueline and Dautenhahn, Kerstin (2006): Building Robota, a Mini-Humanoid Robot for the Rehabilitation of Children with Autism. In RESNA Assistive Technology Journal, 19 (1) pp. 37-49

Breazeal, Cynthia, Brooks, Andrew G., Gray, Jesse, Hoffman, Guy, Kidd, Cory D., Lee, Hans, Lieberman, Jeff,Lockerd, Andrea and Chilongo, David (2004): Tutelage and Collaboration for Humanoid Robots. InInternational Journal of Humanoid Robot, 1 (2) pp. 315-348

Calinon, Sylvain, D'halluin, Florent, Sauser, Eric, Caldwell, Darwin and Billard, Aude (2010): Learning and reproduction of gestures by imitation: An approach based on Hidden Markov Model and Gaussian Mixture Regression. In IEEE Robotics and Automation Magazine, 17 (2) pp. 44-54

Cangelosi, Angelo, Metta, Giorgio, Sagerer, Gerhard, Nolfi, Stefano, Nehaniv, Chrystopher, Fischer, Kerstin, Tani, Jun, Belpaeme, Tony, Sandini, Giulio, Fadiga, Luciano, Wrede, Britta, Rohlfing, Katharina, Tuci, Elio,Dautenhahn, Kerstin, Saunders, Joe and Zeschel, Arne (2010): Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics. In IEEE Transactions on Autonomous Mental Development, 2 (3) pp. 167-195

Chatley, Amiy R., Dautenhahn, Kerstin, Walters, Mick L., Syrdal, Dag S. and Christianson, Bruce (2010): Theatre as a Discussion Tool in Human-Robot Interaction Experiments - A Pilot Study. In: Conference on Advances in Computer-Human Interactions ACHI 2010 February 10-16, 2010, St. Maarten, Netherlands Antilles. pp. 73-78

COGNIRON. URL: http://www.cogniron.org/final/Home.php. Last accessed 11 April 2013

Cuijpers, Raymond H., Bruna, Maarten T., Ham, Jaap R. C. and Torta, Elena (2011): Attitude towards Robots Depends on Interaction But Not on Anticipatory Behaviour. In: Mutlu, Bilge, Bartneck, Christoph, Ham, Jaap,Evers, Vanessa and Kanda, Takayuki (eds.). "Social Robotics Lecture Notes in Computer Science". Berlin, Germany: pp. 163-172

Dahlback, Nils, Jonsson, Arne and Ahrenberg, Lars (1993): Wizard of Oz Studies -- Why and How. In: Gray, Wayne D., Hefley, William and Murray, Dianne (eds.) International Workshop on Intelligent User Interfaces 1993January 4-7, 1993, Orlando, Florida, USA. pp. 193-200

Dautenhahn, Kerstin (1999): Robots as Social Actors: AURORA and The Case of Autism. In: Proceedings Third Cognitive Technology Conference CT99 August, 1999, San Francisco, USA.

Dautenhahn, Kerstin (2007b): Methodology and Themes of Human-Robot Interaction: A Growing Research Field. In International Journal of Advanced Robotic Systems, 4 (1) pp. 103-108

Dautenhahn, Kerstin (2003): Roles and functions of robots in human society: implications from research in autism therapy. In Robotica, 21 (4) pp. 443-452

Dautenhahn, Kerstin (1995): Getting to know each other-Artificial social intelligence for autonomous robots. InRobotics and Autonomous Systems, 16 (2) pp. 333-356

Dautenhahn, Kerstin (1994): Trying to Imitate - a Step Towards Releasing Robots from Social Isolation. In: Gaussier, Philippe and Nicoud, Jean-Daniel (eds.) Proceedings From Perception to Action Conference September 7-9, 1994, Lausanne, Switzerland. pp. 290-301

Dautenhahn, Kerstin (2007a): Socially intelligent robots: dimensions of human - robot interaction. In Philosophical Transactions of the Royal Society B: Biological Sciences, 362 (1480) pp. 679-704

Dautenhahn, Kerstin and Robins, Ben (2006): The role of the experimenter in HRI research - a case study evaluation of children with autism interacting with a robotic toy. In: The 15th IEEE International Symposium on Robot and Human Interactive Communication RO-MAN 2006 September 6-8, 2006, Hatfield, United Kingdom. pp. 646-651

Dautenhahn, Kerstin and Werry, Iain (2004): Towards Interactive Robots in Autism Therapy: Background, Motivation and Challenges. In Pragmatics and Cognition, 12 (1) pp. 1-35

Dautenhahn, Kerstin, Woods, Sarah, Kaouri, Christina, Walters, Michael L., Koay, Kheng Lee and Werry, Iain (2005): What is a Robot Companion - Friend, Assistant or Butler. In: IEEE IRS/RSJ International Conference on Intelligent Robots and Systems August 2-6, 2005, Edmonton, Canada. pp. 1488-1493

Davis, Phyllis K. (1999): The Power of Touch - The Basis for Survival, Health, Intimacy, and Emotional Well-Being.Carlsbad,California USA, Hay House Inc

Dennett, Daniel C. (1989): The origins of selves. In Cogito, 3 (3) pp. 163-173

Derbinsky, Nate, Ho, Wan Ching, Duque, Ismael, Saunders, Joe and Dautenhahn, Kerstin (2013): Resource-Efficient Methods for Feasibility Studies of Scenarios for Long-Term HRI Studies. In: Miller, Leslie (ed.) February 24- March 1, 2013, Nice,France. pp. 95-101

Dickerson, Paul, Robins, Ben and Dautenhahn, Kerstin (2013): Where the action is: A conversation analytic perspective on interaction between a humanoid robot, a co-present adult and a child with an ASD. InInteraction Studies, 14 (2) pp. 296-316

Diehl, Joshua J., Schmitt, Lauren M., Villano, Michael and Crowell, Charles R. (2012): The clinical use of robots for individuals with Autism Spectrum Disorders: A critical review. In Research in Autism Spectrum Disorders, 6 (1) pp. 249-262

Duffy, Brian R. (2003): Anthropomorphism and the social robot. In Robotics and Autonomous Systems, 42 (3) pp. 177-190

Dye, Matthew W.G., Green, Shawn C. and Bavelier, Daphne (2009): Increasing speed of processing with action video games. In Current Directions in Psychological Science, 18 (6) pp. 321-326

François, Dorothée, Polani, Daniel and Dautenhahn, Kerstin (2008): Towards Socially Adaptive Robots: A Novel Method for Real Time Recognition of Human-Robot Interaction Styles. In: 8th IEEE-RAS International Conference on Humanoid Robots Humanoids 2008 December 1-3, 2008, Daejeon, Korea. pp. 353-359

Goodrich, Michael A. and Schultz, Alan C. (2007): Human-Robot Interaction: A Survey. In Foundations and Trends in Human-Computer Interaction, 1 (3) pp. 203-275

Gould, John D., Conti, John and Hovanyecz, Todd (1983): Composing letters with a simulated listening typewriter. In Communications of the ACM, 26 (4) pp. 295-308

Green, Anders, Hüttenrauch, Helge and Eklundh, Kerstin Severinson (2004): Applying the Wizard of Oz Framework to Cooperative Service discovery and Configuration. In: ROMAN 2004 13th IEEE International Workshop on Robot and Human Interactive Communication September 20-22, 2004, Kurashiki, Okayama Japan. pp. 575-580

Griffiths, Mark (2002): The educational benefits of video games. In Education and Health, 20 (3) pp. 47-51

Harvey, Inman, Paolo, Ezequiel Di, Wood, Rachel, Quinn, Matt and Tuci, Elio (2005): Evolutionary Robotics: A New Scientific Tool for Studying Cognition. In Artificial Life, 11 (1) pp. 79-98

Heinzmann, Jochen and Zelinsky, Alexander (2003): Quantitative safety guarantees for physical human-robot interaction. In The International Journal of Robotics Research, 22 (7) pp. 479-504

Herrmann, Guido and Melhuish, Chris (2010): Towards safety in human-robot interaction. In International Journal of Social robotics, 2 (3) pp. 217-219

Hertenstein, Matthew J., Verkamp, Julie M., Kerestes, Alyssa M. and Holmes, Rachel M. (2006): The communicative functions of touch in humans, non-human primates, and rats: A review and synthesis of the empirical research. In Genetic, Social and General Psychology Monographs, 132 (1) pp. 5-94

Heylen, Dirk, Dijk, Betsy van and Nijholt, Anton (2012): Robotic rabbit companions: Amusing or a nuisance. InJournal of Multimodal User Interfaces, 5 (1) pp. 53-59

Huijnen, Claire, Badii, Atta, Heuvel, Herjan van den, Caleb-Solly, Praminda and Thiemert, Daniel (2011): "Maybe It Becomes a Buddy, But Do Not Call It a Robot" – Seamless Cooperation between Companion Robotics and Smart Homes. In: Keyson, David V., Maher, Mary Lou, Streitz, Norbert, Cheok, Adrian D., Augusto, Juan C., Wichert, Reiner, Englebienne, Gwenn, Aghajan, Hamid K. and Kröse, Ben J. A. (eds.). "Ambient Intelligence Lecture Notes in Computer Science". Berlin, Germany: Springer Linkpp. 324-329

Hüttenrauch, Helge, A.Topp, Elin and Eklundh, Kerstin Severinson (2009): The Art of Gate-Crashing Bringing HRI into users' homes. In Interaction Studies, 10 (3) pp. 274-297

Iacono, Iolanda, Lehmann, Hagen, Marti, Patrizia, Robins, Ben and Dautenhahn, Kerstin (2011): Robots as social mediators for children with autism - A preliminary analysis comparing two different robotic platforms. In: IEEE ICDL - EPIROB 2011, first Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics August 24-27, 2011, Frankfurt, Germany. pp. 1-6

Iacucci, Giulio and Kuutti, Kari (2002): Everyday Life as a Stage in Creating and Performing Scenarios for Wireless Devices. In Personal and Ubiquitous Computing, 6 (4) pp. 299-306

Ijspeert, Auke J., Crespi, Alessandro and Cabelguen, Jean-Marie (2005): Simulation and Robotics Studies of Salamander Locomotion. Applying Neurobiological Principles to the Control of Locomotion in Robots. InNeuroinformatics, 3 (3) pp. 171-196

Jackson, Linda A., Eye, Alexander von, Fitzgerald, Hiram E., Witt, Edward A. and Zhao, Yong (2011): Internet use, videogame playing and cell phone use as predictors of children's body mass index (BMI), body weight, academic performance, and social and overall self-esteem. In Computers in Human Behavior, 27 (1) pp. 599-604

Jr, Peter H. Kahn,, Friedman, Batya, Perez-Granados, Deanne R. and Freier, Nathan G. (2004): Robotic pets in the lives of preschool children. In: CHI 04 Extended Abstracts on Human Factors in Computing Systems April 24-29, 2004, Vienna, Austria. pp. 1449-1452

Kanda, Takayuki, Sato, Rumi, Saiwaki, Naoki and Ishiguro, Hiroshi (2007): A two-month Field Trial in an Elementary School for Long-term Human-robot Interaction. In IEEE Transactions on Robotics, 23 (5) pp. 962-971

Kanda, Takayuki, Shiomi, Masahiro, Miyashita, Zenta, Ishiguro, Hiroshi and Hagita, Norihiro (2010): A Communication Robot in a Shopping Mall. In IEEE Transactions on Robotics, 26 (5) pp. 897-913

Kidd, Cory D. and Breazeal, Cynthia (2008): Robots at home: Understanding long-term human-robot interaction. In: Proceedings 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems September 22-26, 2008, Nice, France. pp. 3230-3235

Kierkegaard, Patrick (2008): Video games and aggression. In International Journal of Liability and Scientific Enquiry, 1 (4) pp. 411-417

Kim, Elizabeth S., Berkovits, Lauren D., Bernier, Emily P., Leyzberg, Dan, Shic, Frederick, Paul, Rhea and Scassellati, Brian (2012): Social Robots as Embedded Reinforcers of Social Behavior in Children with Autism. In Journal of Autism and Developmental Disorders, 43 (5) pp. 1038-1049

Koay, K. L., Dautenhahn, K., Woods, S. N. and Walters, M. L. (2006): Empirical results from using a comfort level device in human-robot interaction studies. In: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction 2006. pp. 194-201

Koay, K. L., Lakatos, G., Syrdal, D.S., Gácsi, M., Bereczky, B., Dautenhahn, Kerstin, Miklosi, A. and Walters, M. L. (2013): Hey! There is someone at your door. A Hearing Robot using Visual Communication Signals of Hearing Dogs to Communicate Intent. In: IEEE ALIFE 2013 April 16-19, 2013, Singapore, Singapore.

Koay, Kheng Lee, Syrdal, Dag Sverre, Walters, Michael L. and Dautenhahn, Kerstin (2009): Five Weeks in the Robot House -- Exploratory Human-Robot Interaction Trials in a Domestic Setting. In: Proceedings of the 2009 International Conference on Advances in Computer-Human Interactions 2009. pp. 219-226

Koay, Kheng Lee, Syrdal, Dag Sverre, Walters, Michael L. and Dautenhahn, Kerstin (2009): A User Study on Visualization of Agent Migration between Two Companion Robots. In: 13th International Conference on Human-Computer Interaction HCII 2009 July 19-24, 2009, San Diego, USA.

Koay, K. L., Syrdal, D. S., Dautenhahn, K., Arent, K., Małek, Ł and Kreczmer, B. (2011): Companion Migration – Initial Participants' Feedback from a Video-Based Prototyping Study. In: Wang, Xiangyu (ed.). "Mixed Reality and Human-Robot Interaction". Dordrecht, Netherlands: Springer. pp. 133-151

Konidaris, George, Kuindersma, Scott, Grupen, Roderic and Barto, Andrew (2012): Robot learning from demonstration by constructing skill trees. In The International Journal of Robotics Research, 31 (3) pp. 360-375

Lee, Jun Ki, Toscano, Robert Lopez, Stiehl, Walter D. and Breazeal, Cynthia (2008): The Design of a Semi-Autonomous Robot Avatar for Family Communication and Education. In: Buss, Martin and Kühnlenz, Kolja (eds.) Proceeding of 17th IEEE International Symposium on Robot and Human Interactive Communication RO-MAN August 1-3, 2008, Munich, Germany. pp. 166-173

Lohan, Katrin S., Pitsch, Karola, Rohlfing, Katharina J., Fischer, Kerstin, Saunders, Joe, Lehmann, H., Nehaniv, Christopher L. and Wrede, Britta (2011): Contingency allows the robot to spot the tutor and to learn from interaction. In: IEEE International Conference on Development and Learning ICDL 2011 August 24-27, 2011, Frankfurt, Germany. pp. 1-8

Lohse, Manja, Hanheide, Marc, Wrede, Britta, Walters, Michael L., Koay, Kheng L., Syrdal, Dag S., Green, Anders, Hüttenrauch, Helge, Dautenhahn, Kerstin, Sagerer, Gerhard and Severinson-Eklundh, Kerstin (2008): Evaluating extrovert and introvert behaviour of a domestic robot - a video study. In: Proceedings of the IEEE 17th International Symposium on Robot and Human Interactive Communication RO-MAN 2008 August 1-3, 2008, Munich, Germany. pp. 488-493

Lungarella, Max, Metta, Giorgio, Pfeifer, Rolf and Sandini, Giulio (2003): Developmental Robotics: A Survey. In Connection Science, 15 (4) pp. 151-190

Lyon, Caroline, Nehaniv, Chrystopher L. and Saunders, Joe (2012): Interactive Language Learning by Robots: The Transition from Babbling to Word Forms. In PLoS One, 7 (6)

MacDorman, Karl F. and Ishiguro, Hiroshi (2006): The uncanny advantage of using androids in cognitive and social science research. In Interaction Studies, 7 (3) pp. 297-337

Marti, Patrizia and Bannon, Liam J. (2009): Exploring User-Centred Design in practice: Some caveats. In Knowledge, Technology & Policy, 22 (1) pp. 7-15

Maulsby, David, Greenberg, Saul and Mander, Richard (1993): Prototyping an Intelligent Agent through Wizard of Oz. In: Ashlund, Stacey, Mullet, Kevin, Henderson, Austin, Hollnagel, Erik and White, Ted (eds.) Proceedings of the ACM CHI 93 Human Factors in Computing Systems Conference April 24-29, 1993, Amsterdam, The Netherlands. pp. 277-284

Melson, Gail F., Kahn, Peter H. Jr., Beck, Alan, Friedman, Batya, Roberts, Trace, Garrett, Erik and Gill, Brian T. (2009): Children's behavior toward and understanding of robotic and living dogs. In Journal of Applied Developmental Psychology, 30 (2) pp. 92-102

Metta, Giorgio, Natale, Lorenzo, Nori, Francesco, Sandini, Giulio, Vernon, David, Fadiga, Luciano, Hofsten, Claes von, Rosander, Kerstin, Lopes, Manuel, Santos-Victor, José, Bernardino, Alexandre and Montesano, Luis (2010): The iCub Humanoid Robot: An Open-Systems Platform for Research in Cognitive Development. In Neural Networks, 23 (8) pp. 1125-1134

Mueller, Florian, Vetere, Frank, Gibbs, Martin R., Kjeldskov, Jesper, Pedell, Sonja and Howard, Steve (2005): Hug over a distance. In: Proceedings of ACM CHI 2005 Conference on Human Factors in Computing Systems 2005. pp. 1673-1676

Mutlu, Bilge, Kanda, Takayuki, Forlizzi, Jodi, Hodgins, Jessica and Ishiguro, Hiroshi (2012): Conversational gaze mechanisms for humanlike robots. In ACM Transactions on Interactive Intelligent Systems, 1 (2) p. 33

Nehaniv, Chrystopher L. and Dautenhahn, Kerstin (eds.) (2007): Imitation and Social Learning in Robots, Humans and Animals: Behavioural, Social and Communicative Dimensions. Cambridge, United Kingdom, Cambridge University Press

Nehaniv, Chrystopher L., Förster, Frank, Saunders, Joe, Broz, Frank, Antonova, Elena, Köse, Hatice, Lyon, Caroline, Lehmann, Hagen, Sato, Yo and Dautenhahn, Kerstin (2013): Interaction and Experience in Enactive Intelligence and Humanoid Robotics. In: Proceedings of the IEEE Symposium on Artificial Life (IEEE ALIFE), IEEE Symposium Series on Computational Intelligence (IEEE SSCI) 2013 April 15-19, 2013, Singapore, Singapore.

Newell, Alan F., Carmichael, A., Morgan, M. and Dickinson, A. (2006): The use of theatre in requirements gathering and usability studies. In Interacting with Computers, 18 (5) pp. 996-1011

Nikolaidis, Stefanos and Shah, Julie (2013): Human-Robot Cross-Training: Computational Formulation, Modeling and Evaluation of a Human Team Training Strategy. In: Proceedings of the 8th ACM/IEEE international conference on Human-robot interaction March 3-6, 2013, Tokyo, Japan. pp. 33-40

Nolfi, Stefano and Floreano, Dario (2000): Evolutionary robotics: The biology, intelligence and technology of self-organizing machines. Cambridge, USA, MIT Press

O'Hare, Gregory M.P., Duffy, Brian R., Bradley, John F. and Martin, Alan N. (2003): Agent Chameleons: Moving Minds from Robots to Digital Information Spaces. In: Proceedings of Autonomous Minirobots for Research and Edutainment 2003. pp. 18-21

Papadopoulos, Fotios (2012): Socially Interactive Robots as Mediators in Human-Human Remote Communication. PhD thesis, University of Hertfordshire. http://uhra.herts.ac.uk/bitstream/handle/2299/9151/08171963%20Papadopoulos%20Fotios%20-%20final%20PhD%20submission.pdf;jsessionid=2854AA98DFF6A825C09F0D144F5ADBD2?sequence=1

Papadopoulos, Fotios, Dautenhahn, Kerstin and Ho, Wan Ching (2012): Exploring the use of robots as social mediators in a remote human-human collaborative communication experiment. In Paladyn- Journal of Behavioural Robotics, 3 (1) pp. 1-10

Parlitz, Christopher, Hägele, Martin, Klein, Peter, Seifert, Jan and Dautenhahn, Kerstin (2008): Care-o-Bot 3 - Rationale for Human-Robot Interaction Design. In: Proceedings of 39th International Symposium on Robotics ISR 2008 October 15-17, 2008, Seoul, South Korea. pp. 275-280

Pervez, Aslam and Ryu, Jeha (2008): Safe Physical Human Robot Interaction - Past, Present and Future. In Journal of Mechanical Science and Technology, 22 (3) pp. 469-483

Pfeifer, Rolf (2007): How the body shapes the way we think: a new view of intelligence. Cambridge, USA, MIT Press

Reeves, Byron and Nass, Clifford (1996): The media equation: How people treat computers, television and new media like real people and places. Cambridge University Press

Reiser, Ulrich, Parlitz, Christopher and Klein, Peter (2013b): Care-O-bot® 3 – Vision of a robot butler. In: Trappl, Robert (ed.). "Your Virtual Butler - The Making-of". Springer. pp. 97-116

Reiser, Ulrich, Jacobs, Theo, Arbeiter, Georg, Parlitz, Christopher and Dautenhahn, Kerstin (2013a): Care-O-bot® 3 - Vision of a Robot Butler. In: Trappl, Robert (ed.). "Your Virtual Butler - The Making-of". Lecture Notes in Artificial Intelligence. Berlin, Germany: Springer. pp. 97-116

Robins, Ben, Dautenhahn, Kerstin and Dubowski, Janek (2004): Investigating Autistic Children's Attitudes Towards Strangers with the Theatrical Robot - A New Experimental Paradigm in Human-Robot Interaction Studies. In: 13th IEEE International Workshop on Robot and Human Interactive Communication, ROMAN 2004 September 20-22, 2004, Kurashiki, Okayama, Japan. pp. 557-562

Robins, Ben, Dautenhahn, Kerstin and Dubowski, Janek (2005): Robots as isolators or mediators for children with autism? A cautionary tale. In: Proceedings of the Symposium on Robot Companions: Hard Problems and Open Challenges in Human-Robot Interaction, AISB 05 April 14-15, 2005, Hatfield, United Kingdom. pp. 82-88

Robins, Ben, Ferrari, Ester, Dautenhahn, Kerstin, Kronreif, Gernot, Prazak-Aram, Barbara, Gelderblom, Gert-Jan, Bernd, Tanja, Caprino, Francesca, Laudanna, Elena and Marti, Patrizia (2010): Human-centred design methods: Developing scenarios for robot assisted play informed by user panels and field trials. In International Journal of Human-Computer Studies, 68 (12) pp. 873-898

Robins, B., Dautenhahn, K., Boekhorst, R. Te and Billard, A. (2005a): Robotic assistants in therapy and education of children with autism: can a small humanoid robot help encourage social interaction skills?. In Universal Access in the Information Society, 4 (2) pp. 105-120

Robins, Ben, Dautenhahn, Kerstin and Dickerson, Paul (2009): From Isolation to Communication: A Case Study Evaluation of Robot Assisted Play for Children with Autism with a Minimally Expressive Humanoid Robot. In: Proceedings of the 2009 International Conference on Advances in Computer-Human Interactions 2009. pp. 205-211

Robins, Ben, Dautenhahn, Kerstin, Ferrari, Ester, Kronreif, Gernot, Prazak-Aram, Barbara, Marti, Patrizia, Iacono, Iolanda, Gelderblom, Gert Jan, Bernd, Tanja, Caprino, Francesca and Laudanna, Elena (2012): Scenarios of robot-assisted play for children with cognitive and physical disabilities. In Interaction Studies, 13 (2) pp. 189-234

Rossano, Federico (2013): Sequence organization and timing of bonobo mother-infant interactions. In Interaction Studies, 14 (2) pp. 160-189

Sabanovic, Selma, Michalowski, Marek P. and Simmons, Reid (2006): Robots in the Wild: Observing human-robot social interaction outside the lab. In: Proceedings of the 9th International Workshop on Advanced Motion Control AMC 2006 March 27-29, 2006, Istanbul, Turkey. pp. 576-581

Salem, Maha, Kopp, Stefan, Wachsmuth, Ipke, Rohlfing, Katharina and Joublin, Frank (2012): Generation and Evaluation of Communicative Robot Gesture. In International Journal of Social robotics, 4 (2) pp. 201-217

Salter, Tamie, Michaud, François and Larouche, Hélène (2010): How wild is wild? A taxonomy to categorize the wildness of child-robot interaction. In International Journal of Social robotics, 2 (4) pp. 405-415

Santis, Agostino De, Siciliano, Bruno, Luca, Alessandro De and Bicchi, Antonio (2008): An atlas of physical human-robot interaction. In Mechanism and Machine Theory, 43 (3) pp. 253-270

Scassellati, Brian, Admoni, Henny Y. and Matarić, Maja J. (2012): Robots for Use in Autism Research. In Annual Review of Biomedical Engineering, 14 pp. 275-294

Scholtz, Jean (2003): Theory and Evaluation of Human Robot Interactions. In: Proceedings of the 36th Annual Hawaii International Conference on System Sciences HICSS03 January 6-9, 2003, Hawaii, USA. p. 125

Schulz, R., Beach, S.R., Matthews, J. Tabolt, Courtney, K.L. and De Vito Dabbs, A.J. (2012): Designing and evaluating quality of life technologies: An interdisciplinary approach. In Proceedings of the IEEE, 100 (8) pp. 2397-2409

Sharkey, Amanda and Sharkey, Noel (2011): Children, the Elderly, and Interactive Robots. In IEEE Robotics and Automation Magazine, 18 (1) pp. 32-38

Sharkey, Amanda and Sharkey, Noel (2012): Granny and the robots: ethical issues in robot care for the elderly. In Ethics and Information Technology, 14 (1) pp. 27-40

Shen, Qiming, Kose-Bagci, Hatice, Saunders, Joe and Dautenhahn, Kerstin (2011): The Impact of Participants' Beliefs on Motor Interference and Motor Coordination in Human-Humanoid Interaction. In IEEE Transactions on Autonomous Mental Development, 3 (1) pp. 6-16

Shibata, Takanori, Kawaguchi, Yukitaka and Wada, Kazuyoshi (2012): Investigation on People Living with Seal Robot at Home - Analysis of Owners' Gender Differences and Pet Ownership Experience. In International Journal of Social robotics, 4 (1) pp. 56-63

Shimada, Michihiro and Kanda, Takayuki (2012): What is the appropriate speech rate for a communication robot?. In Interaction Studies, 13 (3) pp. 408-435

Sung, Ja-Young, Guo, Lan, Grinter, Rebecca E. and Christensen, Henrik I. (2007): "My Roomba Is Rambo": Intimate Home Appliances. In: Krumm, John, Abowd, Gregory D., Seneviratne, Aruna and Strang, Thomas (eds.) UbiComp 2007 Ubiquitous Computing - 9th International Conference September 16-19, 2007, Innsbruck, Austria. pp. 145-162

Sung, Ja-Young, Grinter, Rebecca E., Christensen, Henrik I. and Guo, Lan (2008): Housewives or technophiles?: understanding domestic robot owners. In: Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction 2008. pp. 129-136

Syrdal, Dag Sverre, Otero, Nuno and Dautenhahn, Kerstin (2008): Video Prototyping in Human-Robot Interaction: Results from a Qualitative Study. In: Abascal, Julio, Fajardo, Inmaculada and Oakley, Ian (eds.) Proceedings of the European Conference on Cognitive Ergonomics 2008 September 16-19, 2008, Madeira, Portugal. pp. 132-140

Syrdal, Dag Sverre, Dautenhahn, Kerstin, Walters, Michael L., Koay, Kheng Lee and Otero, Nuno R. (2011): The Theatre methodology for facilitating discussion in human-robot interaction on information disclosure in a home environment. In: Proceedings RO-MAN 2011, 20th IEEE International Symposium on Robot and Human Interactive Communication July 31- August 3, 2011, Georgia, USA. pp. 479-484

Syrdal, Dag Sverre, Koay, Kheng L., Gácsi, Márta, Walters, Michael L. and Dautenhahn, Kerstin (2010): Video Prototyping of Dog-Inspired Non-verbal Affective Communication for an Appearance Constrained Robot. In: Proceedings IEEE RO-MAN 2010, 19th IEEE International Symposium in Robot and Human Interactive Communication September 12-15, 2010, Viareggio, Italy. pp. 632-637

Teh, James Keng Soon, Cheok, Adrian David, Peiris, Roshan L., Choi, Yongsoon, Thuong, Vuong and Lai, Sha (2008): Huggy Pajama: a mobile parent and child hugging communication system. In: Proceedings of ACM IDC08 Interaction Design and Children 2008. pp. 250-257

Thomaz, Andrea L. and Breazeal, Cynthia (2008): Teachable robots: Understanding human teaching behavior to build more effective robot learners. In Artificial Intelligence, 172 (6) pp. 716-737

Thomaz, Andrea L. and Cakmak, Maya (2009): Learning about objects with human teachers. In: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction 2009. pp. 15-22

Tolman, Edward C. (1948): Cognitive Maps in rats and men. In The Psychological Review, 55 (4) pp. 189-208

Turkle, Sherry (2007): Authenticity in the age of digital companions. In Interaction Studies, 8 (3) pp. 501-517

Turkle, Sherry, Taggart, Will, Kidd, Cory D. and Dasté, Olivia (2006): Relational artifacts with children and elders: the complexities of cybercompanionship. In Connection Science, 18 (4) pp. 347-361

Turner, Mark (1996): The Literary Mind: The Origins of Thought and Language. New York, USA, Oxford University Press

Vernon, David, Hofsten, Claes von and Fadiga, Luciano (2011): A Roadmap for Cognitive Development in Humanoid Robots. Berlin, Germany, Springer

Wada, Kazuyoshi and Shibata, Takanori (2007): Living with seal robots in a care house - Evaluations of social and physiological influences. In: IEEE/RSJ International Conference on Intelligent Robots and Systems 2006, October 9-15, 2006, Beijing, China. pp. 4940-4945

Walters, Michael L., Lohse, Manja, Hanheide, Marc, Wrede, Britta, Syrdal, Dag Sverre, Koay, Kheng L., Green, Anders, Hüttenrauch, Helge, Dautenhahn, Kerstin, Sagerer, Gerhard and Eklundh, Kerstin Severinson (2011): Evaluating the behaviour of domestic robots using video-based studies. In Advanced Robotics, 25 (18) pp. 2233-2254

Walters, Michael L., Syrdal, Dag S., Dautenhahn, Kerstin, Boekhorst, Rene te and Koay, Kheng L. (2008): Avoiding the uncanny valley: robot appearance, personality and consistency of behavior in an attention-seeking home scenario for a robot companion. In Autonomous Robots, 24 (2) pp. 159-178

Webb, Barbara (2001): Can robots make good models of biological behaviour?. In Behavioural and Brain Sciences, 24 (6) pp. 1033-1050

Weir, Sylvia and Emanuel, Ricky (1976). Using LOGO to catalyse communication in an autistic child. University of Edinburgh

Werry, Iain and Dautenhahn, Kerstin (2007): Human-Robot Interaction as a Model for Autism Therapy: An Experimental Study with Children with Autism. In: Laubichler, Manfred D. and Müller, Gerd B. (eds.). "Modeling Biology: Structures, Behaviors, Evolution" (Vienna Series in Theoretical Biology). Massachusetts, USA: MIT Press. pp. 283-299

Werry, Iain, Dautenhahn, Kerstin, Ogden, Bernard and Harwin, William (2001): Can Social Interaction Skills Be Taught by a Social Agent? The Role of a Robotic Mediator in Autism Therapy. In: Nehaniv, Chrystopher L., Dautenhahn, Kerstin and Beynon, Meurig (eds.) Proceedings of the 4th International Conference on Cognitive Technology: Instruments of Mind (CT2001) August 6-9, 2001, Warwick, United Kingdom. pp. 57-74

Wing, Lorna (1996): The Autistic Spectrum. Constable Press

Woods, Sarah N., Walters, Michael L., Koay, Kheng Lee and Dautenhahn, Kerstin (2006): Methodological Issues in HRI: A Comparison of Live and Video-Based Methods in Robot to Human Approach Direction Trials. In: Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication RO-MAN 06 September 6-8, 2006, Hatfield, United Kingdom. pp. 51-58

Woods, Sarah, Dautenhahn, Kerstin, Kaouri, Christina, Boekhorst, Rene te, Koay, Kheng Lee and Walters, Michael L. (2007): Are Robots Like People? Relationships between Participant and Robot Personality Traits in Human-Robot Interaction Studies. In Interaction Studies, 8 (2) pp. 281-305

Woods, Sarah N., Walters, Michael L., Koay, Kheng Lee and Dautenhahn, Kerstin (2006): Comparing Human Robot Interaction Scenarios Using Live and Video Based Methods: Towards a Novel Methodological Approach. In: The 9th IEEE International Workshop on Advanced Motion Control AMC06 March 27-29, 2006, Istanbul, Turkey. pp. 770-775

Yamaoka, Fumitaka, Kanda, Takayuki, Ishiguro, Hiroshi and Hagita, Norihiro (2007): How contingent should a lifelike robot be? The relationship between contingency and complexity. In Connection Science, 19 (2) pp. 143-162

Yamazaki, Kimitoshi, Ueda, Ryohei, Nozawa, Shunichi, Kojima, Mitsuharu, Okada, Kei, Matsumoto, Kiyoshi, Ishikawa, Masaru, Shimoyama, Isao and Inaba, Masayuki (2012): Home-assistant robot for an aging society. In Proceedings of the IEEE, 100 (8) pp. 2429-2441
