Begüm Cerrahoğlu, İlayda Güneysu, and Selin Yılmaz | Bilkent University

Social robots are becoming more and more prominent in our lives with developments in the field of social robotics. As humans have a natural tendency to form relationships, human-robot relationships seem inevitable. We will take these relationships and their ethical implications into focus and defend the position that, even though these relationships have limitations, there are significant findings demonstrating benefits that should not be ignored. In discussing this subject, we will not ignore the argument of deception, and we will explain some of the problems that can arise from involving robots in such relationships. We will then take all this information into account and argue that these problems are solvable with careful consideration. In conclusion, we defend the use of robots in areas that will be beneficial for humans.

Even just by looking at futuristic media, we can see that humans do not imagine a future devoid of robots, and people are right to think this way. Humanoid robots have already been employed in many aspects of our lives, especially in the service industry of developed countries such as Japan (Ihara, 2015). The change is already happening as we let more and more of them into our lives.

Stories of crafted beings serving us have existed for many years: as laborers, as defenders like the Golem of Jewish folklore, or even as lovers, as in the myth of Pygmalion. Today, we are faced with the possibility of realizing this long-standing human fantasy. Humans have long crafted machines that could serve them in many tasks: automatons. The word refers to a self-operating machine whose design indicates that it follows an automated function ("Automaton," 2003). Automatons have existed for quite some time; the first instances were simple machines such as clocks. Even before electricity, humanoid automatons could be found as far back as the 13th century, like the ones at Hesdin (Truitt, 2010).

However, robots that can be programmed to learn from experience are a relatively new phenomenon. With the development of artificial intelligence, we can program robots to learn from novel situations, which opens up new opportunities in human-robot interaction. We have come a long way from mechanical-looking robots limited to simple tasks to very human-looking robots that assist us through life as companions, therapists, and more. Many uses for these robots are possible, ranging from simple labor to inspiring us; but even though they provide convenience, a few eyebrows are raised when the topic turns to the implications of relationships formed between humans and robots.

To begin, we should look deeper into the definitions relevant to forming relationships, for a more precise understanding and evaluation of the possible relationships between humans and robots. We will discuss the term friendship to determine whether it can be used to describe these relationships. In his Nicomachean Ethics, Aristotle delves into the topic of moral character and the attainment of virtue. He puts great emphasis on friendship, stating that it is another essential virtue for one's life. He construes friendship as reciprocated goodwill whose source lies in virtue rather than in any other source, such as utility for each other or the desire for pleasure. Aristotle also adds that shared goodwill alone is not sufficient for friendship; awareness of this goodwill and of its source is also required (Aristotle, 2004, 1380b36–1381a21).

Although robots are capable of social interaction, at face value they do not have the basic qualifications to form or sustain a friendship. Even if we take their actions that better the welfare of the recipient as signs of goodwill, robots are not capable of being aware of this goodwill. In addition, it can be argued that robots are not capable of receiving such goodwill or betterment, since they do not feel or think as humans do. Sherman also touches on this subject, explaining that true friendships are based on continuously expressing one's goodness to the other and the other's goodness to oneself (1987, p. 594). Based on this definition, it can be stated that friendship, or any other relationship that falls under "philia" [1] as Aristotle worded it, does not straightforwardly describe relationships between robots and humans, or any other relationship with inanimate beings.

However, friendship can be looked at from another view: one that focuses on perceived relations rather than on the bounds of reality. In doing so, we can possibly overcome the reasons that make it impossible to refer to a relationship between an inanimate object and a human as a friendship. Although humans are aware that robots are just machines unable to reciprocate human emotions, they can still go on to feel otherwise. People not only acknowledge the usefulness of robots but also experience them as social others and begin to treat them as more than inanimate objects. Even when it is objectively clear that robots lack the animacy required for the friendship Aristotle described, we believe the discussion should not stop there. We must also consider the perceptions humans can have, such as affective mutuality, the feeling of being cared for and even loved by the robot. Throughout this paper, we will give examples of such perceptions in cases ranging from therapy and nurse robots to army robots fighting alongside soldiers. All of these cases depict a common pattern: humans grow attached to these robots. They feel a sense of emotional mutuality to the extent that they care for the well-being of the robot and extend unnecessary courtesies to it (de Graaf et al., 2016; Lammer et al., 2014; Wada & Shibata, 2007).

Here we begin to question: if people are acting this way, could a sense of symmetry have formed in their perceptions? If so, even if these relationships might not fully qualify as what Aristotle described, they do not have to be divorced from the scope of friendship; rather, they can constitute an alternative form of it. If the human feels that not only they but also the robot initiates friendship by forming goodwill, the first part of the description is fulfilled. The second part of Aristotle's description is helping each other achieve virtue. The literature to date has shown that this could be the case. Studies employing healthcare robots designed specifically to serve humans have shown that, in the company of these robots, human users do well and even thrive in terms of feeling emotionally supported, achieving goals, and developing social skills (Broadbent, Stafford, & MacDonald, 2009). As for the robots' part in this relationship, robots brought into existence for the purpose of serving humans fulfill their duties through these acts of service and thereby become virtuous. Through these relationships, both sides mutually improve.

Coming back to the definition of friendship: as long as humans can feel that their own goodwill is received and accepted while they move closer to virtue through the improvement of the self and their well-being, the definition should hold. Therefore, we argue that the human perception of the relationship formed with a robot can suffice for it to be referred to as an alternative form of friendship.

So far, we have discussed how we can refer to some human-robot relationships and mentioned cases showing that it is indeed possible to form beneficial relationships with robots beyond those formed with other inanimate beings. To look further into these relationships, it is also useful to mention some features of humans that allow for, and make them yearn for, such friendships. These features can be discussed in evolutionary and social terms. Humans evolved for social interaction (the social brain hypothesis [2]) and are sensitive to cues that initiate social relations (Dunbar, 1998). Social robotics aims to reinforce the perception that robots are autonomous agents able to manage social relations, so that they can be deployed in fields that require interaction (Biocca et al., 2003). One important factor in shaping this interaction process is anthropomorphism.

Anthropomorphism is generally defined as the assignment of human traits, or the attribution of mental states, to non-human entities, a phenomenon reinforced by our social brains (cited in Damiano & Dumouchel, 2018). Classically, anthropomorphism has been seen as a mistake in human cognition, something primitive and inefficient because it produces false positives; one example is thunder being misconceived as an intentional entity that humans then try to appease (Mitchell, 2005). However, the approach of social robotics differs markedly from this classical view: for social robotics, anthropomorphism is a tool that can enhance human-robot interaction.

In the following section, we will explain how anthropomorphism is achieved and what mechanisms within it help to ease human-robot relationships. There are several reasons that can lead humans to anthropomorphize robots. Firstly, social robots are physical objects, and their physical presence contributes to our perception of them as entities. Empirical evidence has shown that humans tend to interact more with embodied robots that they can touch as well as see, as opposed to robots without bodies (Jung & Lee, 2004). Another cause of anthropomorphism is movement, especially coordinated movement, in agents. Previous literature shows that humanoid robots are perceived as friendlier by children when their movement range is larger (Asselborn et al., 2017), and it is possible that observing coordinated movements activates a part of the human neural system called the mirror neuron system. Normally this system shows similar activation whether one performs an action oneself or watches another human perform it; this overlap helps us understand others and even learn from them more easily. The system is not limited to human movement, however, but also responds to non-biological agents whose movements resemble biological actions (Urquiza-Haas & Kotrschal, 2015; Engel et al., 2008). For this reason, coordinated movement might promote the attribution of human-like traits to artificial agents.

However, there is another phenomenon to consider: the uncanny valley. As a non-human figure's appearance becomes nearly human-like, it evokes a negative response accompanied by a feeling of eeriness, and when a human-like appearance is paired with non-human-like motion, these negative feelings are amplified (Mori, 2012; Ürgen et al., 2013). This issue, however, no longer threatens to push robot design toward the complete elimination of anthropomorphism. Since the initial findings, the importance of this discovery has been widely accepted, and research into ways of overcoming the phenomenon has yielded promising results: over repeated interactions, through increased emotional displays and the addition of other, more accurately humanizing features, researchers were able to change the initial negative attitudes evoked in humans (Zlotowski et al., 2015; Koschate et al., 2016).

The reasons above explain some of the paths that might lead humans to anthropomorphize robots, but we have not yet discussed the outcomes of this tendency, or why we should care whether we anthropomorphize at all. It has been suggested that similarity increases humans' empathic responses toward agents (Davis, 1996). This is closely related to social robotics and to the question of whether, and to what extent, people are capable of empathizing with robots. In interviews, soldiers who worked with army robots tended to attribute mental states to the robots and sometimes even felt sorry for them, which led them to engage in risky behaviors, going as far as trying to protect the robots at the cost of putting themselves in danger (Carpenter, 2013). This finding is striking considering that one of an army robot's most important purposes is to keep soldiers out of risky situations. The empathic response to robots is not limited to soldier colleagues. In one study, the level of personification was manipulated through variables such as a backstory describing the robot's experiences and its emotional and mental growth over its lifetime. When people read these stories before being asked to destroy the robot, they hesitated more than when they were not given personifying stories beforehand (Darling et al., 2015).

To this point, we have discussed possible definitions, some of the factors that can aid us in forming relationships that go beyond relationships with inanimate beings, and some consequences of anthropomorphization. We will now present some more striking cases where these relationships are put to work. One benefit of such relationships is their use for therapeutic purposes, especially in the treatment of autistic children. Autism is a developmental disorder that can impair an individual's functioning in many areas, especially in social situations; however, with early clinical intervention, autistic individuals show signs of improvement (Volkmar et al., 2004). Researchers have reasoned that the properties of robots would be helpful in a therapeutic setting for autistic children, and experiments testing whether robots are as effective therapists as their human counterparts indicate that they indeed are (So et al., 2019; Wainer et al., 2010; Robins et al., 2004; Miyamoto et al., 2005). The results indicate positive growth for autistic individuals in their impaired areas. And although present robots lack some capabilities, such as facial expressions, because of the difficulty of designing those features accurately, the addition of such capabilities to robot therapy is foreseen as technology improves (Scassellati et al., 2012).

So, if robots can be used for therapeutic purposes and are proven to be helpful, perhaps they should be used. It is true that children may grow attached to these agents and even show affection toward them (Kozima et al., 2005). Yet even though caring about these robots to the point of showing affection might seem irrational, the clear improvements seen in these therapies might not have been possible without such emotional perceptions. In the future, this can only get better as we start producing more specialized robots with better technology. Perhaps we can extend the scope and even treat other mental illnesses with the help of these artificial agents. If the goal is to bring the patient to maximum well-being at minimal cost, robots are a proven way of doing so.

Another context for using social robots is nursing, an area that might seem harder for a robot to do well, as it requires empathy, care, and affection. The dynamics of these relationships are inherently two-sided, and they cannot follow strict guidelines, as each case requires adaptation to individual differences. Nurse robots are seen as simple task finishers which, unlike their human counterparts, have no problem repeating the same tasks over and over again; however, robots are not yet capable of such adaptive emotional understanding (Locsin & Ito, 2018). On the other hand, the benefits of robot nurses cannot be ignored: they keep human nurses out of health risks, and a robot cannot steal, call in sick, or act violently if it is not programmed to do so. Trials of nurse robots with people in need of care showed perceived improvements due to interaction with the robots. In these studies, the robot nurses were consistently present and reliable; they checked on the patients regularly and provided emotional support through simple facial expressions that appeared on their screens (Wada et al., 2002).

As the case of nursing robots shows, robots are, to this date, still limited in their abilities to interact (Fischinger et al., 2013). Thus, although people might in general be willing to accept robot companionship, they do not accept the moral responsibilities that come with reciprocal relationships such as friendship. For example, in a relationship with a robot, a malfunctioning robot can be replaced without any harm to the human counterpart, something with no parallel in human-human relationships. Yet, for the most part, such situations will not pose a problem for classifying these relationships as friendship alternatives. In cases where one believes the robot to be so simple that it does not merit moral consideration at all, forming empathy toward that robot would be very unlikely, in which case there would be no grounds for a friendship that could cause a moral dilemma. Yet, so far in this paper, we have provided evidence of humans forming bonds with robots that go beyond the bonds formed with inanimate beings, even to the extent that people act in ways observed in human-human interactions. Above, we briefly discussed how humans go as far as feeling a reciprocated relation, explaining that these bonds might even be referred to as a form of friendship. These bonds might even account for the benefits gained from being accompanied by robots.

Nonetheless, the fact that this felt reciprocity is not grounded in reality can spark ethical concerns. It can be argued that these gains are only the consequences of deceiving humans into thinking that they can establish personal relationships with robots whose feelings cannot literally be reciprocated, which could be considered deception or delusion. Some argue that such a failure to comprehend reality is a moral issue (Turkle, 2011). However, Coeckelbergh adds that healthy people are informed and aware that virtual agents and virtual worlds are not real (2012). Even from childhood, thanks to theory-of-mind abilities, humans can act as if something is real while remaining aware that it is not. For this reason, the question, as Severson and Carlson state, comes down to the difference between 'as-if' and 'as' (2010). We believe it is reasonable to assume that the majority of humans are conscious of the fact that robots cannot be real social others. However, such knowledge may not govern the subconscious as it does conscious thought; according to Sparrow and Sparrow, the subconscious is in play during human-robot interactions just as it is in human-human interactions (2006). It is hard to reach an explicit conclusion from this perspective; thus we will likely benefit more from looking at the types of deception and their possible consequences.

We can also consider the instances of daily life in which we face situations that require accepting some deception. A clear example is entertainment: we have to suspend our disbelief in order to enjoy various types of media, such as video games, fiction books, or movies. In doing so, we use an intuition pump, ignoring the lesser details of hard-to-follow things and focusing on what is important (Dennett, 1980). Here, the important thing is our entertainment, and the fact that the work is fiction and therefore "not real" is overlooked. This demonstrates that we are not strangers to the concept of deception, and the same can apply to relationships. If humans are able to gain what they want from the relationship, it is very possible that they will overlook the robot aspect of their companion.

Humans letting deception into their lives does not end with entertainment or the pursuit of pleasure. Humans also act deceptively on a regular basis, to the extent that we find it acceptable to deceive in some social situations: hiding emotions to protect another's feelings, maintaining harmony within society, and other motives that might serve an ulterior good (Gnepp & Hess, 1986). Because of this, viewing deception as an inherently bad concept would not be fair, since we judge robot-human relations within the same social contexts. As long as robots are there to enhance human well-being, deception, where it exists, should again be acceptable.

One might point to stricter positions, such as that of Kantians, who would not approve of any deception in their lives and so would not welcome this concept of deception either. While the deception argument accounts for the mainstream population, it does not satisfy those with strict Kantian moral guides. However, these relationships are still defensible when looked at from a virtue perspective. We have already argued that robots are virtuous beings by creation, so a relationship between them and us would be even more valuable than an ordinary relationship, in which the virtue of the parties cannot reach such a predetermined level. Therefore, even from a Kantian perspective, being friends with someone who is inherently virtuous would encourage the other party to pursue valuable and worthy ends (Jeske, 1997). Such a friendship would also be the most impartial, as robots, devoid of human irrationality, would be the most objective companions one could ever have.

The possible issues do not end with deception. Social robots are designed to serve humans, and social robotics therefore aims to enable them to engage with humans as much as possible. This situation, if not handled carefully, might be detrimental for humans despite all the positive outcomes expected. One possible problem concerns the information shared with artificial agents: people will voluntarily or involuntarily share parts of their lives, as these robots are meant to interact with humans daily. Persons or companies will be active in this process, and the degree to which they have access to the information collected by these robots raises ethical concerns. These concerns should be carefully examined and answered by the social robotics community, and data protection arrangements would help to overcome such problems. Another issue is human vulnerability to manipulation. What if these artificial agents start to advertise a product in conversation while one thinks one is having a normal conversation with a friend? Even more concerning, there could be cases where something one has developed emotional attachment to demands significant amounts of money so that it is not taken away, or so that it does not abandon its owner. How destructive that would be, especially for those who require special care, like children and ill people. It is important to raise these concerns about manipulation and to request extended legal protection for the users of this technology. Careful examination and legal enforcement would prevent the issues mentioned above from occurring.

In this paper, we have established that humans are likely to develop feelings for robots through mechanisms such as anthropomorphism. We have then presented documented cases of robot-human relationships being beneficial in the current world. Whether these relationships are made possible only through deception or delusion has been a controversial topic, as any misconception of reality raises ethical concerns. Despite this, a thorough evaluation of the benefits and possible problems makes it clear that the benefits outweigh the costs if well managed. Moreover, the problems discussed regarding emotional safety, economic safety, and data privacy can be solved with mindful consideration. Therefore, it is for the better that robots continue to be a part of human life.


[1] Philia is one of the four Greek words used to describe love. In Books VIII and IX of the Nicomachean Ethics, Aristotle gives examples of philia: young lovers (1156b2), lifelong friends (1156b12), cities with one another (1157a26), political or business contacts (1158a28), parents and children (1158b20), fellow-voyagers and fellow-soldiers (1159b28), members of the same religious society (1160a19) or of the same tribe (1161b14), and a cobbler and the person who buys from him (1163b35).

[2] The social brain hypothesis was proposed by the British anthropologist Robin Dunbar to explain the large brain size of humans relative to their body size. Dunbar suggests that the cognitive demands of sociality place a constraint on the number of individuals that can be maintained in a coherent group, and that human brains evolved to manage our unusually complex social systems (Dunbar, 2009).

References

Aristotle, Thomson, J. A. K., & Tredennick, H. (2004). The Nicomachean Ethics. London: Penguin.

Asselborn, T., Johal, W., & Dillenbourg, P. (2017). Keep on moving! Exploring anthropomorphic effects of motion during idle moments. 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN).

Berryman, S. (2003). Ancient automata and mechanical explanation. Phronesis, 48(4), 344-369.

Biocca, F., Harms, C., & Burgoon, J. K. (2003). Toward a more robust theory and measure of social presence. Presence, 12, 456–480.

Broadbent, E., Stafford, R., & MacDonald, B. (2009). Acceptance of healthcare robots for the older population: Review and future directions. International Journal of Social Robotics, 1(4), 319–330.

Carpenter, J. (2013). The quiet professional: An investigation of U.S. military explosive ordnance disposal personnel interactions with everyday field robots (Doctoral dissertation). University of Washington.

Coeckelbergh, M. (2012). Care robots, virtual virtue, and the best possible life. In The good life in a technological age (pp. 299-310). Routledge.

Damiano, L., & Dumouchel, P. (2018). Anthropomorphism in human–robot co-evolution. Frontiers in Psychology, 9.

Darling, K., Nandy, P., & Breazeal, C. (2015). Empathic concern and the effect of stories in human-robot interaction. 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN).

Davis, M. H. (1996). Empathy: A social psychological approach. Boulder, CO: Westview Press.

Dennett, D. (1980). The milk of human intentionality. Behavioral and Brain Sciences, 3(3), 428-430.

Dunbar, R. I. (1998). The social brain hypothesis. Evolutionary Anthropology: Issues, News, and Reviews, 6(5), 178-190.

Dunbar, R. (2009). The social brain hypothesis and its implications for social evolution. Annals of Human Biology, 36(5), 562–572.

Engel, A., Burke, M., Fiehler, K., Bien, S., & Rösler, F. (2008). How moving objects become animated: The human mirror neuron system assimilates non-biological movement patterns. Social Neuroscience, 3(3-4), 368-387.

Fischinger, D., Einramhof, P., Wohlkinger, W., Papoutsakis, K., Mayer, P., Panek, P., Koertner, T., Hofmann, S., Argyros, A., Vincze, M., Weiss, A., & Gisinger, C. (2013). Hobbit: The mutual care robot (workshop paper). International Conference on Intelligent Robots and Systems, Tokyo, Japan.

Gnepp, J., & Hess, D. L. (1986). Children’s understanding of verbal and facial display rules. Developmental Psychology, 22(1), 103–108. doi:10.1037/0012-1649.22.1.103

de Graaf, M. M., Allouch, S. B., & van Dijk, J. A. (2016). Long-term evaluation of a social robot in real homes. Interaction Studies, 17(3), 461–490.

Ihara, T. (2015). Androids invade Japan’s service industry. Nikkei Asian Review. https://asia.nikkei.com/Business/Technology/Androids-invade-Japan-s-service-industry. Date of access: 20 November 2019.

Automaton. (2003). In Merriam-Webster’s collegiate dictionary (11th ed.). Springfield, MA: Merriam-Webster.

Jeske, D. (1997). Friendship, virtue, and impartiality. Philosophy and Phenomenological Research, 57(1), 51-72.

Jung, Y. and Lee, K. M. (2004). Effects of physical embodiment on social presence of social robots. Presence 2004: The Seventh International Workshop on Presence: Spain.

Koschate, M., Potter, R., Bremner, P., & Levine, M. (2016, March). Overcoming the uncanny valley: Displays of emotions reduce the uncanniness of humanlike robots. In The Eleventh ACM/IEEE International Conference on Human Robot Interaction, 359-365.

Kozima, H., Nakagawa, C., & Yasuda, Y. (2005). Interactive robots for communication-care: A case-study in autism therapy. In ROMAN 2005. IEEE International Workshop on Robot and Human Interactive Communication, 2005. 341-346.

Lammer, L., Huber, A., Weiss, A., & Vincze, M. (2014). Mutual care: How older adults react when they should help their care robot. AISB Workshop on New Frontiers in Human-Robot Interaction, London, UK.

Locsin, R. C., & Ito, H. (2018). Can humanoid nurse robots replace human nurses? Journal of Nursing, 5(1), 1. 

Mitchell, S. D. (2005). Anthropomorphism and cross-species modeling. Thinking with Animals, eds L. Daston and G. Mitman, Columbia University Press: New York, 100–118.

Miyamoto, E., Lee, M., Fujii, H., & Okada, M. (2005). How can robots facilitate social interaction of children with autism?: Possible implications for educational environments. In Proceedings of the 5th International Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems. 145–146.

Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [From the field]. IEEE Robotics & Automation Magazine, 19(2), 98–100.

Robins, B., Dickerson, P., Stribling, P., & Dautenhahn, K. (2004). Robot-mediated joint attention in children with autism: A case study in robot-human interaction. Interaction Studies, 5(2), 161-198.

Scassellati, B., Admoni, H., & Matarić, M. (2012). Robots for use in autism research. Annual Review of Biomedical Engineering, 14, 275-294.

Severson, R. L., & Carlson, S. M. (2010). Behaving as or behaving as if? Children’s conceptions of personified robots and the emergence of a new ontological category. Neural Networks, 23, 1099–1103.

Sherman, N. (1987). Aristotle on friendship and the shared life. Philosophy and Phenomenological Research, 47(4), 589–613.

Sparrow, R., & Sparrow, L. (2006). In the hands of machines? The future of aged care. Minds and Machines, 16(2), 141–161.

Truitt, E. (2010). The Garden of Earthly Delights: Mahaut of Artois and the Automata at Hesdin. Medieval Feminist Forum, 46, 74. 

Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. New York: Basic Books.

Ürgen, B. A., Plank, M., Ishiguro, H., Poizner, H., & Saygin, A. P. (2013). EEG theta and Mu oscillations during perception of human and robot actions. Frontiers in Neurorobotics, 7, 19-32.

Urquiza-Haas, E. G., & Kotrschal, K. (2015). The mind behind anthropomorphic thinking: Attribution of mental states to other species. Animal Behaviour, 109, 167–176.

Volkmar, F. R., Lord, C., Bailey, A., Schultz, R. T., & Klin, A. (2004). Autism and pervasive developmental disorders. Journal of Child Psychology and Psychiatry, 45(1), 135-170.

Wada, K., & Shibata, T. (2007). Living with seal robots: Its sociopsychological and physiological influences on the elderly at a care house. IEEE Transactions on Robotics, 23(5), 972–980.

Wada, K., Shibata, T., Saito, T., & Tanie, K. (2002). Robot-assisted activity for elderly people and nurses at a day service center. Proceedings of the 2002 IEEE International Conference on Robotics and Automation, 1780–1788.

Wainer, J., Dautenhahn, K., Robins, B., & Amirabdollahian, F. (2010). Collaborating with Kaspar: Using an autonomous humanoid robot to foster cooperative dyadic play among children with autism. 2010 10th IEEE-RAS International Conference on Humanoid Robots.

Zlotowski, J. A., Sumioka, H., Nishio, S., Glas, D. F., Bartneck, C., & Ishiguro, H. (2015). Persistence of the uncanny valley: The influence of repeated interactions and a robot’s attitude on its perception. Frontiers in Psychology, 6, 883.