
Nature / Technology

Sebastian Gieselmann

Development of a robotic system to design and evaluate Dynamic Background Cues

ISBN: 978-3-95425-246-6


Price: EUR 59.50 (free shipping within Germany)




Product type: Book
Publisher:
disserta Verlag
Imprint of Bedey & Thoms Media GmbH
Hermannstal 119 k, D-22119 Hamburg
E-mail: info@diplomica.de
Publication date: September 2013
Edition: 1st
Pages: 272
Illustrations: 66
Language: English
Binding: Paperback

Contents

When predicting the technological development of assistive systems in everyday environments, it may be assumed that the future home will contain more electronic devices than it does today. We already have a great deal of technology, and even robots, in our homes: vacuum cleaner robots, for example, clean floors fully autonomously. People are becoming more accustomed to robots, and even the elderly are learning to interact naturally with the seal robot Paro. As robots are developed to address more complex tasks, e.g. serving as social interfaces or housekeeping, it becomes more reasonable to design human-like robots, since a human-like shape makes the interaction more natural. But it is exactly this human-like shape that often evokes fear and discomfort: when a robot becomes too human-like, or too powerful and intelligent, people, especially in the Western world, become scared. This vision of what robots could be or do in the future can be seen in literature and movies, where authors express the fear that robots could become dangerous for humans, as in the ‘Terminator’ movies or Isaac Asimov’s ‘I, Robot’. This leads people to form an incorrect view of what ‘real’ humanoid robots are like. In order to reduce these fears, it has become important to convince people that robots are friendly and that it is pleasant to interact with them. This book therefore aims to evaluate the effects of a robot's social behavior on the human observer. Is it possible to make a robot appear more social and less frightening just by changing the way it moves? Is it possible to improve the performance of a cooperative human-robot task by using human-like cues? The research is based on a structured investigation of the influence of several movement patterns, such as breathing, blinking and natural movements. Furthermore, an extensive interaction study was conducted in which biophysiological measurements were used to evaluate the reaction of the human body to the robot's whole-body behavior.

Reading Sample

Text Sample: Chapter 2.2.2, The Mindlessness Approach: Besides anthropomorphism, there is another approach that tries to explain why humans react to machines in a social way. Nass et al. [115][127] postulate that humans mindlessly react socially to non-living objects. They disagree with the anthropomorphism approach, arguing that the theory cannot explain the observed effect. In their studies they found that humans mindlessly apply social rules and expectations learned during HHI to computers. One explanation for this might be that humans intuitively react to input they receive from the outside world, independently of whether it was conveyed by a human or a computer. Furthermore, this reaction is produced subconsciously and is thereby an automatic process that responds very quickly. Their main argument against anthropomorphism is that humans know they are interacting with a machine. When people anthropomorphise, they ascribe human traits to an object. According to Nass, this is not possible here, because people know that the object is only a machine. Instead, people use a kind of hard-wired set of social rules that is triggered when certain stimuli appear. One of the main experiments in [115] examines how humans evaluate the performance of a computer. In the first condition, participants answered a questionnaire on the same computer they were judging; in the other condition, the questionnaire was filled out on a computer different from the one they were evaluating. In the first condition the rating of the computer was significantly better than in the second. This supports the idea that humans show politeness towards a computer when they rate it ‘face-to-face’, as they normally do when rating a human in a face-to-face situation.
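The two-condition comparison described above can be sketched as a toy analysis. All ratings below are invented for illustration only; they are not data from Nass et al.'s experiment:

```python
# Hypothetical sketch of the politeness experiment's two-condition
# comparison: ratings entered on the evaluated computer itself
# ("face-to-face") vs. on a different computer. Numbers are invented.
from statistics import mean

same_computer = [8, 9, 7, 8, 9, 8]       # condition 1: rated on the same machine
different_computer = [6, 7, 5, 6, 7, 6]  # condition 2: rated on another machine

def mean_difference(a, b):
    """Difference of mean ratings between two conditions."""
    return mean(a) - mean(b)

# A positive difference reproduces the reported direction of the effect:
# ratings given "face-to-face" are more polite, i.e. higher.
print(f"mean rating difference: {mean_difference(same_computer, different_computer):.2f}")
```

In the actual study the difference was tested for statistical significance; this sketch only illustrates the direction of the comparison.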
An alternative explanation of this effect is given by Dennett [40], who says that individuals frame the interaction with a computer as an interaction with the programmer and thereby address their reactions to ‘the human behind the machine’. Hoffmann et al. [75] extended this research, tested the same effect on an embodied conversational agent, and observed the same effects. Another study [116] associated with the mindlessness approach showed that the similarity-attraction hypothesis is also applicable to computers. The authors argue that people prefer interactions with other people who have similar personalities, and they showed that this is also true for computers.

2.2.3, Form and Function: Independently of whether anthropomorphism or the mindlessness approach is preferred, it is widely accepted that humans react socially to robots and that this effect can be strengthened when adequate features are used. In the following, it is explained how these features should be designed to improve interaction. Especially in the field of product design, there are many investigations into how objects have to be designed to make them usable and more attractive to consumers. The main argument used in design is that form should suggest function [41]. For example, the spoon seen in Figure 2.3 has a human-like shape and therefore feet. The existence of feet suggests that this product can stand upright. If humans see a feature that reminds them of a function, especially an anthropomorphic feature, they associate a certain function with it. Designers use this effect to guide users through an object's features and improve its usability. Another example is the signs used in robotics. If a robot has some kind of eyes, a human observer will assume that the robot is able to see and to recognize its surroundings. If the robot has facial features such as a mouth and eyebrows, people will expect that the robot can, for example, display emotions.
This is the reason why, when designing robots, it is important to pay attention to the features that are presented. In other words, features that evoke expectations that cannot be fulfilled should simply not be there [82]. Furthermore, one should make sure that signs are not ambiguous and do not refer to different functions at the same time. If, for example, a robot is not moving, this could mean that the robot is turned off, has a malfunction, or is simply not moving at the moment. A human standing in front of such a robot would not know whether it is possible to interact with it, because the signs evoke conflicting or inaccurate expectations [146]. Nowak et al. [118] formulated this fulfilment problem more dramatically. They tested virtual avatars that differed in their level of anthropomorphism but had the same competences and functions. They found that during interaction participants were less engaged when facing very human-like avatars. The authors suggest that the cause might be the high expectations participants have towards the avatar, which are then not fulfilled. An extensive study was conducted by Duffy, who postulated that ‘there should be a strong correlation between the robot's capabilities and its form and vice versa’ ([43] p. 185). He also collected some interesting strategies to cope with this problem. For example, he proposed the use of natural motion, as Disney did [148], to give characters a personality. Additionally, he suggested the use of ‘social communication conventions in function and form’ ([43] p. 185) and avoiding the Uncanny Valley. To achieve this, the robot should use communication patterns adapted from humans and be as human-like as possible without entering the Uncanny Valley, for example by using a more artificial design for a humanoid robot. Function is not all that is ascribed when appraising the form of a robot. Goetz et al. [59] found that people also associate tasks with the appearance of a robot.
For example, human-like robots are preferred for social tasks such as a museum tour guide, while machine-like robots are more easily visualized as security guards, and robots that look more serious are considered more suitable for serious jobs such as a personal fitness trainer. This makes it clear that it is important to design a robot in a way that fits the task it will carry out; it would be inappropriate, for instance, to use a 2-meter-tall robot to educate children. The appearance of a robot is very important in guiding a human user. If these features are used well, they can make the robot more usable and intuitive, thus avoiding frustration from unfulfilled expectations.

2.2.4, Conclusion: To sum up this section, it can be said that robots are perceived very differently depending on nationality, previous knowledge and personality. However, some effects seem to be the same for most humans. Although people are sometimes frightened by robots, most of the time they anthropomorphise them and react to them in a social way. The key point is to find a balance between certain aspects when designing a robot [43]. The robot has to be anthropomorphic enough to be familiar to the human, but not so much as to be uncanny. The level of anthropomorphism can be adjusted by providing different anthropomorphic features that give humans guidance about the capabilities of the robot and reduce uncertainty towards this unknown technology. It is therefore important to produce good-quality features to gain the best effect and avoid misinterpretations. On the other hand, these features should fit the overall function and tasks the robot is to perform. Features, especially anthropomorphic ones, evoke expectations that have to be fulfilled by the robot; if they are not, this may make the robot uncanny or influence the interaction in a negative way. If all this is taken into account, human-like features can be a great opportunity to improve HRI.
Breazeal states [17] that interacting with a social robot requires no additional training, because humans already know how to interact in a social way. The features suggested for improvement in this thesis are described in chapter 6. It can be said that communication between humans and robots follows similar rules: in both cases, communication is done by sending and receiving signs that influence the interaction partner. The human sees the robot and starts attributing intentions and traits to the robotic system, just as they would during HHI. Because of this, it is possible to use the same definitions of signals and cues in Human-Robot Interaction. This does not mean that the same signals and cues are used in HRI as in HHI; these issues should be investigated separately, as has already been done for many signs in the fields of emotion, behavior patterns, proxemics and so on.

About the Author

Sebastian Gieselmann began his studies in Applied Informatics in the Natural Sciences (diploma) at the University of Bielefeld in 2003. Early on, he focused on robotics and developed a special interest in human-robot interaction. After writing his diploma thesis on augmented reality in 2008, he got the opportunity to continue his studies in the field of social robotics. In 2013, the author finished his PhD thesis in the hybrid society group, which is part of the CoR-Lab research group at the University of Bielefeld.

