PRE2020 3 Group12
Robot Appearance
Group Members
Name | Student ID |
---|---|
Bart Bronsgeest | 1370871 |
Mihail Tifrea | 1317415 |
Marco Pleket | 1295713 |
Robert Scholman | 1317989 |
Jeroen Sies | 0947953 |
Planning
Week 2 | Finish literature study + prepare list of topics to be included in the wiki |
Week 3 | Define our goals for simulating human behavior |
Week 4 | Prepare footage for the questionnaire (Bart; Jeroen, Marco & Robert; Marco; Mihail) |
Week 5 | Send out questionnaire (Everyone; Jeroen, Marco & Robert; Robert; Bart; Mihail; Marco; Jeroen; Jeroen & Robert) |
Week 6 | Wiki filled as much as possible + start on data analysis/results |
Week 7 | All data processed + wiki finished |
Week 8 | Final presentation |
Introduction
Sketch the problem, possibly touch upon possible solutions? Why it is relevant, why we need a solution for it at all. (Marco)
For the longest time in human history, humans have seized every opportunity they could find to automate and make their lives easier. This already started in the classical period with a very famous example: the Romans built large bridge-like waterways, aqueducts, to transport water from outlying areas to Rome automatically, driven by the force of gravity and nature's water cycle. Today, computational science has opened the way for AI and robotics, and with them many new opportunities for automation. Think of all the robots used in production lines to do the same programmed task over and over again, of image classification with deep learning networks (often outperforming humans), and, more recently, of the combined robotics and AI challenge of self-driving cars!
Slowly but surely, robots can take over repetitive, task-specific jobs and do them more efficiently than their human counterparts. Currently, however, such jobs are very concrete. For robots to become more versatile, they simply need to become more like us. Robots should be adaptive, able to learn from their mistakes, and able to handle anomalies efficiently; essentially, a robot should have a level of decision-making and freedom equivalent to that of a human being. For example, when a robot is specifically tasked with picking up a football from the ground, it should figure out a way to get the football back when it is accidentally thrown into a tree. Another issue arises in this social situation as well: the robot has to have a certain level of flexibility in its movement, since it should not just be able to move around and pick up a ball, but also to use tools that help it fulfill its "out of the ordinary" task. All in all, one thing becomes very clear: these tasks cannot be hardcoded, they have to be learned.
It stands to reason that building a robot with all the capabilities specified above, a robot suitable for a social environment, requires a humanoid design. This brings many challenges, the most notable of which can be grouped into Electrical/Mechanical Engineering and Computer/Data Science. A human has many joints and muscles for various movements, expressions, and goals. The human face in particular is an incredibly complex design, consisting of 43 muscles, of which about 10 are used to smile and about 30 simply to laugh. All these joints and muscles have to work together perfectly, since every joint's state influences the entire structure of joints and muscles. One huge aspect of this collaboration is our sense of balance, meaning how the robot prevents itself from falling over. At every iteration, the robot has to check the state of its balance and decide which joints and muscles need to be given a task for the next iteration. Additionally, there is the Computer Science challenge of mapping such an unpredictable environment to binary code interpretable by the robot. Every signal from its surroundings has to be carefully processed and routed accordingly. The robot has to learn, which brings us to AI and, more specifically, Reinforcement Learning. Reinforcement Learning currently only offers solutions to fairly basic learning problems, where the rules are very concretely specified, and it suffers when the environment becomes too broad in terms of data. This is why it is still a very experimental field, and many robot studies include abstract definitions of states and policies for environment data that heavily imply the use of Reinforcement Learning in theory, but lack a practical implementation.
As a consequence, social/humanoid robots are still a widely researched topic. This research has, for example, huge potential in care facilities and other forms of social care for the elderly and people with a disability, or even in the battle against loneliness and depression. Many papers cover how the robot could be taught certain skills with respect to its environment (Reinforcement Learning) and how it would communicate with humans. However, an often overlooked question is whether this robot would be accepted by humans at all. Many studies have found that a robot in the shape of a cute animal (more easily modeled than a humanoid robot, because humans expect less from it and the mimicked behavior is often very repetitive and standard) has a positive impact on its environment. People like engaging with the robot, as some see it as a kind of pet, and it successfully attracts the attention of its audience. But, as we have established, such a robot would be very limited in the tasks it could fulfill, and modeling a humanoid robot is not as easy as modeling the behavior of an animal, because, being humans ourselves, we expect a lot more detail from a humanoid robot. What should such a robot look like to make it appealing instead of scary?
This phenomenon, a robot becoming scary as it approaches human-likeness, is what is referred to as the Uncanny Valley. Humans are able to relate to objects that act and look human in a certain way (generally humanoid), but something strange happens when we approach reality. If a robot or screen-captured character is made to look exactly like a human, we are often scared or disgusted by it, and find it 'uncanny'. It is as if we hate that it tries to fool us into thinking it is human, when the various signs we pick up on as red flags tell us it is not. Because modeling a perfect human is impossible with the current state of the art, designers try to refrain from creating a very human-like robot, even if it has to be humanoid to properly fulfill its tasks. However, in a social situation, a robot has to be appealing enough for us to appreciate, value, and even accept it in our environment. On the other hand, if the robot is too 'cute', we tend not to open up to the robot at all, but rather spend our attention watching it, actively 'awwh'-ing at it, and helping it in any way we can; not very useful when its purpose is to help you, although this can be helpful in battling loneliness.
Since most of this technology is still very experimental, it is important that we already start to define the variables involved in determining this Uncanny Valley with the examples that are already available. Such examples are widely available, not only due to attempts at developing humanoid robots, but also because of the art of recreating a human face in CGI (Computer Generated Imagery). By defining variables for various attributes of a robot's appearance and their effects on humans, or on specific groups such as the elderly, adults, and students, we can help designers avoid the pitfalls that others have already fallen into. Designers can then decide on the appearance of a robot based on what exactly they need the robot to be able to do: a center of appearance studies.
Relevance of robot appearance
In a social context, robots may be subject to judgement from humans based on their appearance (Walters, Syrdal, Dautenhahn et al., 2008: Avoiding the uncanny valley: robot appearance, personality and consistency of behavior in an attention-seeking home scenario for a robot companion). The perceived intelligence of the robot is correlated with its attractiveness, since humans make a 'mental model' of the robot during social interaction and adjust their expectations accordingly:
- “If the appearance and the behavior of the robot are more advanced than the true state of the robot, then people will tend to judge the robot as dishonest as the (social) signals being emitted by the robot, and unconsciously assessed by humans, will be misleading. On the other hand, if the appearance and behavior of the robot are unconsciously signaling that the robot is less attentive, socially or physically capable than it actually is, then humans may misunderstand or not take advantage of the robot to its full abilities.”
It is thus very important to predict and attribute the correct level of attractiveness depending on the intellectual capabilities of a robot. Not only that, but humans attribute different levels of trust and satisfaction when dealing with robots, depending on how much they like them (Li, Rau & Li, 2010: A Cross-cultural Study: Effect of Robot Appearance and Task). Furthermore, an anthropomorphic robot is said to be better suited when highly social tasks are required (Lohse et al., 2007: What can I do for you? Appearance and application of robots), a statement that is not without controversy, as some research did not find any conclusive evidence for this effect (Li, Rau & Li, 2010).
Study objective
Since many regard the uncanny valley as the main pitfall of humanoid robots, it is necessary to figure out how a robot's appearance should change in order to climb out of this valley. The answer to this research question is rather simple, namely: become indistinguishable from humans. The problem with this simple answer is that there is currently no way to quantify similarity in appearance to humans other than measuring the uncanniness of the robot using human participants and surveys. Better ways need to be researched in order to predict the similarity of a robot to a human based on robot features. For this, we intend to come up with a scoring system that can accurately predict where a robot sits on the uncanny valley. We are interested in computing this score using latent variables inferred from our research.
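As a rough illustration (not a result of our research), the sketch below shows how such a scoring system could combine a handful of hypothetical appearance variables into a single score via a weighted sum; the feature names and weights are placeholders that would have to be inferred from questionnaire data.

```python
# Illustrative sketch only: feature names and weights are hypothetical
# placeholders, not values inferred from our research.

def uncanny_score(features: dict, weights: dict) -> float:
    """Combine appearance ratings (0-1 each) into a single weighted score."""
    total_weight = sum(weights.values())
    score = sum(weights[name] * features.get(name, 0.0) for name in weights)
    return score / total_weight  # normalize back to the 0-1 range


# Example ratings a design team might assign to a prototype.
example_features = {
    "facial_realism": 0.8,      # how human-like the face looks
    "movement_fluidity": 0.4,   # how smooth and natural the motion is
    "voice_naturalness": 0.6,   # how human the voice sounds
}
example_weights = {
    "facial_realism": 0.5,
    "movement_fluidity": 0.3,
    "voice_naturalness": 0.2,
}

print(f"Predicted position score: {uncanny_score(example_features, example_weights):.2f}")
```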
Theoretical Exploration
State of the Art
Atlas
The Atlas robot, made by Boston Dynamics with the purpose of accomplishing the tasks necessary for search and rescue missions, has increased in popularity in the public eye especially for its similarity in appearance to a human. As showcased in the YouTube videos released by its manufacturer, the robot accomplishes a set of physical tasks such as jumping, running on uneven terrain, and acrobatic moves such as a handstand and a somersault. The 150 cm tall, 80 kg robot is able to maneuver around obstacles with human-like mobility. In the past couple of years the maneuverability of the robot has increased substantially and supposedly resembles the movements of professional athletes. The robot has 28 hydraulic joints, which it uses to move fluidly and gracefully. These joints are driven by complex algorithms that optimize which joints should be moved, and to what degree, to reach a target state. “First, an optimization algorithm transforms high-level descriptions of each maneuver into dynamically-feasible reference motions. Then Atlas tracks the motions using a model predictive controller that smoothly blends from one maneuver to the next,” Boston Dynamics wrote.
The similarity to the human body is thus only achieved through structure and movement. It is worth mentioning that defining human features such as a round head, a face, and hands are missing from this robot. That is why we estimate that this robot sits right at the left side of the uncanny valley.
Honda ASIMO
The ASIMO (Advanced Step in Innovative Mobility) robot, produced by Honda, was developed to achieve the same physical abilities as humans, especially walking. This humanoid robot, unlike Atlas, comes equipped with hands and a head, and its structure is more like that of a human. ASIMO also has the ability to recognize human postures and gestures, moving objects, sounds, faces, and its surrounding environment. Its surroundings are captured by two cameras located in the "eyes" of the robot, which allows ASIMO to interact with humans. It is capable of following or facing a person talking to it, and of recognizing and responding to gestures such as a handshake, a wave, or pointing. ASIMO can distinguish between different voices and sounds, so it is able to identify a person by their voice and successfully face the person speaking in a conversation. ASIMO's advanced level of interaction with users is the reason why we think this robot will be rated higher than Atlas when it comes down to user preference.
Geminoid DK
The Geminoid DK robot was conceived with the purpose of pushing the state of the art in robotic imitation of humans. By creating a surrogate of himself, its creator was able to achieve remarkably human features that would otherwise be hard to create without a base model. With a facial structure almost indistinguishable from a human one, only movement remains a factor in the uncanniness of this robot. We therefore expect that analyzing this model will lead to some conclusions about human movement in robots. The robot mimics the external appearance and the facial characteristics of the original: its creator, the Danish professor Henrik Scharfe. Apart from the movements of its facial expressions and head, it is not able to move on its own; it is controlled remotely by an operator. The Geminoid DK does not possess any intelligence of its own. Pre-programmed sequences of movements can be executed for subtle motions such as blinking and breathing. Moreover, the operator's speech can be transmitted through the geminoid's computer network to a speaker located inside the robot. At the moment, the Geminoid DK is used to examine how the presence, appearance, behavior, and personality traits of an anthropomorphic robot affect communication with human partners.
Human Perception of others
Human beings interact with each other on a very frequent basis. These interactions can be partially explained using psychological, social-psychological, and behavioral theories that have been developed over the years. It is of utmost importance to understand how exactly humans interact, because many robot developers aim to create robots that are as close to an actual human as possible. Understanding how humans interact with computers and/or technology is not enough; there is an important human factor that needs investigation.
Human Robot Interaction (HRI)
Robotics integrates ideas from information technology with physical embodiment. Robots obviously share the same physical spaces as people, in which they manipulate some of the very same objects. Human-robot interaction therefore often involves references to spaces or objects that are meaningful to both robots and people. In addition, many robots have to interact directly with people while performing their tasks. This raises a lot of questions regarding the 'right way' of interacting.
The United Nations (U.N.), in their most recent robotics survey (U.N. and I.F.R.R., 2002), grouped robots into three major categories, primarily defined through their application domains: industrial robotics, professional service robotics, and personal service robotics. The earliest robots belong to the industrial category, which started in the early 1960s; much later, the professional service robot category started growing, and at a much faster pace than industrial robotics. Both categories manipulate and navigate their physical environments, but professional service robots assist people in the pursuit of their professional goals, largely outside industrial settings. Robots that clean up nuclear waste, for example, belong to that category. The last category promises the most growth: these robots assist or entertain people in domestic settings or in recreational activities.
The shift from industrial to service robotics, and the increase in the number of robots that work in close proximity to people, raise a number of challenges. One of them is the fact that these robots share the same physical space with people, who could be professionals trained to operate robots, but also children, the elderly, or people with disabilities whose ability to adapt to robotic technology may be limited.
One of the key factors in solving these challenges is autonomy. Industrial robots are mostly designed to do routine tasks over and over again and are therefore easily programmed. Sometimes they require environmental modifications, for example a special paint on the floor that helps them navigate properly. This is no problem in the industrial sector, but for service robots such modifications are not always possible, which requires the robots to have a higher level of autonomy. In addition, these robots tend to be targeted towards low-cost markets, which makes it much more difficult to endow them with autonomy. For example, the robot dog shown in the figure is equipped with a low-resolution CCD camera and an onboard computer whose processing power lags behind most professional service robots by orders of magnitude.
Godspeed and RoSAS Scale
Godspeed Scale
Because the aim is to determine how a robot is rated on a set of items, and how movement alters these ratings, it is of critical importance to find a scale that measures this accurately. Luckily, many different scales exist for this purpose. One such scale is the Godspeed scale, developed by Bartneck, C. et al. (2009). This scale aims to provide a way to assess how different robots score on the following dimensions: anthropomorphism (1), animacy (2), likeability (3), perceived intelligence (4) and perceived safety (5). Each of these dimensions has a set of associated items on which participants can rate the robot in question. Each item is presented in a semantic differential format. Examples of these items are: artificial-lifelike, mechanical-organic, unkind-kind, irresponsible-responsible and agitated-calm. Participants are asked to evaluate to what extent each of these items applies to the robot using a 5-point Likert scale (Bartneck, 2009).
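As a minimal sketch of how such ratings could be aggregated, the snippet below averages the 5-point item ratings per dimension, assuming a simple mean per dimension; only the example items named above are included, so the item lists are incomplete and purely illustrative.

```python
# Incomplete, illustrative item lists: only the example items mentioned in the
# text are included here, not the full Godspeed questionnaire.
GODSPEED_EXAMPLE_ITEMS = {
    "anthropomorphism": ["artificial-lifelike"],
    "animacy": ["mechanical-organic"],
    "likeability": ["unkind-kind"],
    "perceived intelligence": ["irresponsible-responsible"],
    "perceived safety": ["agitated-calm"],
}

def dimension_scores(ratings: dict) -> dict:
    """Average the 1-5 ratings of each dimension's items."""
    scores = {}
    for dimension, items in GODSPEED_EXAMPLE_ITEMS.items():
        answered = [ratings[item] for item in items if item in ratings]
        scores[dimension] = sum(answered) / len(answered) if answered else None
    return scores

# One participant's (made-up) responses on the semantic differential items.
responses = {
    "artificial-lifelike": 2,
    "mechanical-organic": 3,
    "unkind-kind": 4,
    "irresponsible-responsible": 4,
    "agitated-calm": 5,
}
print(dimension_scores(responses))
```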
Godspeed Scale Critiques
Even though this is a widely used tool for assessing robot appearance, there has been some critique. Some scholars have argued that there are quite a few issues regarding this scale, and that it needs further investigation. Here we will go over two main issues with the Godspeed scale, as discussed by Carpinella (2017), among others.
First of all, sometimes the items do not load onto the dimensions as expected. For example, according to the Godspeed scale the item fake-natural is supposed to load onto the anthropomorphism dimension, but research has shown that this is not always the case. This means that the fake-natural item might not contribute much to the score of the anthropomorphism dimension, and that it could be statistically useless. On the other hand, sometimes items load onto factors that they are not supposed to load onto. The item inert-interactive loading onto the perceived intelligence dimension is an example of such a situation: according to the Godspeed scale it is not supposed to be there, yet research tells us that it might be (Carpinella, 2017). Situations like these suggest that the dimensions and their corresponding items might need further investigation and/or tweaking.
Secondly, the scale uses a semantic differential response format, which means that the items used to rate the robots have two extremes, instead of just being one word. This makes sense when an item uses antonyms as endpoints, like unpleasant-pleasant, yet in some situations this is not the case (Chin-Chang & MacDorman, 2010). It can be argued, for example, that the item awful-nice contains two endpoints that are not direct antonyms of each other. This might be evidence that the Godspeed scale is not as exact as one might think.
RoSAS: Robot Social Attribute Scale
Building on these critiques, Carpinella et al. (2017) developed a new scale called the Robot Social Attribute Scale, or RoSAS for short. This scale is based on extensive research and has its roots in the Godspeed scale. There are three main differences between the Godspeed scale and the RoSAS.
Firstly, the RoSAS tests robot appearance on three dimensions instead of five: warmth, discomfort and competence. These three dimensions are derived from the Godspeed scale, but should be more reliable and robust (Carpinella et al., 2017). Each dimension is measured using six items, which can be seen in table 1:
Secondly, instead of presenting the items in a semantic differential response format, the items in the RoSAS are simply single words. Participants are asked to what extent they associate these words with the robot.
Lastly, the RoSAS scale has a total of 18 items, whereas the Godspeed scale has 24.
Method
All participants will be asked to fill out the questionnaire. In this questionnaire, subjects will see an image of a robot and a video of the same robot moving, for three different robots. These robots fall into three different categories: slightly humanlike, humanlike and extremely humanlike. The three categories will be illustrated by the following robots: Atlas (Boston Dynamics), Honda ASIMO (Honda), and Geminoid DK (Aalborg University, Osaka University, Kokoro & ATR) // Sophia (Hanson Robotics). To ensure that confounding variables like habituation to the robot have a minimal effect on participant judgement, the order of presentation is counterbalanced. This means that half of all participants will first see the image and then the video (normal order), while the other half will first see the video and then the image (reversed order). The order in which the different robots are presented is randomized. For a visual representation, please refer to table 1.
Subject responses will be measured using the Robotic Social Attributes Scale (RoSAS), developed by Carpinella, M. et al. (2017). Each robot will be rated on its warmth, competence and discomfort. Each of these three factors contains six items on which the robot will be evaluated.
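The snippet below sketches how the counterbalancing and randomization described above could be set up when assigning participants; it is a minimal illustration under the assumptions stated in this section (half image-first, half video-first, robot order shuffled per participant), not the actual questionnaire tooling.

```python
import random

ROBOTS = ["Atlas", "Honda ASIMO", "Geminoid DK"]

def assign_condition(participant_index, seed=None):
    """Alternate image-first / video-first across participants and shuffle robot order."""
    rng = random.Random(seed)
    robot_order = ROBOTS[:]
    rng.shuffle(robot_order)  # randomized robot order per participant
    # Counterbalancing: even-indexed participants get the normal order,
    # odd-indexed participants get the reversed order.
    presentation = "image-then-video" if participant_index % 2 == 0 else "video-then-image"
    return {"robots": robot_order, "presentation": presentation}

# Example: conditions for the first four participants.
for i in range(4):
    print(i, assign_condition(i, seed=i))
```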
Enterprise
Introduction
In the past years, researchers, movie makers and robot designers have tried to get closer to the 'manufactured human'. But, as we know, moving up in human-likeness does not automatically mean making progress; as we try to make a robot or other entity imitate a human, we fall victim to our own technological limits. Especially in movie-making, designers can model, texture and rig characters to share the same joints as the actor, have them follow the actor's movements, and as a result get a realistic character in the movie. One of the best examples of this is probably Avengers: Infinity War, a movie based around a character that does not even exist, named Thanos. The performance of the actor (Josh Brolin) was perfectly captured by the computer-generated model of Thanos, and even though this character is not real, people could understand him and, all in all, take him seriously.
However, even with the seemingly endless possibilities of movie-making, not all attempts are successful. A very good example of this, which shares many aspects with Thanos (namely a full CGI character that people needed to relate to), is the movie Cats. It is a movie that tried to merge a cat and a human, or rather a humanoid figure, into one. In many scenes of the movie, the highly accurate human face morphed onto the cat body is often quite scary, especially from some specific camera angles. This is also why the trailer was received very badly, and ultimately why the movie failed with the public, resulting in a loss of several tens of millions of dollars. Another good example is Sonic the Hedgehog (2020), where the filmmakers went back after lots of criticism to remodel the CGI model of Sonic and make him more appealing. The biggest change was the size of Sonic's eyes, but also the general cartoonishness that the character conveys. Apart from models that do not exist in the real world, there have also been various attempts at deepfakes, often used to bring an actor who has passed away 'back to life', for example in the recent final episode of The Mandalorian. These deepfakes are very realistic remakes of human beings, but the viewer can nearly always tell that something is off. Sometimes the mouth does not move naturally enough, the eyes have a strange glance, the body does not make enough small movements, or the lighting is off. There could be many reasons for the model not being perfect, but a viewer can almost always notice. Our brains know that they are being fooled.
So if even movie-makers cannot always model a fixed set of movements on a computer-generated model that we perceive as fully realistic, how could robot designers possibly overcome this challenge? It is clear that the Uncanny Valley is an often unexpected result, and since reaching the right side of this valley is clearly a challenge, it stands to reason that other methods of staying out of it altogether are very viable. It is important that different parties do not fall into the same pitfalls as their predecessors, even if they are no experts on the subject.
Users, society and enterprise
Users, society and enterprise: give a general explanation of the user, come up with concrete users and describe their problems. Then, without using our technology ourselves, show how those people could benefit from our solution. (Marco)
The following user groups are involved with the appearance of a robot and would benefit from a consultancy in robot appearance:
Researchers
Researchers will benefit from such a consultancy because it lets them specify their goals more carefully. Since they generally push the state of the art, and therefore approach human-like robots in their research, a clear view of what can make a robot fall into the Uncanny Valley helps researchers focus on what prevents their robots from falling into it. If the goal is better defined, it is easier to reach.
Clementine, 34 and a researcher at TU/e, has made amazing progress with her research! She has successfully found a way for a robot to learn from its environment, much like humans do when they are young. She has created a successful reward and policy system, by which the robot rates all actions it can take and potential better alternatives. Additionally, the robot has implemented balance, and can effortlessly align all its joints for smooth movement. However, the robot can only do practical tasks and has no way of reading humans, nor the ability to match certain emotions in a social group. The robot would only be required to hold basic conversations with humans and is mainly meant for its practical use, but it does need to be accepted and trusted in a social circle. While not complex, trying to make the robot look human in order to be accepted among humans opens the risk of falling into the Uncanny Valley, and therefore a different approach has to be considered. Clementine finds it hard to determine what she could implement in terms of facial expressions and face design to have communication go smoothly, while also being relatable or even cute enough to be accepted from a human perspective. To finalize her research, however, she cannot overthink it, and Clementine is forced to finish her initial design, hoping the Uncanny Valley is completely avoided.
Designers
Designers will benefit from the various variables that are defined in the research on which the consultancy will be based. Generally staying on the near side of the Uncanny Valley, they have more freedom in what their robot will look like and could, with our help, easily select the variables that matter based on the tasks their robot will be designed to fulfill. Designers can design their robots for their intended clients without risking finalizing a design that is unappealing to this audience.
Rory, 43 years old, is the lead designer in a team that develops a home buddy. Meant to fight loneliness, this robot does not require many practical abilities, but it does need a lot of information about humans, their emotions, and human language. Rory has managed to use image classification, along with already developed AI for speech recognition, to recognize human emotions and, in a way, understand what a person is saying. However, for humans to relate to robots on an emotional level, robots need a specific kind of human-ness and cuteness, without frightening the owner. Avoiding the Uncanny Valley is hard, and several prototypes will have to be made and judged by his team and others before the product can go on the market. After a tiring performance test, in which it is specifically tested how well the robot is received in social circles, he can finally decide which of the robots can go on the market. Once the right design has been picked, Rory wants to mass-produce it by building production lines that will automatically assemble the robot, but for now, he will have to design every single prototype separately.
Clients
Clients will no longer have to deal with uncanny robots designed to help them. This ensures that no clients will have to fear or feel uneasy around an experimental social robot, which is in their best interest, given the costs and manpower currently required to care for them that can now be taken over by robots. With a robot that is perfectly designed in terms of appearance, issues that cannot be dealt with very well in the present day, such as loneliness and depression, can now be battled using these care robots.
Henry, 87 years old, has become mobility impaired over the years. Ever since his wife died, he loved walking over to his daughter and grandchildren, who live a few blocks away. But sadly, visiting them has become harder and harder, and he more often than not chooses not to go out, but rather stay at home. Recently, his grandchildren started studying in the city, and his daughter's recent career development demands a lot of her time. Henry is very proud of his children and grandchildren, but struggles more and more to take care of himself and his house. He also misses visiting his relatives, and even though old friends of his still come over every once in a while, being alone with only the tv gets tiring very quickly. He has thought about getting a pet, but Henry is afraid he could not take care of it as well as he would hope. Because he is still fit enough to live on his own, the local care home could not help him much further; the best they could do is send a volunteer every once in a while to check up on him. Life does not get Henry down very fast, but even he is starting to realize that these last years have been getting lonelier. All Henry wishes for right now is a buddy who could stay with him for a while.
Social volunteers
Social volunteers will benefit greatly from this consultancy as a consequence of the state of the art being pushed forward. When care robots for certain, more concrete situations can be developed sooner using well-defined appearance variables, they can take over 'smaller' tasks from a social volunteer. Social volunteers often really like doing their work, but with the current shortage it can take a lot of time, which is something not all volunteers have much of to offer. Using care robots, social volunteers can spend their time elsewhere, maybe even by engaging with the people they care for even more than before.
Implementation details
Implementation details: explain how the company could operate. Later, the results of the research will be added here as well. (Jeroen)
We want to create an online platform on which users can receive feedback on the competence and likeability of a robot. There is a lot of data available on this topic at the moment, for example on how the appearance of a robot affects humans' reactions towards it. The combined effect of appearance and actual movement, however, is much harder to determine. On our platform we offer two types of services. Users can get assistance with the design of their robot: they will receive feedback during their design process, with multiple interaction points with our data analysts. Users can also request a final analysis when the product is finished. The specific terms can then be negotiated, but it could be done in the form of them sending a finished design model that we judge, or one of our analysts could travel to the actual robot to see its performance in real life.
Most of the work will be in setting up the platform and gathering the data. For the former we need an intuitive interface. It should be easy for customers to submit a request, and they should be able to share information beforehand that might help with the consultancy. We will support uploading video content along with some easy-to-answer questions, such as the purpose of the robot and the target audience. Then the first appointment can be planned, which will take the form of an online call. During this meeting the initial needs of the client will be assessed and initial feedback will be given on the design. Based on the user's needs, the next actions will be set up to guide them as well as possible.
The data needed to perform such an assessment will be gathered beforehand. User research will be conducted among people of different age groups. In this research we aim to map the likeability of robots with various levels of human-likeness against their way of moving. At first we do not focus on a specific age group yet. We will transform the data gathered into a usable model that we can use to assess robots in general. Later we plan to expand the research to increase accuracy. We would like to add the age group of the end user to our model, as well as the role the robot will fulfill, as we expect both to have a significant influence on how much a robot will be liked.
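A minimal sketch of what such a model could look like, assuming a simple linear relation between likeability and two predictors (level of human-likeness and movement rating); the numbers below are placeholder ratings for illustration only, not data from our questionnaire.

```python
import numpy as np

# Placeholder ratings for illustration only; real values would come from the
# questionnaire (e.g. aggregated RoSAS scores per robot and participant).
humanness   = np.array([0.2, 0.5, 0.9, 0.4, 0.8])  # how human-like the robot looks
movement    = np.array([0.3, 0.6, 0.4, 0.7, 0.9])  # how natural its movement is rated
likeability = np.array([0.4, 0.6, 0.3, 0.7, 0.8])  # reported likeability

# Fit likeability ~ b0 + b1*humanness + b2*movement with ordinary least squares.
X = np.column_stack([np.ones_like(humanness), humanness, movement])
coeffs, *_ = np.linalg.lstsq(X, likeability, rcond=None)

def predict_likeability(h, m):
    """Predict likeability for a new robot given its humanness and movement ratings."""
    return coeffs[0] + coeffs[1] * h + coeffs[2] * m

print("coefficients:", coeffs)
print("prediction for humanness=0.6, movement=0.5:", predict_likeability(0.6, 0.5))
```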
After this initial setup there are a couple of key factors to the whole process. First of all, we need to maintain the platform and employ enough consultants to serve our customers. Secondly, we need publicity: the service we provide is very specific, so it is very important that we are able to reach our target audience. Lastly, we would like to improve the current model. As said above, we would like to expand it to other age groups and specialize it towards different robot functionalities. For this we need more research and more data.
Enterprise structure
Enterprise structure: hierarchy, what kind of company do we need to have? (Robert)
The enterprise itself will consist of just a few people and will mostly be located online. Clients will be able to send in their robot designs, which will be rated and evaluated online. After the robot attractiveness scale has been created it will not quickly become outdated, and therefore we do not need full-time data analysts. We do need one or more employees to maintain the website/platform we will be using, and one or more employees who take care of advertisement. Thus the body of employees will be relatively small.
Cost and profit
Cost and profit: evaluate the costs and profit and explain/discuss why it is a decent model for earning revenue. (Robert)
Since the enterprise will be mostly located online, we can decide against renting or buying an office. There are benefits to having an office, for example a central meeting point, so we will only decide against getting one if the profits do not allow for it. If we do so, we might need to rent computers, which can be located anywhere, for hosting the website and running the evaluations. We should be able to rent only the specific amount of processing power we need, which reduces costs. The evaluation service we offer shall not be expensive, because we assume most anthropomorphic robot manufacturers already have a team dedicated to the design of the robot. We therefore cannot ask too much money, since otherwise they will simply rely solely on that team for design evaluation. However, making little profit is not necessarily bad, since the costs of maintaining the enterprise and performing the evaluations are also very low.
Aspect of the platform used by clients
As said above, the platform offers the user a way to receive feedback on the design of their robot. They will be able to receive guidance on multiple occasions during their design process, which results in a better performing robot. Organisations focussed on the implementation of such robots can stay focussed on these tasks.
Here you can see a mockup for our platform. We chose a clean design and used mostly bright colors to support the ease of the process. The platform will be relatively small, since it doesn't need much more than a way of people signing up to our services. In the future we would like to expand the platform with a user account system, where all the data of a project could be stored so people could easily access it even at later times.
We offer customer service, in case there are any questions or problems, though we assume this won't be needed much. Since during our services we have a lot of interaction moments, we expect to clear up many questions during those meetings. Though in case customers do feel confused, we want to provide an easy way to state their problems.
When users would like to utilize our services, they can initiate that on the platform. We require them to fill in a couple of questions:
- What is the role your robot is going to fulfill?
- What is the target audience of your robot?
In addition, we allow them to add more information that might help us in our judgement; they could for example upload images, videos, or even a complete Blender or similar project. We then contact them afterwards to set up a first meeting. The initial information, as said, is used to make a first estimate of the situation and the consultancy the customer might need. In the first meeting this is elaborated further together with the customer. The process will then vary depending on the needs and requests of the customer: they might only need some small advice, or much more help during their whole design process. During this first meeting a price agreement will also be made according to the customer's needs.
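As a sketch of how such an intake request could be represented internally on the platform (the field names and example values are our own assumptions, not a finalized specification):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class IntakeRequest:
    """A customer's initial submission, used to prepare the first meeting."""
    robot_role: str        # answer to "What is the role your robot is going to fulfill?"
    target_audience: str   # answer to "What is the target audience of your robot?"
    attachments: List[str] = field(default_factory=list)  # images, videos, Blender projects, ...
    notes: Optional[str] = None  # any extra information the customer wants to share

# Example (hypothetical) submission.
request = IntakeRequest(
    robot_role="home buddy for elderly users",
    target_audience="people aged 70+",
    attachments=["prototype_front.jpg", "walking_test.mp4"],
)
print(request)
```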
For the final product analysis, the total process will vary even more. We could visit the robot in real life for the most accurate advice, or we could try to consult to the best of our abilities using the digital information we receive from the customer. As with the first service, the price depends on this as well. We expect that this service will be the most attractive for our customers, since it only requires them to upload their project and fill in some questions to get a response. It also will not take much work on our part to complete such a review, so it will be relatively cheap. However, we do not guarantee the accuracy of the consult, as we are limited in the information on which to base our judgement.
Funding campaign
To successfully start up the enterprise we would need to get some funding from other organizations. Organizations that might want to fund the enterprise are stakeholders that would benefit from the service the enterprise offers, like a robotics manufacturer or a care home that uses robots. This funding would mostly be used to hire data analysts to develop the robot attractiveness scale and to advertise the company. These are necessary for the enterprise until it is able to manage and survive on its own. A campaign plan shows which steps would benefit and help start up the enterprise; we have therefore laid out such a campaign plan below.
Step 1: Approach to market. How do you intend to go about selling?
Since our service (evaluation of robot attractiveness) is quite niche, we have to specifically target large companies. Any large-scale advertising should be restricted to the robotics industry. A good way to get the attention of companies would be to ask for a visit so we can pitch the benefits of using our service.
Step 2: What are the expected business profits in the next 5 years?
In the first 5 years the enterprise would return little revenue, since the attractiveness scale has to be created and we have to come to the attention of companies. We can offer little in these 5 years; all income has to be spent on advertisement and hiring data scientists. At the end of the 5 years, however, the enterprise should be able to provide more and better services, so we can promote the enterprise as an investment in the future.
Step 3: What are the relevant skills and qualifications you have that enable you to go into this particular business?
Robot attractiveness has its basis in human psychology, since beauty is in the eye of the beholder and the beholders are human. Specifically, knowing which traits or aspects of robot and human appearance trigger which emotions is essential. It allows us to evaluate each of these aspects individually and to create a scale on which influential traits of human or robot appearance are ranked. Our team has members who are proficient and knowledgeable in creating this scale. Moreover, we have a few members who are experienced in processing data and finding patterns in it. This should also benefit the quality of the end result.
Step 4: What resources would you need to start-up the enterprise and run it consistently for at least one year?
First and foremost, we would need the resources that allow us to reach potential customers and show them the enterprise exists. Specifically, this entails contacting and visiting companies and pitching the benefits of using the service our enterprise offers. Beforehand we do need some results we can show, so these need to be prepared. These results could very well be evaluations of the appearance of already existing robots. We can back the results of the evaluation with findings from research and thus validate our evaluation. Moreover, if we consider the possibility of physical consultation sessions, we would need an office. However, we can postpone this until we have properly started up the enterprise. Another resource we need for the enterprise to run consistently is an online platform on which we can perform evaluations and on which companies can find information about the enterprise and contact us. Lastly, depending on how we create the scale that evaluates robot appearance, we might need some more funds to pay participants who take part in experiments or fill in extensive surveys.
Papers and summaries
Bart Bronsgeest
Bartneck, C. et al. (2009). My Robotic Doppelgänger - A Critical Look at the Uncanny Valley
Breazeal, C. & Scassellati, B. (2002). Robots That Imitate Humans.
Pantic, M. et al. (2007). Human Computing and Machine Understanding of Human Behavior: A Survey.
Adams, B. et al. (2000). Humanoid Robots: A New Kind of Tool.
Mihail Tifrea
Marco Pleket
Robert Scholman
S. Wong. (2017). The Uncanny Valley Effect: Implications on Robotics and A.I. Development
Jeroen Sies
Bar-Cohen, Y., & Breazeal, C., (2003). Biologically inspired intelligent robots.