
Trust under Financial Risk: a Simulator involving a Robot Advisor
Andreas Eleftheriades

May 3, 2022

Abstract

Robots have become part of human life even if individuals are unaware of it. As decades have proceeded on, the abilities of robots have advanced to unimaginable levels. Furthermore, as robotic abilities change and develop, there is a need to include robots in other aspects of life such as in cars, operating rooms, and schooling systems. Robots also now take the place of humans in dangerous and deadly situations.

This project is designed to get involved in and analyse the developmental process by carrying out HRI operations successfully in different settings. Human factors such as acceptance, reliability, and trust become relevant when exploring the world of robots and technology. However, humans' trusting behaviour is crucial to ensure a successful deployment of robotic technology in different contexts.

This project will revolve around measuring trust based on a financially risky situation. Through the development of an experiment, we are given the opportunity to analyse factors that could impact user's trust perception of robots. Furthermore, exploration of literature review will help create an understanding of the benefits of human-robot interaction.

The plan of this project is to create a web application that will be a trading simulator which will interact with a physical robot (NAO Robot). The system will determine peoples' trust of robots (in connection to financial risk), what level of trust is found in a high or low risk type of situation, and whether there is an opportunity for future exploration of the topic or if changes need to occur to help enhance human robot interaction and trust.

Submitted to Swansea University in fulfilment

of the requirements for the Degree of Bachelor of Science

Department of Computer Science

Swansea University

Declaration

This work has not previously been accepted in substance for any degree and is not being currently submitted for any degree.

Date:  April 27, 2022

Signed: Andreas Eleftheriades

Statement 1

This dissertation is being submitted in partial fulfilment of the requirements for the degree of a BSc in Computer Science.

Date: May 3, 2022

Signed: Andreas Eleftheriades

Statement 2

This dissertation is the result of my own independent work/investigation, except where otherwise stated. Other sources are specifically acknowledged by clear cross referencing to author, work, and pages using the bibliography/references. I understand that failure to do this amounts to plagiarism and will be considered grounds for failure of this dissertation and the degree examination as a whole.

Date: May 3, 2022

Signed: Andreas Eleftheriades

Statement 3

I hereby give consent for my dissertation to be available for photocopying and for inter-library loan, and for the title and summary to be made available to outside organisations.

Date: May 3, 2022

Signed: Andreas Eleftheriades

1. Introduction

1.1 Motivation

1.2 Trust Definition

1.3 Types of Risks

1.4 Aim of Project

1.5 Outline of the Project

1.6 Summary

2. Background

2.1 Related Work

2.2 Trust as a concept

2.3 Factors Affecting Trust During Human Robot Interaction

2.3.1 Demographic Characteristics

2.3.2 Environmental Factors

2.3.3 Automated System and Robot

2.3.4 Layers of Trust

2.4 The Role of Risk During Human-Robot Interaction

2.4.1 Risk Preference

2.5 The Role of Explanation During HRI

2.5.1 Explanation Techniques

2.5.2 Depth of Explanations

2.6 Measuring Trust during HRI

2.7 Web application programming in JavaScript

2.8 Simulators/Programs/Games to measure trust

2.9 Previous Studies Addressing Trust

3. Specification

3.1 Requirement analysis

3.2 Language choices

3.3 Technical difficulties

3.4 Initial design

3.4.1 Trading Simulator

3.4.2 User Statistics

3.4.3 Controls & User Interface

3.4.4 Stocks

3.4.5 Graph Data

3.4.6 Robot Advisor

3.5 Prototype Images of the Trading Simulator

3.6 System architecture

4. Implementations

4.1 Overview of System and Simulator

4.2 Main Simulation Implementation

4.3 User study results

4.3.1 Task

4.3.2 Participants

4.3.3 Procedure

4.3.4 Materials

4.3.5 Data

4.3.6 Results and Discussion

5. Evaluation

5.1 Project Management

5.2 Conclusion

5.3 Future work

Bibliography

Appendix

1. Introduction {#1.-introduction}

1.1 Motivation {#1.1-motivation}

Robots have been around for decades (Hancock et al., 2011), and robotic systems have grown and diversified along with society. However, with this change comes a need to justify robots' abilities and characteristics to the human population (Coeckelbergh, M., 2010). Humans are expected to interact with these robots while maintaining awareness of their surroundings. Humans have an apparent need for control over many different situations; hence, knowing how robots work and being able to successfully supervise their creation allows a clearer understanding of their main functions (Holthausen et al., 2021).

The main purpose of this paper is to discuss the relationship between humans and robots and whether having trust in the abilities of robots can help ensure their success in the real world. Trust is a key component of human-robot interaction (HRI), and trust is only needed, or even exists, if there is risk (Hancock et al., 2011).

1.2 Trust Definition {#1.2-trust-definition}

Trust can be defined as “the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party” (Mayer et al., 1995). This definition clearly emphasises that trust is about having an understanding and expectation of another individual or object that will result in a positive outcome and advantage for all parties within the relationship. However, it is also essential to recognise that there are multiple definitions and types of trust, especially when addressing human-robot interactions. Therefore, in this paper, trust will be defined in the simplest terms: a firm belief in the reliability, truth, ability and strength of robots and the benefits they can provide to society (Holthausen et al., 2021).

Trust is an important concept and should be widely considered when addressing HRI. Having a sense of trust in robots will provide humans with the ability to use robots in a way that is beneficial to humans in both the short term and the long term (Heyer, C., 2010).

For example, robots have the ability to handle toxic substances, deactivate bombs, and lift heavy loads (Hancock et al., 2011). These are all characteristics of robots that help ensure the safety of humans, because without these abilities, humans would be the ones forced to handle dangerous substances, which could be detrimental to the human population as a whole. Therefore, robots are essential in allowing humans to put their safety above dangerous activities, because robots can take the place of the human in each of these harmful situations. Conversely, if humans do not trust machines, they will be forcing each other to address issues that can result in unsuccessful outcomes and, on a larger scale, injury and death (Donaldson, M.S., 2000). Furthermore, robots have largely helped industry enjoy profits and growth, as well as created an entire new industry that focuses on the creation of robots (Li, G., Hou, Y. and Wu, A., 2017). Hence, without trust in HRI, the economy can be negatively impacted, as robots will no longer be able to perform humans' most basic functions (Hancock et al., 2020).

1.3 Types of Risks {#1.3-types-of-risks}

It is important to also address the risks that are involved with robots and the factors that affect trust in machines. Robots are implemented in a variety of different contexts: military, industrial, and in-home assistive robots (Stuck, R.E., 2021). The domain in which risk is involved influences perceptions of risk and risk-taking behaviour. There are three main types of risk applicable to HRI: perceived situational risk, perceived relational risk, and risk-taking tendencies for both human-human and HRI (Stuck, R.E., 2021). Furthermore, many factors, under the umbrella divisions of human, robot, and contextual characteristics, have now been identified as influencing trust in HRI scenarios. There are also many related factors, such as machine-related, robot-related, and environment-related factors, that should all be considered when addressing the risks associated with robotics (Holthausen et al., 2021). These different factors remain understudied in the literature.

1.4 Aim of Project {#1.4-aim-of-project}

The main goal of this paper is to focus on the development of a web application that can address the factors of risk that may be presented by a task a robot performs itself or the factor of risk that may be presented during the human-robot interaction. The paper will specifically draw on a task that will present a situation which will have low and high financial risk with the main goal of creating a software system that maximises efficiency.

1.5 Outline of the Project {#1.5-outline-of-the-project}

Through the design of a software system that enables users to buy and sell stocks while a robot advises them on the stocks, the paper will address the specific effects of risk on trust.
The stocks will be divided into two categories: low risk stocks and high risk stocks, corresponding to low and high financial risk respectively. Furthermore, the robot follows a two-by-two design: in some conditions it provides information and guidance when someone is buying stocks, and in others it provides no information or explanation about the stocks. This design will allow us to determine whether participants are more trusting of the recommendations made by the robot than when the robot provides no guidance.
Therefore, the purpose of this paper is to understand whether, when financial risk is involved at two levels, low and high, people will still be willing to trust robots, or whether they will trust robots differently in these two situations.

1.6 Summary {#1.6-summary}

In Summary, the project aims to:

  1. Focus on the factor of risk that may be presented by a task a robot performs itself or the factor of risk that may be presented during the human-robot interaction.

  2. Draw on a task that will present a situation which will have low and high financial risk with the main goal of creating a system that maximises efficiency.

  3. Understand whether, when financial risk is involved at two different levels, low and high, people will still be willing to trust robots, or whether they will trust robots differently in these two situations.

  4. Analyse previous user studies as well as develop our own user study to determine user trust and willingness to increase their level of trust in technology.

2. Background {#2.-background}

2.1 Related Work {#2.1-related-work}

This section is designed to introduce the concept of trust and its definition, which will be the one we are following when we refer to trust in this paper. Another focus of this section is to discuss the factors that affect human trust in robots and the importance of human-robot interaction. We will also study the impact of risk on human-robot interaction and whether the concept of risk affects and impacts human behaviour and their willingness to trust human-robot interaction. Finally, we will also reflect on other explanations and examples that relate to trust and human interactions with robots.

2.2 Trust as a concept: {#2.2-trust-as-a-concept:}

When discussing trust and human-robot interaction (HRI), it is important to understand that these two concepts are inherently intertwined (Cominelli, L. 2021). It is also essential to remember that trust is only needed if there are risks involved with the processes unfolding in front of the researcher or the consumer (Holthausen et al., 2021). Trust can be defined in a variety of different ways and there are multiple types of trust that exist in society today as well as specifically when addressing the relationship between human-robot interactions. In this paper, as mentioned in the introduction, trust can be defined as “the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party” (Mayer et al., 1995). Through this definition, it can be seen how trust and risk are linked to each other. The definition explains that trust is given to another person, object or element in the hopes that it will perform a required action regardless of the risks that will occur. The definition also states that even if the trustor does not have control over the other party, they will still have faith that the actions will be completed to the best of the other party’s ability. Therefore, it is evident that trust requires an understanding and expectation that another party will provide an outcome that is advantageous for the members within the relationship. In this paper, trust will surround the following main terms:

  • A firm belief in the reliability

  • Truth

  • Ability and strength of robots

  • The benefits robots can provide to society

According to Hoff and Bashir, the concept of trust includes three important components:

  1. When a “trust” relationship is formed, the trustor is the one who is giving trust to another commodity or party. Once the trustor is giving trust, there needs to be a trustee who will accept the trust. This suggests that there are two parties within the relationship, those who give trust and those who accept the trust. Finally, for the concept and action of trust to be successful, there needs to be something at stake for one or both parties. In the case of the Robot Advisor which guides people when buying stocks, the trustor is the individual or group trusting the robot, the robot itself is the trustee who is accepting the trust from the consumer and finally, money or financial assets are what is at stake for the trustor. This is because the consumer is trusting the robot to use their money to buy and sell stocks to make a profit.

  2. The second component of trust suggested by Hoff addresses the concept of incentive. In order to perform the task, the trustee must have some sort of incentive which encourages it to complete its required actions. The term incentive is used very loosely in this case because incentive can vary widely from monetary reward to the desire to help others. The Robot Advisor is an interesting case in terms of incentives. The robot itself does not receive monetary incentives but rather it is designed to offer explanations, guidance and advice to those who are utilising the robot to trade stocks. However, the designer and investors of the robot could potentially receive monetary incentives as they can design the robot to take a percentage of what it earns from the trading it does on behalf of other people.

  3. The final component of trust is that there is a possibility that the trustee will be unable to perform the task or will fail to complete it and thus, there will be uncertainty and risk involved in the actions performed by the trustee. The Robot Advisor has many risks associated with it. Firstly, people are trusting the robot with their money and they are trusting its ability to trade stocks. The risk associated with this includes the robot trading stocks which could result in a loss. Therefore, the trustor is trusting the robot to make good decisions but keeping in mind that the robot can trade and make an error which can result in their financial assets decreasing (Hoff, K. A., & Bashir, M., 2015).

2.3 Factors Affecting Trust During Human Robot Interaction: {#2.3-factors-affecting-trust-during-human-robot-interaction:}

When studying and observing the relationship between robots and humans, it is important to note that there are other factors that can affect the trust within that relationship. Although trust is defined as the attitude that a trustee will perform the actions required by a trustor in a situation characterised by uncertainty and risk (Lee & See, 2004), many factors can affect the trustor, the trustee, and the action as a whole. Hoff and Bashir, as well as other studies, determined through analysis that the main sources of variability in human-robot interaction are:

  • The human operator

  • The environment

  • The automated system

2.3.1 Demographic Characteristics {#2.3.1-demographic-characteristics}

When addressing the human operator and the human factors that affect trust during HRI, abilities and demographic characteristics are the main components which influence the relationship between robots and humans. In a meta-analysis completed by P. A. Hancock, it was determined that human abilities were represented by performance, expertise, or engagement in a specific task (Hancock et al., 2020). These three components were used to determine the user’s competency and skill when performing a task. The demographic characteristics related to identification of self, which included one’s age, race, and gender. A person's abilities and demographic characteristics can largely influence whether they are capable of completing a task. Someone who is able-bodied can potentially complete a task differently from someone who experiences disabilities. Similarly, factors such as education and upbringing can also influence someone’s willingness to trust another party. A person who grew up in a household trusting of technology is more likely to have faith in the abilities of robots than someone who was never exposed to robots throughout their life.

2.3.2 Environmental Factors {#2.3.2-environmental-factors}

Environmental factors also have the ability to affect human-robot interaction and trust. Environmental factors are those based on concerns such as team collaboration and task variables (Hancock et al., 2020). When addressing team collaboration, the variables included the make-up of the team and the power dynamics that existed within the team. The task variables consisted of the demands of the task and the ability of the group to engage in the task at a specific moment in time. Therefore, it is evident that the environment can affect HRI through team dynamics and relationships, the time required to complete the task, and the difficulty level of the task provided to the participants. For example, if a task is difficult, researchers can expect the participants to need an extended period of time to complete it.

2.3.3 Automated System and Robot {#2.3.3-automated-system-and-robot}

The automated system and the robot itself are also factors that create variability in human-robot interaction. In terms of robot factors, the two main characteristics analysed were performance and attributes. The ability of the robot to perform a task, and the extent to which it can perform that task, are essential in determining whether the robot will be successful. Performance factors can be separated into two groups: the reliability rates when the robot performs an action and the failure rates when the robot performs an action. When addressing the attributes of the robot, researchers include factors such as anthropomorphism and the physical appearance of the robot. It is important to note that these aspects do not differ from task to task; they remain constant, so that the robot itself does not have variability within the software even though its actions may differ. A key term here is anthropomorphism, which can be defined as the integration of human traits, emotions and intentions into the software of a non-human entity, to help the entity understand human interaction better.

2.3.4 Layers of Trust {#2.3.4-layers-of-trust}

The above three factors which affect trust, the human operator, the environment and the automated system, also reflect the three layers of trust. The three layers of trust can be identified as dispositional trust, situational trust, and learned trust (Hoff, K. and Bashir, M., 2013). Dispositional trust refers to an individual’s overall tendency to trust automation, independent of context or a specific system. Therefore, this form of trust is more general, referring to automation in broad terms and one’s willingness to trust automation. Situational trust is more specific and depends largely on the specific context of the interaction. The environment has the ability to affect situational trust because, should one’s environment be negative, their interaction with the robot is also likely to be one that offers no benefits. Furthermore, in a context-dependent situation, the operator’s mental state can also alter situational trust. Therefore, situational trust varies greatly depending on the situation in which the interaction occurred or will occur. The third type of trust is learned trust, which represents the ability of the human operator to analyse and evaluate the automated system based on their past experiences relevant to it. Learned trust is closely related to situational trust in that it is guided by past experience (Marsh & Dibben, 2003). The main difference between situational trust and learned trust is that situational trust depends on experiences with the environment, whereas learned trust depends on experiences with the automated system.

To assist with better facilitation of appropriate forms of trust, designers can provide users with information relating to the factors which affect trust. This will help make users aware of what could potentially cause the relationship to fail, as well as the factors which can lead automated systems to be unable to produce desirable outcomes. To promote greater trust and discourage automation disuse, designers should consider increasing an automated system’s degree of anthropomorphism, transparency, politeness, and ease of use (Hoff & Bashir, 2015). This draws on the observation that humans are more trusting of non-humans who have mannerisms similar to their own. It has been shown that humans are more likely to trust humans than non-humans; hence, giving non-humans human characteristics is likely to increase the chances of humans trusting them. Future research directions have been suggested for each trust layer based on gaps, trends, and patterns found in existing research (Hoff & Bashir, 2015). There are elements of these trust layers which have not been studied in depth, and more factors need to be evaluated in order to determine their effect on a user’s ability to trust an automated system.

2.4 The Role of Risk During Human-Robot Interaction: {#2.4-the-role-of-risk-during-human-robot-interaction:}

Whenever a choice is being made, there is risk associated with that specific choice as well as subsequent choices. Risk can be defined as the potential for loss or damage due to the existence of a threat which exploits a vulnerability (Jerman-Blažič, B., 2008). To quantify the risk associated with an action or decision, we need to know two things:

  1. What outcomes are possible and the probability that each outcome will occur (Jerman-Blažič, B., 2008). A person, when making a decision, has to decide for themselves whether the benefits associated with the choice outweighs the risks associated with the same choice. Whenever a choice is being made, an individual will only take the risk if their feeling of trust outweighs their perception of the risk. This suggests that should a person have more trust in the actions of the other party, they are more likely to allow that party to complete its task regardless of the risks associated with it.

  2. Variability is important when addressing risk and trust because it informs us about the level of risk associated with a situation. Even if two actions have the same expected value, they may not be equally attractive: if one alternative has more variability, then it is riskier.

Mayer et al. designed a model which helped explain and elaborate on the relationship between trust and risk and how the two are inherently intertwined. Through their research, they were able to determine that there are two different subtypes of perceived risk, namely relational risk and situational risk. Relational risk can be defined as an individual's perception of the risk associated with a cooperative alliance with the automated system. Relational risk also addresses the probability and consequences of not having satisfactory cooperation between the human operator and the automated system (Liu, Y., Li, Y., 2008). This type of risk suggests that there exists a dynamic between the human operator and the automated system which is based on an alliance between the two parties, and cooperation is necessary for a successful relationship. Situational risks are the risks inherent to the situation or the proximate cause of error. This measures the amount of danger an individual perceives in the environment around them (Situational Risks, 2021). This suggests that situational risks are related to the environment in which the risk is being taken and the external factors which can affect human-robot interaction. Mayer et al. were also able to identify risk-taking propensity as an attribute of the trustor which largely and directly influences their ability to trust.

The environment and the domain in which risk is involved have the ability to influence one’s perception of risk and whether one is willing to participate in risk-taking behaviour. Nine primary domains of risk have previously been identified in the literature (Stuck, R.E., 2021):

  • Financial (money impact)

  • Performance (reliability)

  • Physical (damage, harm, impact to health)

  • Psychological (identity effect, sadness, anxiety)

  • Social (how others will view them)

  • Time loss

  • Ethical (moral beliefs and values)

  • Privacy (personal information)

  • Security risk (crime, threat, safety)

Each of these nine domains relates to the previously mentioned types of risk: perceived situational risk, perceived relational risk, and risk-taking tendencies for both human-human and HRI. Perceived situational risk gives the researcher an understanding of how an individual perceives the task that the robot is going to perform. The risk perceived influences the level of trust the individual will have when the robot is completing its required actions. Perceived relational risk describes the perceived risk of interacting with the specific automated system. If an individual views the robot as being capable of completing tasks, they are more likely to trust the robot and hence will perceive a lower risk when using it. However, if they view the robot as untrustworthy, they will perceive a higher risk when using it. Therefore, risk taking is domain-specific, so it is essential to understand what risks are relevant to the task or situation, as well as an individual’s risk-taking propensity for that context, which will influence their likelihood of taking a risk (Stuck, R.E., 2021).

2.4.1 Risk Preference {#2.4.1-risk-preference}

It is also important to note individuals' preferences towards risk. A risk-neutral individual is not concerned with risk and will simply prefer the option with the highest expected value, regardless of risk. A risk-averse individual dislikes risk and would rather choose the outcome that can guarantee a positive result. A risk-loving individual likes risk and would rather choose a risky situation than get the same outcome with certainty (Stuck, R.E., 2021).

2.5 The Role of Explanation During HRI: {#2.5-the-role-of-explanation-during-hri:}

Robots and their abilities are often foreign concepts for many people, and hence there is room for misunderstandings between what is expected and what can actually occur when a robot completes its required action. This disconnect leads to a reduction of trust by the user and can potentially lead to people being unwilling to use the robot (Stuck, R.E., 2021). By designing adequate explanations pertaining to the actions of robots, researchers and designers can ensure that consumers understand the product or entity to its full extent.

An important topic to understand is how to generate and present these explanations to both investors and consumers. There is minimal research surrounding how to properly design an explanation that fulfils every requirement, but researchers have determined that there are many levels to an explanation, such as the nature of the explanation, the amount of information that should be present, the use of advanced or persuasive language, and how the argument should be displayed (Driver, R. and Osborne, J., 2000).

2.5.1 Explanation Techniques {#2.5.1-explanation-techniques}

There are a variety of types of explanations, as well as definitions and examples surrounding each type. The first type of explanation mainly focuses on ensuring that the reader or consumer is given information that is desirable to them. This ensures that the consumer gets an explanation that they can understand and that is designed to be appealing to them. However, in doing so, the explanation excludes negative elements associated with the actions being completed and only provides the benefits that could be obtained if one uses the software. In the case of the robot, this type of explanation would only address the benefits of the robot, such as its ability to increase one’s stocks and financial assets, while excluding any discussion of how it could lose money and stocks. Other types of explanations focus on giving a more descriptive explanation or an explanation that includes both advantages and disadvantages.

This discussion will focus on five specific explanation techniques:

  1. Teaching
  • Teaching explanations convey to the human concepts and knowledge that the agent has gathered. The explanation is not necessarily associated with a given decision, but rather presents rules or dialogs.
  2. Introspective Tracing
  • Introspective tracing explanations mainly focus on providing explanations that are based on the examination or observation of one’s own mental and emotional processes, as well as provide just enough information to allow an individual to trace the decision-making process. These explanations can be used to guide investigators in determining responsibility when an action goes wrong or fails.
  3. Introspective Informative
  • Introspective informative explanations provide minimal information but ensure that there is enough information should a discrepancy arise between the trustor and the trustee. This information gives the automated system the ability to convince the human operator whether the system has completed its actions or whether it has failed.
  4. Post-hoc
  • Post-hoc explanations are designed to explain what the decision is without explaining what events or processes occurred in order to reach the presented outcome. This is the type of explanation one often sees when humans interact: humans can easily state their final decision, but it is sometimes a challenge for them to explain why and how they reached it.
  5. Execution
  • Execution explanations are lists which explain each operation the automated system has completed or will complete in the future. There is also an interpretation of these actions which allows the reader to understand the expected outcome of each action. However, a challenge with this type of explanation is language use and data availability. Should the language be too advanced or technical, readers may struggle to understand the main point or outcome of the action performed by the automated system (Sheh, R.K., 2017).

2.5.2 Depth of Explanations {#2.5.2-depth-of-explanations}

Alongside the types of explanations comes the depth of the explanations. Depth can be defined as the perspective of the final decision and the step in the decision-making process at which the explanation should begin. The three depths mainly discussed are attribute-only, attribute-use, and model. Attribute-only explanations mainly consist of information about the attributes the automated system considered when making a decision or completing an action; an in-depth explanation of how the decision was made is not presented. Attribute-use explanations include information that provides an in-depth analysis and knowledge of the decisions made by the automated system. Finally, model explanations provide information about how the automated system was generated. This helps explain how the decision process works and hence allows the reader to understand how the final conclusion was reached.

In terms of the Robot Advisor, the robot is designed to manipulate data under direction, answer questions about the decisions it makes regarding the trading of stocks, and make recommendations on which stocks should be bought and sold. Despite these broad abilities, when a recommendation is made, it can be expected that the user will require a reason or explanation as to why that specific recommendation or decision was made. These explanations are mainly designed to satisfy the user and provide a convincing argument that will encourage further trust between the robot and the user (Sheh, R.K., 2017).

2.6 Measuring Trust during HRI: {#2.6-measuring-trust-during-hri:}

Throughout the years, there have been changes in robotics and hence in the way people view robots and their abilities. There has also been an adjustment in the willingness of humans to trust robots. As mentioned above, many factors influence trust and its development. Therefore, it is essential to understand how trust is measured in order to determine ways to improve it. There are a variety of ways to measure trust, but in many cases trust is subjective and hence is measured through subjective assessments. One way to measure trust is through questionnaires, which is how researchers have mainly measured trust in the past. Another way is to create a reliable scale that can measure whether and how much a person’s trust in robots changes. Finally, a third way to measure trust during HRI is through behavioural tasks.

A challenge with the first two assessments is that neither truly incorporates the full scope of human trust in robotic systems, and hence there is a question regarding the accuracy of results from questionnaires and scales, since trust is dependent on the individual (Schaefer, K.E., 2016). It is important to acknowledge that robots will continue to develop, and hence additional trust measurement tools need to be designed to guarantee accurate readings. In terms of our model, measuring trust may be a challenge since it is mainly computer software. However, one way trust can be measured is by asking people about their willingness to use the robot when making decisions regarding stock, and whether they would be more trusting of a human operator versus an automated system.

2.7 Web application programming in JavaScript {#2.7-web-application-programming-in-javascript}

Many well-known web applications are built using JavaScript, including Facebook, Instagram, Netflix and many more.

React provides many features that help with web application design, such as JSX, a virtual DOM, one-way data binding, component-based architecture and a declarative UI. These features help to create fully working web applications in a shorter time frame, allowing me to go straight to implementing the finer-detailed core elements in the back-end without dealing with most of the front-end graphics. There are also a vast number of tutorials and supportive community members who can help with any problems that need solving. React, a very popular and simple-to-use library, will be used to develop the user interface for the web development part of this thesis.

2.8 Simulators/Programs/Games to measure trust {#2.8-simulators/programs/games-to-measure-trust}

Similar games which measure trust are primarily trust-testing games played between human users in an environment. Many trust games, such as the Trust Game or the Investment Game by Berg et al. (1995), the Dishonest Salesman Game, the Trading Game of Lyons and Mehta, the Gift-Exchange Game, the Lending Game and more (Alós-Ferrer, C. and Farolfi, F., 2019), have helped in the measurement of trust, or at least provided a manner in which we can test for trust. However, it is very difficult to come across instances of games in which a user's trust is measured with a robot or an artificial being involved, showing that this idea is unique. This can be viewed as a disadvantage and may result in challenges, since much of the research will be conducted first-hand, and the simulator will be designed with little background knowledge, as there is nothing with which to make an accurate comparison.

2.9 Previous Studies Addressing Trust {#2.9-previous-studies-addressing-trust}

A few studies completed in the past have also addressed the concept of trust. The first study was done with the goal of understanding what factors influence a user’s willingness to trust technology. The study concluded that a user’s willingness to trust technology depends largely on the type of technology being used, the user, and the task that the technological system will be performing (Xu et al., 2014). The other study was designed to examine trust levels and trust perceptions. Two main types of persuasive technologies were used in that experiment: a health application and an educational game on environmental issues. A questionnaire was designed to measure whether the user displayed trust. The study determined that user trust can change over time. It also found that lack of trust should be the main concern of an individual designing the technology, since trust can be developed (Ahmad, W.N.W., 2018).

3. Specification {#3.-specification}

3.1 Requirement analysis {#3.1-requirement-analysis}

In an attempt to answer this research question, the web application will be designed to run the trading simulator. The system is started from three script files at the command prompt. The first launches the web application in the local browser. The second starts the database API locally. The third starts the robot API script: when the user requests help, the web application sends text commands over the network to the NAO robot. The only hardware needed is a computer capable of running a web browser and accessing the internet. It is more convenient to run the program in a local browser instead of hosting it on a website, because the application is intended for a user study in which the NAO robot is physically present next to the participant during the experiment. Furthermore, the system needs a network connection because it uses MongoDB, a cloud-hosted database system that relies on an internet connection to work.

A NAO robot is going to be used in the experiment, not only because this robot was physically available at the time, but because the NAO robot can help simulate an almost live situation: it is widely accepted as a socially assistive robot, which communicates with users socially rather than physically (Amirova, 2021). The NAO robot in this experiment will simply appear in front of the participant while the web application transmits commands to it. The robot will then verbalise the commands as if it were actually studying the trading market and giving reliable information, simulating a human-robot situation where financial risk and trust are present.
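To make this interaction concrete, the following is a minimal sketch of the web-application side of the setup, assuming a hypothetical local robot API endpoint (`/robot/say` on port 5001); the endpoint path and port are assumptions, not the project's actual code:

```javascript
// Hypothetical helper in the web application: forwards advice text over the
// network to the locally running robot API, which relays it to the NAO
// robot's text-to-speech. Endpoint path and port are assumptions.
async function sayViaRobot(text) {
  const response = await fetch('http://localhost:5001/robot/say', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text }),
  });
  if (!response.ok) {
    throw new Error(`Robot API request failed: ${response.status}`);
  }
}

// Example: triggered when the participant presses 'request advice'.
sayViaRobot('Buy at 300 and sell at 320.').catch(console.error);
```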

3.2 Language choices {#3.2-language-choices}

JavaScript with React as a library is used for the web application development, rather than Python with Django or other alternatives. This is largely because JavaScript is seen as versatile and is the most popular programming language for web development. To keep the whole stack in one language, Node.js will be used for the server-side code. It is said that using React can minimise the complexity of the web application development process (Aggarwal and Verma, 2018).

As a package manager for Node.js (which is used to create a simple web server for the web application), Yarn has been chosen over NPM. Yarn was developed by companies like Facebook and is quite popular, especially in React development. The main reason Yarn is favoured over NPM is that Yarn addresses some of the performance and security shortcomings of working with NPM.

Finally, MongoDB was chosen for storing the participants' trading data. There was no single decisive reason for this choice, but MongoDB pairs nicely with React and is a popular NoSQL database to use when making a web application.
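As an illustration of how a trade record might be stored, here is a sketch of a Mongoose schema; the field names are illustrative assumptions rather than the project's actual schema:

```javascript
// Illustrative Mongoose schema for a participant's trade record.
// All field names are assumptions for illustration only.
const mongoose = require('mongoose');

const tradeSchema = new mongoose.Schema({
  participant: String,      // unique participant name from the settings window
  stock: String,            // e.g. 'Ethereum' or 'Netflix'
  quantity: Number,
  buyPrice: Number,
  sellPrice: Number,        // unset while the trade is still active
  boughtAt: Date,
  soldAt: Date,
  adviceRequested: Boolean, // whether robot advice preceded this trade
});

module.exports = mongoose.model('Trade', tradeSchema);
```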

A list of the technologies used in the program development can be found in the appendix.

3.3 Technical difficulties {#3.3-technical-difficulties}

When programming the trading simulator, one challenge is giving the application an almost real feel, especially in the way the stock price graph looks. Furthermore, when testing the web application, it became clear that despite removing many bugs from the program, there were times when simple problems were not easily solvable just by looking at the code. Therefore, one major technical difficulty was the inability to solve every minute issue within the time frame of the project.

In addition, robot advisors are meant to be able to analyse the stocks and give appropriate trading entries that will help the user when they trade. Collecting accurate information on how the robot should talk, more specifically the word structure and sentences it can use to effectively talk to and convince the participant to buy stocks or follow the robot's advice, is vital. This is mainly because an individual is more likely to trust an automated machine if it has mannerisms similar to humans; therefore, factors such as word and sentence structure play a part in trust. However, it is very complex to get a robot to analyse stock. In this experiment, since we are collecting data, we can guide the robot on what to say. Hence, the technical difficulty is taking the time to guide the robot and design the program meticulously enough that it is capable of collecting data on its own without us inputting data.

Another difficulty is creating appropriate advice commands so that the robot gives the correct output based on the graph data of the stocks. The robot needs to constantly derive its answer from the coordinates of the graph, that is, the price action of the stocks. It is vital that the robot reports correct information in order to gain the user's trust. However, it is a challenge to determine the right wording early on; as the study progresses, we will be able to determine which words users prefer when trading and which are unappealing.

3.4 Initial design {#3.4-initial-design}

The program is a trading simulator that will try to simulate a situation which induces financial risk in a game-like design for the user as well as send commands to the NAO robot to interact with the user.

Crypto & Stocks is an app where you can buy and sell stocks. It starts with a modal box that collects your basic information, the time limit of the session, and the goal balance to reach. There are two categories of stocks, high risk and low risk, each containing five different stocks with different prices. Clicking the trade button in the list opens a graph showing the current stock price along with its history. You can buy and sell stocks there within the given time, and after the time is up, a message indicates whether you reached your goal. In addition, there is a page where users can view their ongoing and completed trades. This section is broken down into subsections which shed light on the important parts of the program.

3.4.1 Trading Simulator {#3.4.1-trading-simulator}

There are three main sections to the trading simulator: the participants' roles, the account data, and the setting configuration. Each section has a unique purpose and criteria which it has to fulfil:

  1. Participant's Roles - There are 3 main roles that a user should follow to progress through the application. These roles form a core part of the user's trading evaluation.

    1. Trading - increasing their trading balance by ‘buying’, ‘holding’ and ‘selling’ stocks, in order to reach the target balance within a given countdown.
    2. Observing - watching the price of the stock (which has high or low risk volatility), the balance they have been given, and the time remaining to reach their desired ‘trading goal’.
    3. Listening - paying attention to the advice given by the robot to determine the course of action likely to result in the best outcome.
  2. Account Data - this will not only help participants keep track of the stocks they have purchased and sold, but will also show how they purchased stocks when interacting with the advisor.

  3. Setting Config - a set-up window appears before the participant can start the experiment, to configure the participant's data such as balance, goal balance, time limit, the type of stocks they will trade with (either high risk or low risk), and whether the robot advisor gives an explanation; a sketch of this configuration follows below.
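As a rough illustration, the configuration collected by the set-up window could be represented as a single object passed to the simulator; the field names here are assumptions:

```javascript
// Illustrative shape of the settings collected before the experiment starts
// (field names are assumptions).
const settings = {
  participantName: 'P01',  // unique identifier for the sample
  balance: 10000,          // starting trading allowance
  goalBalance: 12000,      // target to reach before the timer expires
  timeLimitMinutes: 5,     // between 1 and 10 minutes
  stockRisk: 'high',       // 'high' or 'low' risk stock list
  robotExplains: true,     // whether the robot advisor gives explanations
};
```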

3.4.2 User Statistics {#3.4.2-user-statistics}

In the trading simulator, user statistics are very important for encouraging the user to progress through the program and trade, as is typical of most simulators.

The balance within the user statistics can decrease or increase in a number of ways, for instance by selling a stock at a lower or higher price than it was bought at. If the balance reaches zero, the trader can no longer reach the target balance within the time limit and therefore fails. If the trader reaches or surpasses the goal balance, they may continue trading until the time limit expires.
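The balance update on selling reduces to a simple profit-and-loss calculation; a minimal sketch (an assumed helper, not the project's actual code):

```javascript
// Sketch of the balance update when a participant sells a holding.
function sellStock(balance, quantity, buyPrice, sellPrice) {
  const proceeds = quantity * sellPrice;            // added back to the balance
  const profit = quantity * (sellPrice - buyPrice); // negative on a loss
  return { balance: balance + proceeds, profit };
}

// Example: 10 shares bought at 300 and sold at 320 yield a profit of 200.
console.log(sellStock(5000, 10, 300, 320)); // { balance: 8200, profit: 200 }
```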

The user's account page displays their active and past trades as well as the requests they have made to the robot. All information is displayed clearly so that they can view it and make deductions, as traders can in a normal trading environment. This also allows the trader to determine whether they are making good or poor decisions, and whether the advice from the robot needs to be considered more in-depth or not at all.

3.4.3 Controls & User Interface {#3.4.3-controls-&-user-interface}

In this web-based trading simulator, the user is able to trade and interact with a physical robot. The controls are presented in a simple and clear way, and function optimally. The controls are buttons and hyperlinks which allow the user to interact with the system. The user can move between the dashboard, individual stock trading pages, and account data, as well as configure their settings, making the application as convenient as possible.

For the user interface, the design of the website presents the features and overview of the application in a minimal and clear way. This removes the unnecessary complexity seen on most trading websites and simulators. React will help in producing a clean front end. One essential feature is the design of the stock pages, where the stock graphs demonstrate the movement of each stock's price. By keeping the design simple and presentable, the user will be able to understand how the system works.

3.4.4 Stocks {#3.4.4-stocks}

As mentioned before, this web-based trading simulator will have 5 high risk (volatile) stocks and 5 low risk stocks. Depending on the setting configuration, the two types of risk will influence the participant's behaviour when interacting with the robot advisor and when trading the stocks.

  • High risk stocks: one would logically use assets known for being risky, such as cryptocurrencies like Ethereum, Litecoin, Bitcoin Cash, Ripple and Cardano**. Therefore, these will be used in the program as the high risk stocks.

  • Low risk stocks: the system will use general low risk stocks chosen to show minimal price fluctuation. The specific stocks used are iRobot, Alibaba, Tencent, Roku, and Netflix**.

** The chosen stocks are nothing more than examples. They do not have any particular benefit to the program, but using relatively well-known companies and cryptocurrencies makes the application more realistic.

3.4.5 Graph Data {#3.4.5-graph-data}

The trading simulator's graph data for each stock's price will use static historical data from the nasdaq.com website. The purpose of this is to bring a realistic feel for the traders and provide them with a clear image of what is happening with the stocks.

To visualise each stock's price within the simulator, the system will use React Sparklines to create the graph functions. The graphs will be gradually modified to show, in a live-like way, the stock's price over the participant's trading period.
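A minimal sketch of such a graph component using React Sparklines is shown below; the update interval and the way points are revealed are assumptions about how the 'live' effect could be achieved:

```javascript
// Sketch of a live-updating price graph built on React Sparklines.
// Revealing one additional historical point per second mimics a live feed.
import React, { useEffect, useState } from 'react';
import { Sparklines, SparklinesLine } from 'react-sparklines';

function StockGraph({ history }) {
  const [visible, setVisible] = useState(30); // start with the first 30 points

  useEffect(() => {
    const id = setInterval(
      () => setVisible(v => Math.min(v + 1, history.length)),
      1000
    );
    return () => clearInterval(id);
  }, [history.length]);

  return (
    <Sparklines data={history.slice(0, visible)} limit={60}>
      <SparklinesLine color="steelblue" />
    </Sparklines>
  );
}

export default StockGraph;
```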

3.4.6 Robot Advisor {#3.4.6-robot-advisor}

This is one of the key features of the program. The simulator will interact with, and send stock data to, the robot. The robot advisor needs to offer either a clear and useful explanation or minimal information about the stocks in a coherent and easy-to-understand way. This will help shape the participants' perception of trust for this paper and reveal how human-robot interaction impacts trust when the factor of risk takes precedence.

As stated above, a NAO robot is used to offer advice to the user, which will likely influence their behaviour. A ‘request advice’ button on each stock's trading page is easily accessible to the user. When the participant activates it, the robot advisor communicates key information about the stock, and the user must decide whether it is reliable and whether they want to make the trade.

The information will be based on how far the stock's price has moved over the time the user has been trading that stock. If the user requests advice, the robot will suggest possible entries, such as “buy at 300 and sell at 320”. In addition, if the robot explanation feature is active, the robot will add a further explanation, such as “due to analysis there is a 93% success rate for this trade”. The explanation needs to be worded in a specific way to encourage trust between the user and the robot, as the user is more likely to make the trade if they trust the robot's opinion or input.
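A sketch of how such advice text could be assembled from recent graph points follows; the entry/exit offsets and the fixed confidence figure are illustrative assumptions echoing the example phrases above:

```javascript
// Illustrative construction of the robot advisor's message from recent
// price points (the 5% exit offset and 93% figure are assumptions).
function buildAdvice(recentPrices, withExplanation) {
  const last = recentPrices[recentPrices.length - 1];
  const entry = Math.round(last);       // suggest buying near the current price
  const exit = Math.round(last * 1.05); // and selling roughly 5% higher
  let message = `Buy at ${entry} and sell at ${exit}.`;
  if (withExplanation) {
    message += ' Due to analysis there is a 93% success rate for this trade.';
  }
  return message;
}

console.log(buildAdvice([290, 295, 300], true));
// "Buy at 300 and sell at 315. Due to analysis there is a 93% success rate for this trade."
```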

3.5 Prototype Images of the Trading Simulator {#3.5-prototype-images-of-the-trading-simulator}

Below are images showing the initial idea of what the web application could look like:

Figure 1 (The settings configuration window)

Figure 2 (The dashboard)

Figure 3 (An individual stock’s trading page)

Figure 4 (The Account History page)

3.6 System architecture {#3.6-system-architecture}

The image below shows what the system architecture looks like and how it behaves.

4. Implementations {#4.-implementations}

4.1 Overview of System and Simulator {#4.1-overview-of-system-and-simulator}

The program is a website simulating a trading platform, designed mainly to record participants' actions and behaviour when interacting with a robot advisor that the user can physically see and listen to. The robot advisor's purpose is to simulate a real-life environment where people interact with a robot; the idea is to see how the three areas of trust are affected when people interact with a robot in a situation where financial risk is dominant. The website sends commands to the robot advisor, and the user's data is stored in a cloud-based non-relational database management system. Below is an image of the setup.

4.2 Main Simulation Implementation {#4.2-main-simulation-implementation}

In the next few pages, a simulator will be used to explore 4 scenarios:

1. High Risk with no Robot Explanation

2. Low Risk with no Robot Explanation

3. High Risk with Robot assisting with Explanation

4. Low Risk with Robot assisting with Explanation

The settings configuration window allows one to configure the participant’s setup, under which we will observe and study their participation. The key features are explained below:

a) The participant’s unique name is taken to identify each sample.

b) Time duration (between 1 and 10 minutes) is the time allowed per simulation.

c) Robot explanation determines whether the robot advisor will give an explanation when advice is requested.

d) Balance is the trading allowance the participant will use in the study.

e) Goal balance is the target the participant needs to reach in the given time limit.

f) Stock type determines whether they will be given high risk or low risk stocks to trade with.

The dashboard of the trading simulator displays the user's name, their balance, their goal balance, and the time remaining in the game. Participants can see the list of stocks they can trade, either high risk or low risk depending on the setting configuration they have been given. The participant selects a stock from the list by clicking that stock’s trade button.

The navigation bar at the top has hyperlinks allowing the user to return to the home page (Crypto & Stocks) or view their account trading history by selecting the ‘account’ link.

The timer activates as soon as the participant selects a stock to trade, and it counts down as they trade. This gives the user the opportunity to see how much time they have spent trading, and how much time remains for them to trade.

When the timer reaches zero, a message appears which stops the simulation and displays whether the participant succeeded in reaching the goal balance.
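In React, this countdown behaviour can be captured with a small interval-based hook; the sketch below uses assumed names and is not the project's actual code:

```javascript
// Sketch of a countdown hook: starts on the first stock selection and
// counts down to zero, at which point the simulation shows its end message.
import { useEffect, useState } from 'react';

function useCountdown(totalSeconds, started) {
  const [remaining, setRemaining] = useState(totalSeconds);

  useEffect(() => {
    if (!started) return; // timer only runs once a stock has been selected
    const id = setInterval(() => {
      setRemaining(r => (r > 0 ? r - 1 : 0));
    }, 1000);
    return () => clearInterval(id);
  }, [started]);

  return remaining; // when this reaches 0, the end-of-simulation message appears
}
```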

When a stock has been selected, the participant is taken to that stock's trading page. There the trader can start trading by buying, selling, or holding stock. A simulated graph depicts the stock's price changing, giving the user an almost live experience when viewing the stock's fluctuations, allowing them to buy or sell when they feel it is appropriate. Furthermore, at the bottom of the page, users can see which stocks they have bought and decide whether to sell a particular stock by selecting its sell button.

The trader can also request help from the robot advisor, which will provide verbal assistance/an explanation if this was the option selected on the settings configuration page.

The account history page has been designed to show the participant’s trading history in the simulator. It displays key information such as the stock’s name, the amount of stock purchased, the price the stock was bought at, the price it was sold at, when it was bought, when it was sold, and how much profit was made. If the participant is holding a stock, the account history page shows that the trade is still active so the trader can sell the stock.

It also holds robot assistance requests, displaying which stock the participant asked for help with and at what time. From the number of requests and their timing, we can infer whether the participant trusted the robot advisor and purchased stock based on its advice (i.e. the request’s entry time precedes the trade).
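That check reduces to a timestamp comparison. Below is a minimal sketch, assuming simple record shapes for trades and help requests (the shapes are assumptions, not the project’s schema):

```typescript
// A trade counts as advice-influenced if a help request for the same stock
// was logged before the trade took place. Record shapes are assumptions.
interface HelpRequest {
  stock: string;
  requestedAt: Date;
}

interface Trade {
  stock: string;
  boughtAt: Date;
}

function tradeFollowedAdvice(trade: Trade, requests: HelpRequest[]): boolean {
  return requests.some(
    (r) =>
      r.stock === trade.stock &&
      r.requestedAt.getTime() < trade.boughtAt.getTime()
  );
}
```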

4.3 User study results {#4.3-user-study-results}

The questions are based on the Trust Perception Scale by Schaefer (Schaefer, 2016). That work discusses the Checklist for Trust between People and Automation trust scale and develops a 40-item Trust Perception Scale-HRI together with a 14-item subscale. We will use the 14-item subscale, which focuses on the antecedents and measurable factors of trust specific to the human, robot, and environmental elements. Using this scale allows us to gain a deeper understanding of the relationship between humans and robots, and of whether trust exists in different environments.

This scale is well suited to studying trust between humans and robots, and it is said to benefit future robotic development specific to the interaction between humans and robots. For this reason, it is the instrument used in this experiment to measure trust.

The 14-item subscale can be used to provide rapid trust measurements, specifically for measuring changes in trust over time, and is suitable for repeated assessment across multiple trials or under time restrictions. The subscale is specific to the functional capabilities of the robot, and therefore may not account for changes in trust due to the feature-based antecedents of the robot. The trust score is calculated by first reverse-coding the ‘have errors’, ‘unresponsive’, and ‘malfunction’ items, then summing the 14 item scores and dividing by 14. This gives a value per participant per scenario, which we pass into a single-factor ANOVA test. Additionally, a t-test compares the low-risk scenario with explanations versus without explanations, and a second t-test compares the high-risk scenario with explanations versus without explanations.
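A hedged sketch of this scoring rule follows, assuming each item is rated 0–100 so that reverse coding maps a raw score x to 100 − x:

```typescript
// Trust score for the 14-item subscale: reverse-code the three negatively
// worded items, then average all 14. Items are assumed to be rated 0-100,
// so reverse coding maps a raw score x to 100 - x.
const REVERSE_CODED = new Set(["have errors", "unresponsive", "malfunction"]);

function trustScore(items: Record<string, number>): number {
  const entries = Object.entries(items); // expects exactly 14 items
  const total = entries.reduce(
    (sum, [name, raw]) => sum + (REVERSE_CODED.has(name) ? 100 - raw : raw),
    0
  );
  return total / entries.length; // sum of the 14 scores divided by 14
}
```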

Since we have 4 scenarios, a single-factor ANOVA test will be used, as it finds statistical differences among the means of two or more groups and shows whether the associated population means are significantly different (Morgan et al., 2004).
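For reference, the F statistic that a single-factor ANOVA produces can be computed as in the generic sketch below (the p-value is then read from the F distribution with k − 1 and n − k degrees of freedom); this is a textbook sketch, not the specific tool used for the analysis.

```typescript
// Generic single-factor ANOVA: F = MSB / MSW, where MSB is the between-group
// mean square and MSW the within-group mean square, for k groups and n scores.
function oneWayAnovaF(groups: number[][]): number {
  const all = groups.flat();
  const n = all.length;
  const k = groups.length;
  const grandMean = all.reduce((s, x) => s + x, 0) / n;

  let ssb = 0; // between-group sum of squares
  let ssw = 0; // within-group sum of squares
  for (const g of groups) {
    const mean = g.reduce((s, x) => s + x, 0) / g.length;
    ssb += g.length * (mean - grandMean) ** 2;
    ssw += g.reduce((s, x) => s + (x - mean) ** 2, 0);
  }

  const msb = ssb / (k - 1);
  const msw = ssw / (n - k);
  return msb / msw; // compare against F(k - 1, n - k) to obtain the p-value
}
```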

4.3.1 Task {#4.3.1-task}

There are 4 tasks/scenarios used in this user study:

  1. Low-risk stocks and the robot gives no explanations.

  2. Low-risk stocks and the robot gives explanations.

  3. High-risk stocks and the robot gives no explanations.

  4. High-risk stocks and the robot gives explanations.

Then, for each task, the participants answer the 14 questions based on the Trust Perception Scale by Schaefer.

4.3.2 Participants {#4.3.2-participants}

10 participants were selected; to remain anonymous, the users participating in the forms are represented as Participant 1, Participant 2, …, Participant 10.

4.3.3 Procedure {#4.3.3-procedure}

Each participant watches the intro video and follows the task order; after each video they complete the trust score questionnaire, and at the end they answer the experience questions, which are:

  1. What are your opinions on robots advising on stocks?

  2. Would you trust robots in financially risky situations? Why?

  3. Do you have any thoughts or suggestions you want to add, that came to you about this user study?

4.3.4 Materials {#4.3.4-materials}

This is the Google Form questionnaire used for the study:

https://docs.google.com/forms/d/1NOCC1bSzXVb7kj8CtZl5Ok2uoyD-I9qaeEL88BbJSsU/edit

4.3.5 Data {#4.3.5-data}

**Quantitative questions with the ANOVA and t-test results:**

**ANOVA**

**T-test**

Null Hypothesis: The difference of the group means is zero, suggesting that people are indifferent to the advice of the robot.

Alternative Hypothesis: The difference of the group means is not zero, suggesting that people are influenced by the advice of the robot.

| t-statistic | p-value | Result (α = 0.05) |
| --- | --- | --- |
| 0.47999 | 0.326323 | Not significant (do not reject null) |
| 0.72639 | 0.245582 | Not significant (do not reject null) |

**Qualitative questions:**
  1. What are your opinions on robots advising on stocks?
  • Compared to other robots that only help you through text messages or just a computer speaking, I think it's great to get advice on stocks from a robot that has the ability to move and speak to you in real time. Almost like a real advisor helping you with trades.
  • I think this technology is best suited for high-frequency trading or as a desktop GUI application.
  • I think that the robotic interface is absolutely unnecessary and is an easily implementable application interaction layer for any trading analytics engine.
  • I'm neutral on the situation. I can see the positives and negatives of both sides.
  • Great idea, should definitely be more explored!
  2. Would you trust robots in financially risky situations? Why?
  • I would trust robots if they have been tested properly in real world trading situations. This robot was performing nicely in the videos, however more data should be gathered for any kind of robot in my opinion.
  • No, as if any strategy became wide-spread then another strategy could be created to counteract the popular one.
  • Machines already run the stock market and as such, if the machine is reputable enough to make good predictions with a good track record, there is no reason to not trust it.
  • No, experts on the past and current financial market are continually making mistakes.
  • Maybe not at the moment because you would [want] something tried and tested when it comes to risky situations and finance but with enough testing I cannot see why not
  3. Do you have any thoughts or suggestions you want to add, that came to you about this user study?
  • I noticed that the robot speaks a bit too fast, if you can slow it down a bit it would be great and would feel more natural.

  • The survey is very difficult to comprehend. There are a lot of flaws to fix. There are too many core issues with the structure of the survey for me to enumerate.

  • really impressive, could be turned into something really big and has lots of potential

4.3.6 Results and Discussion {#4.3.6-results-and-discussion}

People’s opinions and feelings towards robots and human-robot interaction are challenging to decipher. Data for this experiment was collected through an online survey sent to individuals at Swansea University. A total of eight participants took the study and shared their thoughts on fourteen different questions, all relating to robot ability, functionality, and effectiveness. Once the surveys were completed, the data was collected in a table, as seen in the data section. There are four tables, each expressing either a low- or high-risk situation and whether or not the robot gave an explanation. As seen in the ANOVA test above, the p-value was greater than the significance level, suggesting that no differences were observed between conditions. In other words, the ANOVA test indicated that whenever a robot is involved in a financially risky environment, trust remains the same: no change in trust occurs despite a difference in the level of risk or in whether the robot explains the data. In addition, t-tests were done to compare low-risk situations with and without explanations, and high-risk situations with and without explanations. Both t-tests indicated that a change in trust was not evident, and hence that explanations were not largely impactful.

There are numerous reasons why the p-values were not significant in both the t-tests and the ANOVA test. Firstly, the study was online, and the results need to be seen in this light: when a study is conducted online, people may not take it as seriously or may not fully understand what is being asked of them. Additionally, the study was lengthy, so towards the end people may not have been willing to participate with full concentration. Furthermore, there were only eight participants, which raises the possibility of bias or an inaccurate representation of a larger population; with so few participants, it is challenging to state that the population in its entirety would share the same opinions as these eight participants. These reasons explain why the study results may have been inaccurate. Should the study be completed on a much larger scale, the data could suggest a completely different outcome.

There appear to be mixed feelings regarding robots advising on stocks. As seen in the qualitative data, some believe it is a good idea that can benefit people who want to trade, whereas others appear hesitant, as there are negatives associated with having a robot advise on what to purchase and when. The main concern of those with negative feelings is simply trust. Individuals struggle to trust technology that has not been previously explored or that has finances involved. Younger generations are more money-conscious (Selingo, 2018), and hence taking a huge risk by trusting an automated system could be stressful, since one would have to trust that the robot has good and accurate knowledge regarding trades. However, the comments make it evident that some participants were aware of the benefits of having a robot that can give advice. The main purpose of the robot is to analyse stocks and provide feedback to the user, ensuring they get access to the best information regarding different stocks. It is clear that the user will need some sort of assurance that the information provided is accurate; given that, they will be more likely to trust it.

Financial risk is further analysed in the second question of the qualitative data section. When asked, the participants gave their opinion on their willingness to trust a robot in a risky situation. Some of the participants said they would trust the robot in a risky situation if it had been tested properly, if it had the knowledge to make accurate decisions, and if it was capable of resisting hacking or disruption from other technology. Other participants were more concerned about errors in the financial market, stating that human experts are continuously making mistakes. This fear is valid because the robot would be coded by humans. However, it is important to note that even though the robot is originally coded by humans, it has the ability to analyse data and figures objectively and without biases. This helps ensure that the information or advice given to the user is based solely on the data of the stock market and not on human emotions or concerns. Additionally, continuous testing and use of the robot will help ensure its ability to perform its required actions to the best of its ability. The robot will be designed in a way that allows change and adaptation as it is used. The most important aspect of the robot is that it analyses the data and information evident in the stock market without humans interfering with its abilities.

The final question of the survey asked for any additional comments or opinions. One comment was that the robot spoke too fast. This is a valid concern, as users would want the robot to sound more natural and be easily understandable. An error like this is easily fixable. When designing a robot, it takes multiple attempts to obtain an appropriate speaking speed and tone of voice. However, through trials like this one, we will be able to determine what elements of the robot could be more appealing to users.

Human and robot interactions are challenging to analyse, but not impossible. Through our study, we were able to establish a foundation for how to design a study, what questions to ask, and how many participants to include. Since the original study was the first of its kind, a lot of thought had to go into it, as we wanted to ensure users could understand and comprehend what was being asked of them. As the data shows, further analysis needs to be done to determine a more accurate relationship regarding whether people’s trust can be changed or improved, especially when dealing with technology.

5. Evaluation {#5.-evaluation}

5.1 Project Management {#5.1-project-management}

In my initial document, I created a Gantt chart with my estimated work schedule for this project. I believe I have stuck to and completed the majority of my original aims for this project to a degree with which I am happy. I was not able to stick to my estimated work schedule because, as I learnt more about programming a web application and implementing it, I realised how much more work it was than initially expected. This meant that some of my original deadlines for completing my aims were moved back. However, I do not believe this impacted my work too much, as I was still able to complete my intended goals. Another factor that pushed back my deadlines was other modules' coursework deadlines. There were times when I had to prioritise other tasks over some of my deadlines.

I completed my aims in the following order. Firstly, I programmed the code to get the website functional with the necessary pages, followed by getting the database to push and pull data to the website, and then the stocks graph. As I came to understand working with APIs, I started programming the robot advisor and sending the necessary messages to the NAO Robot. During this time, I was able to find more issues and bugs in my implementation and gather more values for each of the 10 stocks.

For the user study, I was able to prepare easily, making the necessary videos and completing a user study form, for which I was able to get the necessary number of participants and conclude the results. However, due to COVID-19 it was not possible to have the participants take part in person, which would have allowed more accurate participant data to be collected.

Overall, I am happy with my aims and the work I have done. However, I underestimated how much time was needed for each section in my initial time management for this project. A better approach would have been to do more research into the topic beforehand, before creating a plan, to better understand how long each section and problem would take to complete. I should also have taken into account the coursework deadlines of my other modules when developing the implementation for this project.

5.2 Conclusion {#5.2-conclusion}

In this paper we discussed the relationship between humans and robots and whether having trust in the abilities of robots can help ensure their success in the real world. Trust is a key component of human-robot interaction (HRI), and trust is only needed, or even exists, if there is risk. The definition of trust clearly emphasises that trust is about having an understanding and expectation of another individual or object that will result in a positive outcome and advantage for all parties within the relationship. Therefore, trust is defined as the reliability, truth, ability, and strength of robots and the benefits they can provide to society.

In this paper we used low- and high-risk stocks, with the robot’s advice available when needed. This design allowed us to determine whether participants were more trusting when recommendations were made by the robot than when the robot did not provide any guidance. Our data indicated that there was little change in a person’s willingness to trust the robot regardless of whether there was an explanation. However, despite not finding a relationship in this study, there is still room for the topic to be further explored. Using a larger experimental group, as well as rewording some of the survey questions, may change the results of the study and hence give a different outcome.

The environment of the study also created a challenge for the participants. Due to COVID-19 and remote interactions, the survey was presented in an online form, and there was minimal communication between the participant and the designer of the survey. Therefore, it was slightly challenging for the participants to raise any questions or concerns. Additionally, the study was done on eight university students, which suggests they may not have had the knowledge to understand trading and what it means to trust in a risky situation. This relates back to the concept of population and having an accurate representation of the people who would use this robot when trading in the real world.

Trust in today’s world comes in different forms. People trust each other in a variety of ways and hence, this indicates that there is always a way to earn and lose trust. Despite individuals being more likely to trust other individuals, trusting another human in a situation regarding risk and finances could be more risky than trusting a robot. Robots can be designed in a way that guarantees objectivity rather than subjectivity. When a robot is created, the biases are removed and one can gain accurate information without fear that another party was involved.

This research was both challenging and interesting, as it required a deep analysis of robotics and of human willingness to trust, or develop trust in, the unknown. Through this paper, a software system was designed which enabled users to buy and sell stocks while a robot advised them on the stocks. A deep understanding of financial risk was also developed: both types of risk, high and low, were analysed and examined, and trust was explored in regard to both. Once the project was completed, there was a clear definition of trust and risk in human-robot interactions. However, the results of the user study were inconclusive, and hence future analysis and study designs are needed to further determine whether there is trust in human-robot interactions or whether humans prefer human-human interactions.

5.3 Future work {#5.3-future-work}

In the future, I wish to build on the user study sample data by getting more participants and having them run the trading simulation in person, not via a video. The reason this was not possible this time was COVID-19 and a lack of time, which forced me to make an online survey in which the user study was conducted with videos, getting people to view them and answer questions. If on site, I could observe the users' trading behaviour and trade records as their performance changes while trading with the robot in each scenario. Another improvement for the future is dealing with one more technical difficulty: learning better robot communication commands and mannerisms, which may produce different trust results and user performance. Finally, future studies need to be done using larger populations to represent the potential users of the future.

Bibliography {#bibliography}

  1. Hancock, P.A., Billings, D.R., Schaefer, K.E., Chen, J.Y., De Visser, E.J. and Parasuraman, R., 2011. A meta-analysis of factors affecting trust in human-robot interaction. Human factors, 53(5), pp.517-527.

  2. Hancock, P.A., Kessler, T.T., Kaplan, A.D., Brill, J.C. and Szalma, J.L., 2020. Evolving trust in robots: specification through sequential and comparative meta-analyses. Human factors, p.0018720820922080.

  3. Stuck, R.E., Holthausen, B.E. and Walker, B.N., 2021. The role of risk in human-robot trust. In Trust in Human-Robot Interaction (pp. 179-194). Academic Press.

  4. Mayer, R.C., Davis, J.H. and Schoorman, F.D., 1995. An integrative model of organizational trust. Academy of management review, 20(3), pp.709-734.

  5. Coeckelbergh, M., 2010. Robot rights? Towards a social-relational justification of moral consideration. Ethics and information technology, 12(3), pp.209-221.

  6. Heyer, C., 2010, October. Human-robot interaction and future industrial robotics applications. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 4749-4754). IEEE.

  7. Donaldson, M.S., Corrigan, J.M. and Kohn, L.T. eds., 2000. To err is human: building a safer health system.

  8. Li, G., Hou, Y. and Wu, A., 2017. Fourth Industrial Revolution: technological drivers, impacts and coping methods. Chinese Geographical Science, 27(4), pp.626-637.

  9. Jerman-Blažič, B., 2008. An economic modelling approach to information security risk management. International Journal of Information Management, 28(5), pp.413-422.

  10. Hoff, K.A. and Bashir, M., 2015. Trust in automation: Integrating empirical evidence on factors that influence trust. Human factors, 57(3), pp.407-434. https://doi.org/10.1177/0018720814547570.

  11. Hoff, K. and Bashir, M., 2013. A theoretical model for trust in automated systems. In CHI'13 Extended Abstracts on Human Factors in Computing Systems (pp. 115-120)

  12. Lee, J.D. and See, K.A., 2004. Trust in automation: Designing for appropriate reliance. Human factors, 46(1), pp.50-80.

  13. Sheh, R.K., 2017, October. Different XAI for different HRI. In 2017 AAAI Fall Symposium Series.

  14. Cominelli, L., Feri, F., Garofalo, R., Giannetti, C., Meléndez-Jiménez, M.A., Greco, A., Nardelli, M., Scilingo, E.P. and Kirchkamp, O., 2021. Promises and trust in human–robot interaction. Scientific Reports, 11(1), pp.1-14.

  15. Liu, Y., Li, Y., Tao, L. and Wang, Y., 2008. Relationship stability, trust and relational risk in marketing channels: Evidence from China. Industrial Marketing Management, 37(4), pp.432-446.

  16. Situational Risks (2021). Available at: https://www.nsc.org/workplace/safety-topics/work-to-zero/hazardous-situations/situational-risks (Accessed: 31 October 2021).

  17. Driver, R., Newton, P. and Osborne, J., 2000. Establishing the norms of scientific argumentation in classrooms. Science education, 84(3), pp.287-312.

  18. Schaefer, K.E., 2016. Measuring trust in human robot interactions: Development of the “trust perception scale-HRI”. In Robust Intelligence and Trust in Autonomous Systems (pp. 191-218). Springer, Boston, MA.

  19. Alós-Ferrer, C. and Farolfi, F., 2019. Trust games and beyond. Frontiers in Neuroscience, 13, p.887.

  20. Xu, Jie, Kim Le, Annika Deitermann, and Enid Montague. "How different types of users develop trust in technology: A qualitative analysis of the antecedents of active and passive user trust in a shared technology." Applied ergonomics 45, no. 6 (2014): 1495-1503.

  21. Ahmad, W.N.W. and Ali, N.M., 2018. A user study on trust perception in persuasive technology. International Journal of Business Information Systems, 29(1), pp.4-22.

  22. Amirova, A., Rakhymbayeva, N., Yadollahi, E., Sandygulova, A. and Johal, W., 2021. 10 Years of Human-NAO Interaction Research: A Scoping Review. Frontiers in Robotics and AI, 8.

  23. Aggarwal, S. and Verma, J., 2018. Comparative analysis of MEAN stack and MERN stack. International Journal of Recent Research Aspects, 5(1), pp.127-32.

  25. Selingo, J.J., 2018. The new generation of students: How colleges can recruit, teach, and serve Gen Z.

  26. Morgan, G.A., Leech, N.L., Gloeckner, G.W. and Barrett, K.C., 2004. SPSS for introductory statistics: Use and interpretation. Psychology Press.

Appendix {#appendix}

Key code implementation methods used (a sketch of how the backend pieces fit together follows the lists):

Tech Stack:

Frontend:

● React
● Redux (Saga)
● Reactstrap
● Styled Components
● React hooks
● Use Form
● Container pattern
● Selectors
● React Notifications
● React Sparklines for Graph

Backend:

● Node.js
● Controllers
● Express Router
● Services
● Mongoose
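To illustrate how these backend pieces fit together, below is a hedged sketch of the Express Router → controller → service → Mongoose layering; all identifiers are illustrative assumptions rather than the project’s actual code.

```typescript
// Illustrative router -> controller -> service -> Mongoose layering that
// matches the backend items listed above. All identifiers are assumptions.
import express from "express";
import mongoose from "mongoose";

// Model (Mongoose): one completed or still-active trade.
const Trade = mongoose.model(
  "Trade",
  new mongoose.Schema({
    participant: String,
    stock: String,
    amount: Number,
    buyPrice: Number,
    sellPrice: Number, // absent while the trade is still active
    boughtAt: Date,
    soldAt: Date,
  })
);

// Service: business logic that talks to the model.
const tradeService = {
  history: (participant: string) =>
    Trade.find({ participant }).sort({ boughtAt: 1 }),
};

// Controller: translates HTTP requests into service calls.
const tradeController = {
  async history(req: express.Request, res: express.Response) {
    res.json(await tradeService.history(req.params.participant));
  },
};

// Express Router: wires URLs to controller actions.
export const tradeRouter = express.Router();
tradeRouter.get("/api/trades/:participant", tradeController.history);
```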