PRE2024 3 Group17: Difference between revisions

From Control Systems Technology Group


=== Interviews ===
Conducting interviews was a vital aspect of our research, as they facilitate the collection of qualitative data, which is important for understanding the subjective experiences of users. Collecting qualitative data about user experiences and thoughts allowed us to capture in-depth insights. Presenting participants with the first prototype of our chatbot let them engage with it firsthand, which gave us feedback on usability, credibility and trustworthiness. Furthermore, the interviews provided an opportunity to gather broader user opinions on the chatbot’s interface, tone and structure. The interviews were conducted with two primary objectives: first, exploring users’ subjective experiences with the chatbot; second, assessing users’ subjective opinions about the chatbot’s interface, conversational tone and overall structure.




The interview guide was developed to provide the interviewers with structured yet flexible questions. Since multiple interviewers were involved, it was crucial to use somewhat structured questions to ensure consistency across the collected data, which is why we opted for a semi-structured guide. While the guide provided structured questions, it also left room for participants to elaborate on their thoughts and experiences. The intention was to keep the interviews as natural and open-ended as possible; hence, almost all questions were phrased as open questions.


The interview questions were developed based on key themes that emerged from the analysis of the initial survey. The following topics formed the core structure of the interviews:

* Chatbot experience and usability
* Empathy and conversational style
* Functionality and user needs
* Trust and credibility
* Privacy and data security
* Acceptability and future use


The first topic was introduced after the participant had interacted with the chatbot or read the manuscript, and covered first impressions of the chatbot. Participants were asked what they found easy or difficult, whether they felt comfortable sharing personal feelings and thoughts with the chatbot, and whether they encountered any issues or frustrations while using the chatbot or reading the manuscript. The second topic revolved around empathy and the chatbot’s conversational style. We asked participants whether they felt the chatbot’s responses were empathetic, how the chatbot acknowledged their emotions and whether it did so effectively, how natural the chatbot’s responses seemed, and whether they would change anything about the interaction if possible. Next, participants were asked about the chatbot’s main features, which features might be missing or could be improved, which of the three functions they found most useful, and whether they would add any additional functionality. Participants were then asked about trust and credibility, including privacy and data security concerns. The last topic revolved around acceptability and future use: we asked about potential barriers that could prevent them from using the chatbot and whether they thought it would be a valuable tool for TU/e students.


All interviews followed the same structured format to ensure consistency across sessions. First, an introduction was given in which the research project was briefly presented, the purpose of the interview was explained, and it was once again highlighted that participation was completely voluntary. Basic demographic questions were then asked to understand each participant’s context. Next, participants were given instructions on how to interact with the chatbot or read the manuscript. Afterwards, the core topics were discussed in a flexible, conversational manner, allowing participants to share their thoughts freely. At the end of the interview, participants were asked whether they had any additional insights and were then thanked for their time and contribution.


Each question in the interview guide was carefully designed to align with the overarching research question, and we tried to give each main topic an equal amount of time and depth. By structuring the questions in this manner, we ensured that the collected data directly contributed to answering our research objectives.


Measures taken to maintain ethical integrity included:

* Informed consent – participants were provided with consent forms before the interview, outlining the purpose of the study, their rights and privacy information. By providing the informed consent in advance of the interview, we allowed them to read it in their own time, at their own pace.
* Anonymity – all responses were anonymized to ensure confidentiality. After the research project, all participant data will be removed from personal devices.
* Right to withdraw – participants were reminded before and during the interview that they could withdraw from the study at any point without any repercussions.


== Research ethics / concerns ==
=== Survey ===
[[File:SurveyA.png|left|thumb|531x531px|https://forms.office.com/Pages/DesignPageV2.aspx?subpage=design&id=R_J9zM5gD0qddXBM9g78ZDW0_riAawFCqBP6WIQuqO5UMjNKVk9MSDI4WkoyODJKSUkyN1M1T1ZGUC4u&analysis=false&tab=0]]
=== Interview guide ===
The interview guide includes core questions that ensure consistency across interviews, while allowing flexibility for follow-up questions based on participants' responses.
'''''Scenario a – participant interacts individually with the chatbot'''''
Introduction and consent
* Briefly introduce the research project and purpose of the interview
* Highlight that participation is voluntary, responses are anonymous, participants can stop at any time without giving a reason and that they don’t have to answer a question if they don’t want to.
Participant background
* Can you tell me a bit about yourself? (year of study, program, etc.)
* Do you have any experience with mental health support devices/apps/websites?
* Have you ever used any mental health resources at TU/e?
''Let the user interact with the chatbot for about 10 minutes.''
Chatbot experience and usability
* What was your first impression of the chatbot?
* How easy or difficult was it to use?
* Did you feel comfortable sharing personal feelings/thoughts with the chatbot? Why or why not?
* Did you encounter any issues or frustrations while using the chatbot?
Empathy and conversational style
* How did the chatbot’s responses feel to you – empathetic, robotic, neutral? Can you give an example?
* Did the chatbot acknowledge emotions in a way that felt meaningful to you? If not, what was missing? If yes, in what ways did the chatbot do this?
* How natural did the chatbot’s responses feel to you?
* What could be improved in the way the chatbot communicates?
Functionality and user needs
* What features did you find most useful?
* What features were missing or could be improved?
* The chatbot is designed to serve three main functions:
# Providing self-help coping strategies
# Referring students to professional support
# Supporting students on mental health waiting lists
## Which of these is most important to you? Why?
* What additional functionality would make you more likely to use the chatbot?
Trust and credibility
* Would it help if the chatbot explicitly referenced psychological theories?
** Would you find it useful if the chatbot provided sources or explanations for its advice? Why or why not?
* What are some red flags that would make you doubt the chatbot’s credibility?
Privacy and data security
* Are there any privacy-related concerns that would stop you from using the chatbot entirely?
Acceptability and future use
* Are there any barriers that would prevent you from using it?
* Do you think an AI chatbot like this could be a valuable tool for TU/e students? Why or why not?
Closing
* Is there anything else you’d like to share?
* Thank you for participating!
'''''Scenario b – participant observes a pre-made interaction with the chatbot'''''
Introduction and consent
* Briefly introduce the research project and purpose of the interview
* Highlight that participation is voluntary, responses are anonymous, participants can stop at any time without giving a reason and they don’t have to answer all questions if they feel uncomfortable.
Participant background
* Can you tell me a bit about yourself? (year of study, program, etc.)
* Do you have any experience with mental health support devices/apps/websites?
* Have you ever used any mental health resources at TU/e?
''Show the user the example conversation and give time to read through it.''
Chatbot experience and usability
* Based on the example conversation, what were your first impressions/thoughts of the chatbot?
* If you were the user in the conversation, do you think you would feel comfortable engaging and sharing with the chatbot? Why or why not?
Empathy and conversational style
* Based on the example conversation, do you feel the chatbot responded in an empathetic and supportive way? Why or why not?
* Did the chatbot acknowledge emotions effectively? If not, what could be improved? If yes, in what ways?
* How natural did the chatbot’s responses seem? Did they seem empathetic?
* If you had been the user in this interaction, what would you have wanted the chatbot to say or do differently?
Functionality and user needs
* Based on what you saw, which chatbot features seemed most useful?
* Which features felt missing or could be improved?
* The chatbot is designed to serve three main functions:
# Providing self-help coping strategies
# Referring students to professional support
# Supporting students on mental health waiting lists
## Which of these is most important to you? Why?
* What additional functionality would make you more likely to use the chatbot?
Trust and credibility
* Would it help if the chatbot explicitly referenced psychological theories?
* Would you find it useful if the chatbot provided sources or explanations for its advice? Why or why not?
* What are some red flags that would make you doubt the chatbot’s credibility?
Privacy and data security
* Are there any privacy-related concerns that would stop you from using the chatbot entirely?
Acceptability and future use
* Are there any barriers that would prevent you from using it?
* Do you think an AI chatbot like this could be a valuable tool for TU/e students? Why or why not?
Closing
* Is there anything else you’d like to share?
* Thank you for participating!<br />

Revision as of 20:53, 12 March 2025

Practical Information

Names Student number Mail
Bridget Ariese 1670115 b.ariese@student.tue.nl
Sophie de Swart 2047470 s.a.m.d.swart@student.tue.nl
Mila van Bokhoven 1754238 m.m.v.bokhoven1@student.tue.nl
Marie Bellemakers 1739530 m.a.a.bellemakers@student.tue.nl
Maarten van der Loo 1639439 m.g.a.v.d.loo@student.tue.nl
Bram van der Heijden 1448137 b.v.d.heijden1@student.tue.nl

Logbook

Name Week 1 Week 2 Week 3 Week 4 Total time spent (hours)
Bridget lecture + group meeting+State of the art (6 art) Group meeting + tutor meeting + two user cases Group meeting + tutor meeting+ find participants+ ERB forms+ consent form+ ethical considerations Finalize Intro/Problem/Lit review 6+6+8+8
Sophie Lecture + group meeting + state of the art (3/4 art) + writing the problem statement Group meeting + tutor meeting + specify deliverables and approach + make planning concise Group meeting + tutor meeting + find participants + write interview guide Group/Tutor Meeting + Finalize interview guide 8 + 8
Mila Group meeting + State of the art (3/4 art) + Users, Milestones, Deliverables + task Division Group/Tutor Meeting + Create, improve and send out survey Group/Tutor meeting + find participants + analyse survey results + write continued approach Group/Tutor Meeting + ChatGPT handbook, diagnostics, decision tree 6 + 7 + 9 + 4
Marie Group meeting + State of the art (3/4 art) + Approach Group/tutor meeting + Examine state of the art sources and write corresponding wiki part Group meeting + tutor meeting + find participants + fill out ERB-forms, correspond with ERB, revise ERB- and consent forms + section on chatgpt concerns Group/Tutor Meeting + Finalize Intro/Problem/Lit review 4 + .. + 8
Maarten State of the art (6 art) Research how to train a GPT? + start making one Group meeting + tutor meeting ChatGPT train from techniques 10H
Bram State of the art (3/4 art) + User Requirements Write up problem statement Group meeting + tutor meeting Group/Tutor Meeting + ChatGPT techniques used and literature
Name Week 5 Week 6 Week 7 Week 8 Total time spent
Bridget
Sophie
Mila
Marie
Maarten
Bram

After the tutor meeting on Monday, we’ll meet as a group to go over the tasks for the coming weeks. Based on any feedback or new insights from the meeting, we’ll divide the work in a way that makes sense for everyone. Each week, we’ll check in, see how things are going, and adjust if needed to keep things on track. This way, we make sure the workload is shared fairly and everything gets done on time.


Approach and timeline

Each week, there will be a meeting with the tutors on Monday morning to ask questions and get feedback. Furthermore, we meet as a group before and after each tutor meeting to prepare the meeting, evaluate the feedback and discuss and make a new plan and task division for the upcoming week.

On a weekly basis, we will evaluate what tasks need to be done and assign the tasks according to skills and interests of the group members.

We will start the process by finding a topic, doing literature research and exploring the options for our self-created chatbot. After this is complete, we will explore potential user needs through a simple and short Microsoft Forms survey. We aim for around 50 responses in order to draw insights from it. We will then create a chatbot that incorporates the findings from the survey as well as those from existing literature. At the same time, we will create an interview guide so we can evaluate our prototype with users who fall within our target group. After conducting the interviews, we will perform a thematic analysis to find themes and subthemes and formulate improvements for our chatbot. The final step is to change the design of the chatbot according to the findings of the interviews.

Deliverables

The wiki will be updated by Friday afternoon at the latest; this is our weekly deliverable.

The final deliverable is the final presentation together with the final report. The report consists of our entire process, including all intermediate steps with explanation and justification, as well as theoretical background information, conclusions, discussion and implications about the findings for future research.

Milestones

Gantt chart showing the overall planning

Week 1

  • Brainstorming topics and pick one
  • Communicate topic and group to course coordinator
  • Start the problem statement
  • Conduct literature review (5 articles pp)
  • Formulate planning, deliverables and approach
  • Users – user requirements
  • Identify possibilities technical aspect

Week 2

  • Improving and writing the problem statement, including the following:
  1. What exactly do we want to research?
  2. How are we going to research this?
  3. Context and background
  4. Which problem are we addressing?
  5. Importance and significance
  6. Research question/research aim (problem definition)
  7. Current gaps or limitations
  8. Desired outcomes – what is the goal of this study?
  9. Scope and constraints – who is our target group?
  • Formulating questions for the survey and setting the survey up – this survey should include questions/information that will help us determine what we should and should not include in our chatbot, in addition to the insights found in literature.
  1. Demographic information
  2. Short questions about levels/experiences of loneliness and stress
  3. Short questions about the current use of AI – what type of resources do students already use supporting possible mental health problems such as loneliness and/or stress?  
  • Sending out the survey (we aim for at least 50 participants for this survey)
  • Formulating two user cases – one for loneliness and one for stress. These user cases should include:
  1. Clear and concise user requirements
  2. A description of our user target group – who are they? What do they value? What does their daily life look like?
  3. Problems they might have related to our topic
    1. Reliability?
    2. Ethical concerns? Privacy?
    3. How useful are such platforms?
  • Exploring and working out the possibilities/challenges in how to develop the chatbot/AI.
  1. Which chatbots do already exist? How do they work? Is there any literature about the user experience of any of these platforms?
  2. How does it technically work to create a chatbot?      
    1. What do we need for it? Knowledge? Applications/platforms?
    2. How much time will it take?
    3. What might be some challenges that we could encounter?
    4. How and in what way do we need to create the manual?
  • Updating the state of the art, including:
  1. Which chatbots do already exist? How do they work? Is there any literature about the user experience of any of these platforms?
  2. Overview of psychological theories that could apply
  3. What research has been done relating to our topic? What of this can we use? How?
  • A clear and comprehensive planning of the rest of the quartile. Including clear and concise points what to do per week and all intermediate steps.

Week 3

  • Starting to find participants for the interviews
  1. How many participants?
  2. Determine inclusion and exclusion criteria
  • Process and analyze survey data –
  1. Create a prioritized list of pain points and user needs – cross-reference this with literature findings
  2. Map user pain points to potential chatbot features
  3. Where do the users experience problems? How might we be able to solve these within our design?
  • Writing the manual for the chatbot – this should incorporate the literature findings found in week 2 and should be separate for loneliness and stress.
  1. Translate theoretical insights into practical chatbot dialogues
  2. Research on what good language is for an AI
  • ?? ERB forms and sending those to the ethical commission
  • Making consent forms for the interviews
  • Exploring and working out the ethical considerations
  • Working on the design of the chatbot
  1. Defining chatbot objectives – what are the key functionalities?
  2. Map out conversation trees for different user scenarios
  3. Incorporate feedback loops
  • If time and possible – start working on interview guide to have more space and time for conducting the interviews.
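The "map out conversation trees" step above could be prototyped as a simple branching data structure. The sketch below is purely illustrative: the node names, prompts and choice labels are invented placeholders, not the actual chatbot content.

```python
# Minimal sketch of a branching conversation tree for a chatbot scenario.
# All prompts and choice labels are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class Node:
    prompt: str                                   # what the chatbot says at this step
    options: dict = field(default_factory=dict)   # user choice -> next Node

# Hypothetical opening of a stress/loneliness support scenario.
root = Node(
    "Hi! What would you like help with today?",
    options={
        "stress": Node(
            "That sounds tough. Would you like a coping exercise "
            "or information about professional support?",
            options={
                "coping": Node("Let's try a short breathing exercise."),
                "referral": Node("Here is how to reach professional support."),
            },
        ),
        "loneliness": Node("Would you like tips for meeting people on campus?"),
    },
)

def walk(node: Node, choices: list) -> str:
    """Follow a list of user choices down the tree; return the final prompt."""
    for choice in choices:
        node = node.options[choice]
    return node.prompt

print(walk(root, ["stress", "coping"]))  # -> Let's try a short breathing exercise.
```

Mapping each user scenario to such a tree makes it easy to spot missing branches (e.g. a user who declines both options) before writing the chatbot manual.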

Week 4

  • Finalizing chatbot (COMPLETE BEFORE WEDNESDAY) –
  1. Finalizing design and features
  2. Testing and debugging
  • Formulating interview guide (COMPLETE BEFORE WEDNESDAY)
  1. Develop a semi-structured guide – balance open-ended questions to encourage deeper insights, organize questions into key themes.
  • Finalizing participants –
  1. Ensure participants fit user category criteria
  2. Informed consent – provide clear explanations of the study purpose, data use and confidentiality in advance of the interview.
  3. Scheduling and logistics – finalize interview schedules, locations and materials
  4. Prepare a backup plan in case of lots of dropouts
  • Write introduction – use problem statement as starting point and include it in the introduction
  1. Provide context, research motivation and objectives.
  2. Short overview approach
  • Write section about survey, interview questions, chatbot manual and approach
  1. Survey structure, explain questions, and findings (using graphs?)
  2. Interview structure, explain interview guide and rationale for chosen approach
    1. Explain ethical considerations and study design
    2. Elaborate on the informed consent and ERB form
  • Start conducting interviews

Week 5

  • Conducting the last interviews (ideally done before Wednesday)
  1. Ensure a quiet, neutral space (if real-life)
  2. Securely record interviews (using Microsoft?)
  • Start to process and analyze the interviews
  1. Transcription – clean version
  2. Thematic analysis, steps:
    1. Familiarization with the data – reading and rereading through transcripts to identify key patterns, note initial impressions and recurring topics.
    2. Coding the data – descriptive (basic concepts) and interpretative codes (meanings behind responses)
    3. Identifying themes and subthemes – group similar codes into broader themes
    4. Reviewing and refining themes – check for coherence, ensure each theme is distinct and meaningful
    5. Defining and naming themes – assign clear and concise names that reflect core purpose.
  • Start writing findings section – organize interview insights by theme
  1. Present each theme with supporting participant quotes
  2. Discuss variations in responses (different user demographics?)
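Steps 2–3 of the thematic analysis above (coding the data, then grouping codes into themes) can be illustrated with a small sketch; the codes, themes and quotes below are invented for the example and are not real interview data.

```python
# Toy illustration of grouping coded interview fragments into broader themes.
# Codes, themes and quotes are invented placeholders, not real data.
from collections import defaultdict

# (code, participant quote) pairs produced during the coding step
coded_fragments = [
    ("tone_robotic", "It felt a bit scripted at times."),
    ("privacy_worry", "I wondered where my answers would end up."),
    ("tone_warm", "The replies felt surprisingly caring."),
    ("privacy_worry", "I'd want to know who can read this."),
]

# Mapping from descriptive codes to broader themes (step 3)
code_to_theme = {
    "tone_robotic": "Empathy and conversational style",
    "tone_warm": "Empathy and conversational style",
    "privacy_worry": "Privacy and data security",
}

themes = defaultdict(list)
for code, quote in coded_fragments:
    themes[code_to_theme[code]].append(quote)

for theme, quotes in themes.items():
    print(f"{theme}: {len(quotes)} fragment(s)")
```

In practice this grouping was done by hand on the transcripts, but keeping the code-to-theme mapping explicit makes step 4 (reviewing and refining themes) easier to audit.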

Week 6

  • Completing thematic analysis – clearly highlighting what can/should be changed in the design of the chatbot.
  1. Cross-checking and validating themes
  2. Extracting key insights for chatbot improvement
    1. Highlight what should be changed or optimized in the chatbot design
    2. Categorize findings into urgent, optional and future improvements, depending on time constraints.
  • Finish writing findings section
  • Write results section
  1. Present main findings concisely with visual aids (tables, graphs, theme maps, etc.)
  • Write discussion section
  1. Compare findings of the interviews to existing literature and to the results of the survey study
  2. Address unexpected insights and research limitations
  • Write conclusion section
  1. Summarize key takeaways and propose future research possibilities
  • Updating the chatbot based on insights of the interviews

Week 7

Space for potential catch-up work

  • Finish and complete chatbot
  • Finalize report
  • Prepare presentation
  • Give final presentation
  • Fill in peer review


Report

Problem statement

In this project we will be researching mental health challenges, specifically focusing on stress and loneliness, and exploring how social robots and Artificial Intelligence (AI) could assist people with such problems. Mental health concerns are on the rise, with stress and loneliness being particularly prevalent in today's society. Factors such as the rapid rise of social media channels and the increasing usage of technology in our everyday life contribute to higher levels of emotional distress. Additionally, loneliness is increasing in modern society, due to both societal changes and technological advancements. The increasing usage of social media and technology is replacing real-life interaction, creating superficial interactions that don’t fulfill deep emotional needs. Second, the shift to remote working and online learning means fewer face-to-face interactions, leading to weaker social bonds.

Seeking professional help can be a difficult step to take due to stigma, accessibility issues and financial constraints. There are long waiting times for psychologists, making it difficult for individuals to access professional help. This increase in stress and loneliness is especially apparent among adolescents and young adults, who are also particularly vulnerable to stigma. With the ever-increasing load of study material the education system has to teach students and children, study-related stress is becoming a larger problem by the day.

Many students struggling with mental health challenges such as loneliness and stress often feel that their issues aren’t ‘serious enough’ to seek professional support, even though they might be in need. And even when an issue is serious enough to consult a professional, patients have to wait a long time before actually getting therapy, as waiting lists have grown significantly over the past years. Robots as well as Artificial Intelligence (AI) technologies might bridge this gap by offering accessible mental support that does not carry the same stigma as therapy. A major benefit is that AI-based support can be accessed at any time, anywhere.

This paper will focus on a literature review of using social robots and Large Language Models (LLMs) to support students and young adults with the rather common minor mental health issues of loneliness and stress. The study will start by reviewing the stigma, needs and expectations of users with regard to artificial intelligence in mental health, as well as the current state of the art and its limitations. Based upon this information, a framework for a mental health LLM will be constructed, either in the form of a GPT or in the form of guidelines mental health GPTs should follow. Finally, a user study will be conducted to analyse the effectiveness of this proposed framework.


(Old problem statement: In this project we will be researching mental health challenges, specifically focusing on stress and loneliness, and exploring how social robots and Artificial Intelligence (AI) could assist people with such problems. Mental health concerns are on the rise, with stress and loneliness being particularly prevalent in today's society. Factors such as the rapid rise of social media channels and the increasing usage of technology in our everyday life contribute to higher levels of emotional distress. Additionally, loneliness is increasing in modern society, due to both societal changes and technological advancements. The increasing usage of social media and technology is replacing real-life interaction, creating superficial interactions that don’t fulfill deep emotional needs. Second, the shift to remote working and online learning means fewer face-to-face interactions, leading to weaker social bonds. Also, with the increasing life expectancy, there are more elderly people. Elderly people are at a higher risk for loneliness since they might live alone after losing partners, friends and family.

Seeking professional help can be a difficult step to take due to stigma, accessibility issues and financial constraints. There are long waiting times for psychologists, making it difficult for individuals to access professional help. Many students, as well as other individuals, struggling with mental health challenges such as loneliness, depression, anxiety or stress, often feel that their issues aren’t ‘serious enough’ to seek professional support, even though they might be in need of some help. Robots as well as Artificial Intelligence (AI) technologies might be solutions to bridge this gap between those who require help and the availability of mental health resources. In this project, we will specifically focus on the use of social robots and Large Language Models (LLMs) and their potential role in providing mental health support.

Beyond individual use, robots could be introduced in the therapeutic field, assisting professionals by monitoring patients' well-being over time, collecting data, or providing guided therapy sessions in structured environments. They could provide emotional support in a way that is more accessible and cost-effective. However, this raises critical ethical considerations, particularly regarding data privacy and emotional dependence. Users may share sensitive personal experiences with these robots or technological applications, raising concerns about how this data is stored and used. Additionally, there is the risk that individuals may form emotional attachments to AI-based companions.

Our research will mainly focus on conducting a literature review and gathering insight through qualitative or quantitative user studies to understand the needs and expectations of the users. Additionally, if it aligns with our research objectives, we may build and train some form of GPT as a prototype product to explore its feasibility in providing mental health support.

Through this project, we aim to explore the potential benefits, limitations and ethical considerations of integrating robots and LLMs into the mental health support system. By analyzing existing technologies, exploring user needs and potentially addressing existing limitations in new prototypes, we hope to find insights into how robotics can positively impact mental well-being in an increasingly technology-driven world.)

Users

User Requirements

- Achieve personal/emotional progression on their mental health struggle.

- Get people to open up towards other people and talk about their issues to family or friends.

- Make people feel comfortable chatting/opening up towards the artificial intelligence.

- Handling user data with care, for example not sharing or leaking personal data and asking for consent when collecting data.

Personas and Use Cases

Our target users are students who are struggling with mental health challenges, specifically loneliness and stress. The focus is on those who either feel their problems are not 'serious' enough to see a therapist, face long waiting times for a therapist and need something to bridge the wait, or struggle to seek help and need an easier, more approachable alternative.

Ivory Peach Lilac Yellow Colorful Gradient Customer Persona Graph (1).png


Joshua just started his second year in applied physics. Last year was stressful for him with obtaining his BSA, and now that this pressure has decreased he knows he wants to enjoy his student life more. But he doesn’t know where to start. All his classmates of the same year have formed groups and friendships, and he starts feeling lonely. It’s hard for him to step out of his comfort zone and go to any association alone. His insecurities make him feel even more alone, like he doesn’t have anywhere to go, which makes him isolate himself further, adding to the somber moods.

He knows that this is not what he wants, and wants to find something to help him. It’s hard to admit this to someone, hard to put it into words. Therapy would be a big step, and it would take too long to even get an appointment with a therapist. He needs a solution that doesn’t feel like a big step and is easily accessible.


Ivory Peach Lilac Yellow Colorful Gradient Customer Persona Graph.png

Olivia, a 21 year-old Sustainable innovation student, has been very busy with her bachelor end project for the past few months, and it has often been very stressful and caused her to feel overwhelmed. She has always struggled with planning and asking for help, and this has especially been a factor for her stress during this project.

It is currently 13:00, and she has been working on her project for four hours today already, only taking a 15-minute break to quickly get some lunch. Olivia has to work tonight, so she has a bunch of tasks she wants to finish before dinner. Without really realizing it, she has been powering through her stress, working relentlessly on all kinds of things without really having a clear structure in mind, and has become quite overwhelmed.

With her busy schedule and strict budget, a therapist has been an unexplored option. Olivia did not grow up in an environment where stress and mental problems were discussed openly and respectfully, and she has always struggled to ask for help with these problems. Last week, however, she found an online help tool and has used it a few times to help her calm down when things get too intense. On the screen is an online AI therapist. This therAIpist made it easier for Olivia to accept that she needed help and to look for it. She has found it increasingly easy to formulate her problems, and the additional stress of talking to someone about her problems has decreased. Olivia now has a way to talk about her problems and get advice.

When she is done explaining her problems to the AI tool, it applauds Olivia for taking care of herself and asks whether she could use the additional help of talking to a human therapist. Olivia realizes this would really help her, and decides to take further action in looking for help at an institution. While waiting for an appointment, she can make further use of the AI tool in situations where help is needed quickly and discreetly.




State-of-the-art

There are already examples of mental health chatbots and research has been done on these chatbots as well as on other AI-driven therapeutic technologies.


Woebot and Wysa are two existing AI chatbots that are designed to give mental health support by using therapeutic approaches like Cognitive Behavioral Therapy (CBT), mindfulness, and Dialectical Behavioral Therapy (DBT). These chatbots are available 24/7 and let users undergo self-guided therapy sessions.

Woebot invites users to monitor and manage their mood using tools such as mood tracking, progress reflection, gratitude journaling, and mindfulness practice. Woebot starts a conversation by asking the user how they’re feeling and, based on what the user shares, Woebot suggests tools and content to help them identify and manage their thoughts and emotions and offers techniques they can try to feel better.

(https://woebothealth.com/referral/)

Wysa employs CBT, mindfulness techniques, and DBT strategies to help users navigate stress, anxiety, and depression. It has received positive feedback for fostering a trusting environment and providing real-time emotional support (Eltahawy et al., 2023).

Current research indicates that chatbots like these can help reduce symptoms of anxiety and depression, but they have not yet been proven more effective than traditional methods such as journaling, or as effective as human-led therapy (Eltahawy et al., 2023).


The promising side of AI therapy in general is underscored in articles such as 'Human-Human vs. Human-AI Therapy: An Empirical Study' and 'Enhancing University Students' Mental Health under Artificial Intelligence: Principles of Behaviour Therapy'. The first highlights the level of professionalism found in AI-driven therapy conversations, while the second indicates what help could be offered to university students specifically, and how.


Other articles investigate AI-driven therapy in more physical forms. The article "Potential Applications of Social Robots in Robot-Assisted Interventions for Social Anxiety" for example, shows how social robots can act as coaches or therapy assistants and help users engage in social situations in a controlled environment. The study "Humanoid Robot Intervention vs. Treatment as Usual for Loneliness in Long-Term Care Homes" examines the effectiveness of humanoid robots in reducing loneliness among people in care homes, and found that the AI-driven robot helped reduce feelings of isolation and helped improve the users' mood.

Challenges

Several challenges for AI therapy chatbots are often mentioned in current research on the subject.

The first of these is that AI chatbots lack the emotional intelligence of humans; AI can simulate empathy but does not have the true emotional depth that humans have, which may make AI chatbots less effective when it comes to handling complex emotions (Kuhail et al., 2024).

A second often-mentioned challenge is that the use of AI in mental healthcare raises concerns regarding privacy (user data security) and ethics (Li et al., 2024).

Another challenge is the risk of users becoming over-reliant on AI chatbots instead of seeking out human help when needed (Eltahawy et al., 2023).

Lastly, another difficulty is the limited adaptability of AI chatbots; they cannot quite offer fully personalized therapy like a human therapist can.

Our own GPT

Typically, developing a chatbot requires extensive training on datasets, refining models and implementing natural language processing (NLP) techniques. This process involves large-scale data collection and continual training and updating of the natural language understanding (NLU) component.

However, with OpenAI's “Create Your Own GPT,” much of this technical work is abstracted away. Instead of training a model from scratch, this tool allows users to customize an already trained GPT-4 model through instructions, behavioral settings and uploaded knowledge bases. Without the need for coding or AI expertise, it enables users to create a tailored AI assistant, such as our mental health chatbot.

What needs to be done to set up our GPT:

  1. Behavior and design: guiding conversations effectively and ensuring responses match what our users want (empathetic, ethical, engaging).
  2. User-centric thinking: defining the needs of students seeking mental health support and structuring conversations accordingly.
  3. Prompt engineering: determining how the GPT should respond (less solution-oriented, more personal, asking follow-up questions).
  4. Testing.
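The behavior-and-design and prompt-engineering steps above can be sketched as a set of instructions composed for the GPT builder. The following is a minimal illustrative sketch; the wording of the guidelines and the helper name `build_instructions` are our own assumptions, not the exact configuration of the published GPT.

```python
# Our own illustrative guidelines, loosely derived from the design points above.
GUIDELINES = {
    "tone": "Respond with empathy and validate feelings before giving advice.",
    "focus": "Ask open, personal questions rather than jumping to solutions.",
    "ethics": "Never diagnose; encourage professional help for serious issues.",
    "privacy": "Do not ask for names, addresses, or other identifying details.",
}

def build_instructions(guidelines):
    """Join the design guidelines into one instruction block for the GPT builder."""
    rules = "\n".join(f"- {rule}" for rule in guidelines.values())
    return "You are a mental health chat partner for students.\n" + rules

print(build_instructions(GUIDELINES))
```

The resulting text would be pasted into the builder's instruction field; iterating on it is essentially what step 3 (prompt engineering) and step 4 (testing) amount to.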


Our current GPT:

https://chatgpt.com/g/g-67b84edc9194819182a10a0dff7371c5-your-mental-health-chat-partner

Research

Initial Survey

Purpose and Methodology

To gain a deeper understanding of student attitudes toward AI mental health support, we conducted a survey focusing on stress and loneliness among students. The objective was to explore how students currently manage these challenges, their willingness to use AI-based support, and the key barriers that might prevent them from engaging with such tools.

The survey specifically targeted students who experience mental health struggles but do not perceive their issues as severe enough to seek professional therapy, those facing long waiting times for professional support, and individuals who find it difficult to ask for help. By gathering insights from this demographic, we aimed to identify pain points, assess trust levels in AI-driven psychological tools, and determine how a chatbot could be designed to effectively address students’ needs while complementing existing mental health services.

Results

The survey was completed by 50 respondents, of whom 40% were female, with nearly all participants falling within the 18-23 age range. The responses provided valuable insights into the prevalence of stress and loneliness among students, as well as their attitudes toward AI-driven mental health support.

Stress and Loneliness

The results indicate that stress is a more frequent issue than loneliness among students. While 36% of respondents reported feeling lonely sometimes and 8% often, stress levels were significantly higher, with 28% sometimes feeling stressed, 54% often experiencing stress, and 2% reporting that they always feel stressed.

Image a.png

When asked about the primary causes of stress, students most frequently cited:

  • Exams and deadlines
  • Balancing university with other responsibilities
  • High academic workload

For loneliness, the key contributing factors included:

  • Spending excessive time studying
  • Feeling disconnected from classmates or university life

To cope with these feelings, students employed various strategies. The most common methods included exercising, reaching out to friends and family, and engaging in entertainment activities such as watching movies, gaming, or reading.

Trust and Willingness to use AI Chatbot
Image x.png

One of the most striking findings from the survey is the low level of trust in AI for mental health support. When asked to rate their trust in an AI chatbot’s ability to provide reliable psychological advice on a scale of 0 to 10, the average trust score was 3.88, with a median of 4. This suggests that, while some students recognize potential benefits, a significant portion remains skeptical about whether AI can truly understand and assist with personal struggles.

In terms of willingness to engage with an AI chatbot, the responses were mixed:

  • 24 students (49%) stated they would not use an AI chatbot
  • 16 students (33%) were unsure, selecting “Maybe”
  • 7 students (14%) said they would only consider using it if human help was unavailable (e.g., due to long waiting times)
  • Only 2 students (4%) expressed strong enthusiasm for the idea

Although a considerable number of respondents remained resistant, nearly half of the students expressed some level of openness to using an AI tool under the right conditions.
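A quick check of the willingness percentages: they add up only if computed over 49 answers rather than all 50 respondents, so we assume one respondent skipped this question.

```python
# Reproducing the reported percentages from the raw counts above.
# Assumption: one of the 50 respondents skipped this question (counts sum to 49).
counts = {
    "would not use": 24,
    "maybe": 16,
    "only if no human help available": 7,
    "enthusiastic": 2,
}
total = sum(counts.values())  # 49 answers
shares = {label: round(100 * n / total) for label, n in counts.items()}
print(total, shares)  # reproduces the 49% / 33% / 14% / 4% figures
```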

Concerns and Appeals

The survey revealed several key concerns that deter students from using an AI chatbot for mental health support. The most frequently mentioned barriers were:

Image y.png
  • A strong preference for human interaction – 30 respondents stated they would rather talk to a human than an AI.
  • Distrust in AI’s ability to provide meaningful support – 20 students were skeptical about AI’s capability in handling sensitive mental health conversations, fearing the responses would be impersonal or inadequate.
  • Doubt that AI can truly understand emotions – 15 respondents felt that AI lacks the emotional depth needed for meaningful interaction.
  • Uncertainty about AI’s effectiveness – 15 respondents questioned whether AI could actually provide real help for mental health concerns.
Image z.png

Despite these concerns, students identified several features that could make an AI chatbot more attractive for mental health support:

  • Anonymity – 20 students highlighted the importance of privacy, indicating they would be more willing to use a chatbot if they could remain anonymous.
  • Evidence-based advice – 17 respondents expressed interest in a chatbot that provides guidance based on scientifically validated psychological techniques.
  • 24/7 availability – 14 students valued the ability to access support at any time, particularly in moments of distress.

The Role of Universities in Mental Health Support

A noteworthy finding from the survey is that more than 40% of respondents had either sought professional help for stress or loneliness or had wanted to but did not actively pursue it. This suggests that many students recognize their struggles but face barriers in seeking support.

Furthermore, when asked whether universities should provide more accessible mental health support for students, responses indicated significant demand for such initiatives:

  • 60% of respondents agreed that more accessible support should be available.
  • 32% were unsure.
  • Only 8% felt that additional mental health support was unnecessary.

These findings highlight the need for universities to explore alternative mental health support options, including AI-based tools, to address gaps in accessibility and availability.

Discussion and Implications for AI Chatbot Design

The survey results underscore the challenges and opportunities in designing an AI chatbot for mental health support. The most pressing issue is the low trust in AI-generated psychological advice. Many students remain skeptical of AI’s ability to provide meaningful guidance, and the chatbot must actively work to establish credibility. One way to address this is by ensuring that all responses are based on scientifically validated psychological techniques. By referencing established methods such as Cognitive Behavioral Therapy (CBT) and mindfulness-based strategies, the chatbot can reinforce that its recommendations are grounded in evidence rather than generic advice. Including explanations or citations for psychological principles could further increase trust.

Another critical aspect is ensuring that the chatbot’s tone and conversational style feel natural and empathetic. The most common concern among respondents was the preference for human interaction, meaning the chatbot must be designed to acknowledge users’ emotions and offer responses that feel supportive rather than robotic. While AI cannot replace human therapists, it can be trained to respond with warmth and understanding, using conversational techniques that mimic human empathy. A key design feature should be adaptive responses based on sentiment analysis, allowing the chatbot to adjust its tone depending on the user’s emotional state.
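The sentiment-driven tone adaptation described above can be sketched in a toy form. A real chatbot would use a trained sentiment model rather than keyword matching; the word lists and tone descriptions below are our own illustrative assumptions.

```python
# Toy sketch of sentiment-driven tone adaptation (illustrative only).
NEGATIVE = {"stressed", "overwhelmed", "lonely", "anxious", "sad"}
POSITIVE = {"better", "calm", "happy", "relieved"}

def detect_sentiment(message):
    """Crude keyword-based stand-in for a real sentiment model."""
    words = set(message.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

def choose_tone(sentiment):
    """Map the detected sentiment to a response strategy."""
    return {
        "negative": "Acknowledge the feeling first, then offer one small coping step.",
        "positive": "Reinforce the progress and invite reflection.",
        "neutral": "Ask an open question to learn more.",
    }[sentiment]

print(choose_tone(detect_sentiment("I feel so overwhelmed by deadlines")))
```

In the actual GPT this adaptation happens implicitly through the model and its instructions, but the sketch shows the design intent: the emotional state is assessed before the response style is chosen.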

Given that privacy concerns were a recurring theme, transparency in data handling will be essential. Before engaging with the chatbot, users should be explicitly informed that their conversations are anonymous and that no identifiable data will be stored. This reassurance could help mitigate fears surrounding data security and encourage more students to engage with the tool.

The survey also highlights the need for different chatbot functionalities to cater to varying student needs. Some students primarily need self-help strategies to manage stress and loneliness independently, while others require a referral system to guide them toward professional help. Another group of students, particularly those on mental health waiting lists, need interim support until they can see a therapist. To address these different needs, the chatbot should be designed with three core functions:

  1. Providing psychological support and coping strategies
    • The chatbot will offer evidence-based techniques for managing stress and loneliness.
    • It will emphasize anonymity and create a non-judgmental space for users to express their concerns.
  2. Referring students to professional help and university support services
    • Users who prefer human interaction will be directed to mental health professionals at TU/e.
    • The chatbot will provide information on how to access university support resources.
  3. Supporting students on waiting lists for professional help
    • While students wait for therapy, TU/e's mental health professionals can refer them to the chatbot, which will temporarily offer guidance to help them cope in the meantime.
    • The tool will clarify that it is not a substitute for therapy but can provide immediate relief strategies.
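The three core functions above imply a simple routing decision based on the user's need. A minimal sketch, where the intent labels and fallback message are our own illustrative assumptions rather than part of the actual chatbot:

```python
# Sketch of routing a user's need to one of the three core functions above.
def route(intent):
    routes = {
        "self_help": "Offer evidence-based coping strategies (function 1).",
        "wants_human": "Refer to TU/e mental health professionals (function 2).",
        "on_waiting_list": "Provide interim guidance until therapy starts (function 3).",
    }
    # If the need is unclear, the chatbot should ask rather than guess.
    return routes.get(intent, "Ask a clarifying question to determine the need.")

print(route("on_waiting_list"))
```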

To ensure the chatbot meets these objectives, further prototype testing will be necessary. A small-scale user trial will be conducted to gather qualitative feedback on conversational flow, response accuracy, and overall effectiveness. Additionally, the chatbot’s ability to detect and adapt to different emotional states will be evaluated to refine its responsiveness.

The findings from this survey highlight both the limitations and possibilities of AI-driven mental health support. While trust remains a significant barrier, the potential for accessible, anonymous, and always-available support should not be underestimated. By designing a chatbot that prioritizes credibility, privacy, and adaptability, we can create a tool that helps students manage stress and loneliness while complementing existing mental health services. As we move forward, user feedback and iterative development will be crucial in shaping a system that students find genuinely useful.

Interviews

Conducting interviews was a vital aspect of our research, as they facilitate the collection of qualitative data, which is important for understanding the subjective experiences of users. Collecting qualitative data about user experiences and thoughts captures in-depth insights. Presenting participants with the first prototype of our chatbot allowed them to engage with it firsthand, which gave us feedback on usability, credibility and trustworthiness. Furthermore, the interviews provided an opportunity to gather broader user opinions on the chatbot’s interface, tone and structure. The interviews were conducted to achieve two primary objectives: first, exploring the subjective experiences of users with the chatbot; second, assessing users’ subjective opinions about the chatbot’s interface, conversational tone and overall structure.


The interview guide was developed to provide the interviewers with structured yet flexible questions to conduct the interviews. Since multiple interviewers were involved in the process of conducting the interviews, it was crucial to create somewhat structured questions to ensure consistency throughout the collected data. That’s why we opted for a semi-structured guide. While the guide provided structured questions, it also allowed room for participants to elaborate on their thoughts and experiences. The intention was to keep the interviews as natural and open-ended as possible, hence almost all questions were phrased open-ended.


The interview questions were developed based on key themes that emerged from the analysis of the initial survey. The following topics formed the core structure of the interviews:

  • Chatbot experience and usability
  • Empathy and conversational style
  • Functionality and user needs
  • Trust and credibility
  • Privacy and data security
  • Acceptability and future use

The first topic was started after the user had interacted with the chatbot or read the manuscript, and covered first impressions of the chatbot. Participants were asked what they found easy and difficult, whether they felt comfortable sharing personal feelings and thoughts with the chatbot, and whether they encountered any issues or frustrations while using the chatbot or reading the manuscript. The second topic revolved around empathy and the chatbot’s conversational style: we asked participants whether they felt the chatbot’s responses were empathetic, how the chatbot acknowledged their emotions and whether it did so effectively, how natural the chatbot’s responses seemed, and whether they would change anything about the interaction if possible. Next, participants were asked about the chatbot’s main features, what features might be missing or could be improved, which of the three functions they found most useful, and whether they would add any additional functionalities. Participants were then asked about trust and credibility, including privacy and data security concerns. The last topic revolved around acceptability and future use: we asked about potential barriers that could prevent them from using the chatbot and whether they thought it would be a valuable tool for TU/e students.


All interviews followed the same structured format to ensure consistency across sessions. First, an introduction was given in which the research project was briefly introduced, the purpose of the interview was explained and it was, once again, highlighted that participation is completely voluntary. Basic demographic questions were asked to understand the participant's context. Participants were then given instructions on how to interact with the chatbot or read the manuscript. Afterwards, the core topics were discussed in a flexible, conversational manner, allowing participants to share their thoughts freely. At the end of the interview, participants were asked whether they had any additional insights and were thanked for their time and contribution.


Each question in the interview guide was carefully designed to align with the overarching research question, and we aimed to give each main topic an equal amount of time and depth. By structuring the questions in this manner, we ensured that the collected data directly contributed to answering our research objectives.


Measures taken to maintain ethical integrity included:

  • Informed consent – participants were provided with consent forms before the interview, outlining the purpose of the study, their rights and privacy information. By providing participants with the consent form in advance of the interview, we allowed them to read it in their own time, at their own pace.
  • Anonymity – all responses were anonymized to ensure confidentiality. After the research project, all participant data will be removed from personal devices.
  • Participants were reminded before and during the interview that they could withdraw from the study at any point without any repercussions.


Research ethics / concerns

Privacy:

Privacy is one of the concerns that comes along with chatbots in general, but with personal information like that shared in therapy, this concern grows, as already discussed in the analysis of the survey. Users may be hesitant to use therapy chatbots because they fear data breaches, or their data being used for anything other than its intended purpose, such as being sold for marketing.

Any licensed therapist has to sign and adhere to a confidentiality agreement, which states that the therapist will not share patients' sensitive information outside appropriate contexts. For AI this is more difficult: data has to be collected and stored for the chatbot to become smarter and learn more about the patient.

Privacy covers multiple concerns, the first being identity disclosure: none of the collected data should be traceable to the patient in any way. This corresponds to the notion of anonymity.

There are also concerns of attribute disclosure and membership disclosure, which go beyond anonymity: if sensitive information becomes available to or is found by others, it can be linked to patients even when anonymized, and the data can then be used to make further assumptions about those patients.

Because the chatbot for this project is made by creating and training a GPT, privacy concerns arise: by using the chatbot, data on personal topics and experiences is fed into ChatGPT. While building a chatbot from the ground up to fully avoid these concerns is unfortunately out of scope for this course, actions can be taken to mitigate them. One such measure is to ensure private data such as names and addresses are neither fed into the tool (by warning users) nor asked for by the tool (by training the GPT). Another, which will be done to protect research participants, is to ensure testers of the chatbot do so on an account provided by the research team, and not on their personal account.
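The warning measure could additionally be supported by a client-side redaction step that strips obvious identifiers before a message reaches the GPT. A minimal sketch; the regex patterns are simple assumptions, and real PII detection would need far more robust tooling.

```python
import re

# Illustrative identifier patterns; deliberately simple, not production-grade.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text):
    """Replace anything matching a known identifier pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(redact("Mail me at jane.doe@student.tue.nl or call +31 6 12345678"))
```

Names are much harder to catch with patterns alone, which is why the design relies on warning users and instructing the GPT not to ask for them.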

Deception:

The advancement of sophisticated language models has resulted in chatbots that are not only functional but also remarkably persuasive. This creates a situation where the distinction between human and machine blurs, and users can easily forget they are interacting with an algorithm. This is where the core of the deception problem lies.

The discussion surrounding deception in technology has been going on for a while and has many dimensions. What precisely constitutes deception? Is it simply concealing the true nature of an entity, or is there more to it? Is intent relevant? If a chatbot is designed to assist but inadvertently misleads, does that still qualify as deception? Is deception always negative? In certain contexts, such as specific forms of entertainment, a degree of deception might be considered acceptable.

The risk of deception is particularly pronounced among vulnerable user groups, such as the elderly, children, or, in the case of the therapy chatbot, individuals with mental health conditions. These groups may be less critical and more susceptible to the persuasive language of chatbots.

The use of chatbots in therapy shows a scenario where deception can have real consequences. While developers' intentions are positive, crucial considerations must be kept in mind. The first is the risk of false empathy: chatbots can simulate empathetic responses, but they lack genuine understanding of human emotions, which can foster a false sense of security and trust in patients. The second is the danger of over-reliance: vulnerable users may become overly dependent on a chatbot for emotional support, potentially leading to isolation from human interaction. The third is the potential for misdiagnosis or incorrect advice: even with the best intentions, chatbots can provide inaccurate diagnoses or inappropriate advice, with serious implications for patient health.

To mitigate these risks, it's essential that users are consistently reminded of their interaction with an algorithm. This can be done by clear identification, regular reminders and education.


Trust:

‘One significant finding is that trust in a chatbot arises cognitively while trusting a human agent is affect-based’ (from the study "Can we trust a chatbot like a physician? A qualitative study on understanding the emergence of trust toward diagnostic chatbots").

Trust in systems has to be taught, learned and thought about; it does not simply arise the way it does in interaction between humans.

Even before the initial contact, trust in the system plays an important role. Both existing research and our own survey reveal mixed attitudes toward chatbots. Subjective norms, perceived risk, and beliefs about the system's structural environment may influence the willingness to use it and the subsequent trust-building process. Trust is more likely to arise when using the chat tool is socially legitimized and users believe the system is embedded in a safe and well-structured environment.

People tend to trust other people. What is therefore important for trust in the system is legitimation, not just socially, but also from professionals. Having the system recommended by professionals increases the likelihood that users will trust that it is safe to use.

Responsibility and liability:

Another concern that arises in AI therapy is the responsibility and liability. If the patient using the chatbot does something that is morally or lawfully wrong, but is suggested by the chatbot, who would be in the wrong?

A human with full agency over their actions would be responsible for those actions. However, users have put their trust in this online tool, which could potentially give them ‘wrong’ information. Depending on whether you regard the chat tool as an agent of its own, it may not be responsible either, since its choices and outputs are programmed. Therefore, the responsibility also falls to the developers of the chat tool.

A significant ethical concern in AI therapy centers on the distribution of responsibility and liability. When a patient, guided by an AI chatbot, engages in actions that are morally or legally questionable, the question of who is responsible arises. While individuals are typically accountable for their choices, the reliance on AI-generated advice complicates matters. It matters whether or not an AI chatbot can be considered an agent; given that AI operates on programmed algorithms and data, it lacks genuine autonomy, making it impossible to hold it morally responsible in the same way as a human. Consequently, a substantial portion of the responsibility shifts to the developers, who are accountable for the AI's programming, the data it's trained on, and the potential repercussions of its advice. Furthermore, when therapists incorporate AI as a tool, they also bear responsibilities, including understanding the AI's limitations and applying their professional judgment. The legal framework surrounding AI liability is still developing, necessitating clear guidelines and regulations to safeguard both patients and developers. In essence, AI therapy introduces a complex web of responsibility, where patient accountability is nuanced by the developers' and therapists' roles, and the legal system strives to keep pace with rapid technological advancements.


Empathy and professional boundaries

While chatbots can mimic empathetic responses, they lack the capacity for genuine human empathy. This distinction is crucial in therapy, where authentic emotional connection forms the foundation of healing. The human ability to create deep, empathetic bonds, to truly understand and share in another's emotional experience, remains beyond the reach of AI systems.

The difference lies in the nature of the response: AI operates on learned patterns and algorithms, while a human therapist, even within ethical and professional guidelines, draws on intuition, lived experience, and an understanding of human complexity. Though AI can process and apply moral frameworks, it cannot navigate the moral dilemmas clients face with the same level of judgment as a human.

Furthermore, human therapists can adapt in a way that AI cannot replicate. They can adjust their approach, responding to shifts in a client's emotional state, and engage in spontaneous, intuitive interactions. Moreover, human therapists communicate through non-verbal cues like body language, facial expressions, and tone shifts, showing vulnerability and further strengthening trust in a way that a screen simply cannot. These human qualities are essential for creating a safe and supportive therapeutic environment, and they represent the value of human connection in the realm of mental health.

References

Walsh, C. G., Xia, W., Li, M., Denny, J. C., Harris, P. A., & Malin, B. A. (2018). Enabling open-science initiatives in clinical psychology and psychiatry without sacrificing patients' privacy: Current practices and future challenges. Advances in Methods and Practices in Psychological Science, 1(1), 104–114.


State of the art (25 articles)

Mila:

- Zhang, J., & Chen, T. (2025). Artificial intelligence based social robots in the process of student mental health diagnosis. Entertainment Computing, 52, 100799. https://doi.org/10.1016/j.entcom.2024.100799

- Eltahawy, L., Essig, T., Myszkowski, N., & Trub, L. (2023). Can robots do therapy?: Examining the efficacy of a CBT bot in comparison with other behavioral intervention technologies in alleviating mental health symptoms. Computers in Human Behavior: Artificial Humans, 2(1), 100035. https://doi.org/10.1016/j.chbah.2023.100035

- Jeong, S., Aymerich-Franch, L., Arias, K., et al. (2023). Deploying a robotic positive psychology coach to improve college students' psychological well-being. User Modeling and User-Adapted Interaction, 33, 571–615. https://doi.org/10.1007/s11257-022-09337-8

- Edwards, A., Edwards, C., Abendschein, B., Espinosa, J., Scherger, J. and Vander Meer, P. (2022), "Using robot animal companions in the academic library to mitigate student stress", Library Hi Tech, Vol. 40 No. 4, pp. 878-893. https://doi.org/10.1108/LHT-07-2020-0148

Sophie:

Velastegui, D., Pérez, M. L. R., & Garcés, L. F. S. (2023). Impact of Artificial Intelligence on learning behaviors and psychological well-being of college students. Salud, Ciencia y Tecnologia-Serie de Conferencias, (2), 343.

This article assesses how interaction with technology affects college students' well-being. The authors argue that educational technology designers must integrate psychological theories and principles into the development of AI tools to minimize risks to students' mental well-being.


Lillywhite, B., & Wolbring, G. (2024). Auditing the impact of artificial intelligence on the ability to have a good life: Using well-being measures as a tool to investigate the views of undergraduate STEM students. AI & Society, 39(3), 1427-1442.

This article investigates the impact of artificial intelligence on the ability to have a good life, focusing on students in STEM majors. The authors identified a set of questions that might serve as good starting points for developing an inventory of students' perspectives on the implications of AI for the ability to have a good life.


Pittman, M., & Reich, B. (2016). Social media and loneliness: Why an Instagram picture may be worth more than a thousand Twitter words. Computers in human behavior, 62, 155-167.

This article examines whether image-based and text-based social media use differ in their relation to loneliness. The results suggest that loneliness may decrease, while happiness and satisfaction with life may increase, with the use of image-based social media; text-based media use appears ineffectual. The authors propose that this difference may be because image-based social media offers enhanced intimacy.


O’Day, E. B., & Heimberg, R. G. (2021). Social media use, social anxiety, and loneliness: A systematic review. Computers in Human Behavior Reports, 3, 100070.

This article examines the broad aspects of social media use and its relation to social anxiety and loneliness. It provides a better understanding of how more socially anxious and lonely individuals use social media. Loneliness is a risk factor for problematic social media use, and social anxiety and loneliness both have the potential to put people at a risk of experiencing negative consequences as a result of their social media use. More research needs to be done to examine the causal relations.


Bridget:

Socially Assistive Robotics combined with Artificial Intelligence for ADHD. (2021, January 9). IEEE Conference Publication | IEEE Xplore. https://ieeexplore.ieee.org/abstract/document/9369633

This paper presents a patient-centered therapy approach using the Pepper humanoid robot to support children with ADHD. Pepper integrates a tablet for interactive exercises and cameras to capture real-time emotional data, allowing for personalized therapeutic adjustments. The system, tested in collaboration with a diagnostic center, enhances children's engagement by providing a non-intimidating robotic intermediary.

BetterHelp - Get started & Sign-Up today. (n.d.). https://www.betterhelp.com/get-started/

BetterHelp is an online therapy platform that eases access to psychological help.

'Er zijn nog 80.000 wachtenden voor u' ('There are still 80,000 people waiting ahead of you') | Zorgvisie

Fiske, A., Henningsen, P., & Buyx, A. (2019). Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology, and Psychotherapy. Journal Of Medical Internet Research, 21(5), e13216. https://doi.org/10.2196/13216

This paper assesses the ethical and social implications of translating embodied AI applications into mental health care across the fields of Psychiatry, Psychology and Psychotherapy. Building on this analysis, it develops a set of preliminary recommendations on how to address ethical and social challenges in current and future applications of embodied AI.

Kuhail, M. A., Alturki, N., Thomas, J., Alkhalifa, A. K., & Alshardan, A. (2024). Human-Human vs Human-AI Therapy: An Empirical Study. International Journal Of Human-Computer Interaction, 1–12. https://doi.org/10.1080/10447318.2024.2385001

This study examines mental health professionals' perceptions of Pi, a relational AI chatbot, in early-stage psychotherapy. Therapists struggled to distinguish between human-AI and human-human therapy transcripts, correctly identifying them only 53.9% of the time, while rating AI transcripts as higher quality on average. These findings suggest that AI chatbots could play a supportive role in mental healthcare, particularly for initial problem exploration when therapist availability is limited.

Holohan, M., & Fiske, A. (2021). “Like I’m Talking to a Real Person”: Exploring the Meaning of Transference for the Use and Design of AI-Based Applications in Psychotherapy. Frontiers in Psychology, 12. https://doi.org/10.3389/fpsyg.2021.720476

This article explores the evolving role of AI-enabled therapy in psychotherapy, particularly focusing on how AI-driven technologies reshape the concept of transference in therapeutic relationships. Using Karen Barad’s framework on human–non-human relations, the authors argue that AI-human interactions in psychotherapy are more complex than simple information exchanges. As AI-based therapy tools become more widespread, it is crucial to reconsider their ethical, social, and clinical implications for both psychotherapeutic practice and AI development.


Maarten:

Humanoid Robot Intervention vs. Treatment as Usual for Loneliness in Older Adults

This study investigates the effectiveness of humanoid robots in reducing loneliness among older adults. Findings suggest that interactions with social robots can positively impact mental health by decreasing feelings of loneliness.


Citation:

Bemelmans, R., Gelderblom, G. J., Jonker, P., & De Witte, L. (2022). Humanoid robot intervention vs. treatment as usual for loneliness in older adults: A randomized controlled trial. Journal of Medical Internet Research [1]


Enhancing University Students' Mental Health under Artificial Intelligence: A Narrative Review

This review discusses how AI-based interventions can be as effective as traditional therapy in managing stress and anxiety among students, offering convenience and less stigma.


Citation:

Li, J., Wang, X., & Zhang, H. (2024). Enhancing university students' mental health under artificial intelligence: A narrative review. LIDSEN Neurobiology, 8(2), 225. [2]


Artificial Intelligence Significantly Facilitates Development in the Field of College Student Mental Health

The article explores key applications of AI in student mental health, including risk factor identification, prediction, assessment, clustering and digital health.


Citation:

Yang, T., Chen, L., & Huang, Y. (2024). Artificial intelligence significantly facilitates development in the field of college student mental health. Frontiers in Psychology, 14, 1375294. [3]


A Robotic Positive Psychology Coach to Improve College Students' Wellbeing

This study examines the use of a social robot coach to offer positive psychological interventions to students, finding significant improvements in psychological well-being and mood.


Citation:

Jeong, S., Aymerich-Franch, L., & Arias, K. (2020). A robotic positive psychology coach to improve college students' well-being. arXiv preprint arXiv:2009.03829. [4]


Potential Applications of Social Robots in Robot-Assisted Interventions

This research discusses how social robots can be integrated into interventions to alleviate symptoms of anxiety, stress and depression by increasing the ability to regulate emotions.


Citation:

Winkle, K., Caleb-Solly, P., Turton, A., & Bremner, P. (2021). Potential applications of social robots in robot-assisted interventions. International Journal of Social Robotics, 13, 123–145. [5]



Exploring the Effects of User-Agent and User-Designer Similarity in Virtual Human Design to Promote Mental Health Intentions for College Students

The study examines how the design of virtual humans can affect their effectiveness in promoting conversations about mental health among students.


Citation

Liu, Y., Chen, Z., & Wu, D. (2024). Exploring the effects of user-agent and user-designer similarity in virtual human design to promote mental health intentions for college students. arXiv preprint arXiv:2405.07418. [6]


Marie:

Artificial intelligence in mental health care: a systematic review of diagnosis, monitoring and intervention applications

This article reviews 85 relevant studies in order to find information about the application of AI in mental health in the domains of diagnosis, monitoring and intervention. It presents the methods most frequently used in each domain as well as their performance.

Citation:

Cruz-Gonzalez, P., He, A. W.-J., Lam, E. P., Ng, I. M. C., Li, M. W., Hou, R., Chan, J. N.-M., Sahni, Y., Vinas Guasch, N., Miller, T., Lau, B. W.-M., & Sánchez Vidaña, D. I. (2025). Artificial intelligence in mental health care: a systematic review of diagnosis, monitoring, and intervention applications. Psychological Medicine, 55, e18, 1–52. https://doi.org/10.1017/S0033291724003295


An Overview of Tools and Technologies for Anxiety and Depression Management Using AI

This article evaluates the utilization and effectiveness of AI applications in managing symptoms of anxiety and depression by conducting a comprehensive literature review. It identifies current AI tools, analyzes their practicality and efficacy, and assesses their potential benefits and risks.

Citation:

Pavlopoulos, A., Rachiotis, T., & Maglogiannis, I. (2024). An overview of tools and technologies for anxiety and depression management using AI. Applied Sciences, 14, 9068. https://doi.org/10.3390/app14199068


Harnessing AI in Anxiety Management: A Chatbot-Based Intervention for Personalized Mental Health Support

This study analyzes the effectiveness of an AI-powered chatbot, made using ChatGPT, in managing anxiety symptoms through evidence-based cognitive-behavioral therapy techniques.

Citation:

Manole, A., Cârciumaru, R., Brînzaș, R., & Manole, F. (2024). Harnessing AI in anxiety management: A chatbot-based intervention for personalized mental health support. Information, 15, 768. https://doi.org/10.3390/info15120768


Bram:

Child and adolescent therapy, 2006

PC Kendall, C Suveg

This book is about treating mental health issues in children and adolescents. Chapters 1–5 and 7 in particular are relevant to a possible AI that assists people with mental health issues; chapter 6 is excluded as it deals with more serious matters.


Psychotherapy and Artificial Intelligence: A Proposal for Alignment

Flávio Luis de Mello, Sebastião Alves de Souza

This article is about psychotherapy and artificial intelligence. The authors demonstrate their proposal by implementing their model of artificial intelligence in psychotherapy as a web application.


Perceptions and opinions of patients about mental health chatbots: scoping review

Alaa A Abd-Alrazaq, Mohannad Alajlani, Nashva Ali, Kerstin Denecke, Bridgette M Bewick, Mowafa Househ

This article reviews chatbots in mental health and what patients think about them.

Appendix

Survey






Interview guide

The interview guide includes core questions that ensure consistency across interviews, while allowing flexibility for follow-up questions based on participants' responses.

Scenario a – participant interacts individually with the chatbot

Introduction and consent

  • Briefly introduce the research project and purpose of the interview
  • Highlight that participation is voluntary, responses are anonymous, participants can stop at any time without giving a reason and that they don’t have to answer a question if they don’t want to.

Participant background

  • Can you tell me a bit about yourself? (year of study, program, etc.)
  • Do you have any experience with mental health support devices/apps/websites?
  • Have you ever used any mental health resources at TU/e?


Let the user interact with the chatbot for about 10 minutes.

Chatbot experience and usability

  • What was your first impression of the chatbot?
  • How easy or difficult was it to use?
  • Did you feel comfortable sharing personal feelings/thoughts with the chatbot? Why or why not?
  • Did you encounter any issues or frustrations while using the chatbot?

Empathy and conversational style

  • How did the chatbot’s responses feel to you – empathetic, robotic, neutral? Can you give an example?
  • Did the chatbot acknowledge emotions in a way that felt meaningful to you? If not, what was missing? If yes, in what ways did the chatbot do this?
  • How natural did the chatbot’s responses feel to you?
  • What could be improved in the way the chatbot communicates?

Functionality and user needs

  • What features did you find most useful?
  • What features were missing or could be improved?
  • The chatbot is designed to serve three main functions:
  1. Providing self-help coping strategies
  2. Referring students to professional support
  3. Supporting students on mental health waiting lists
  • Which of these is most important to you? Why?
  • What additional functionality would make you more likely to use the chatbot?

Trust and credibility

  • Would it help if the chatbot explicitly referenced psychological theories?
  • Would you find it useful if the chatbot provided sources or explanations for its advice? Why or why not?
  • What are some red flags that would make you doubt the chatbot’s credibility?

Privacy and data security

  • Are there any privacy-related concerns that would stop you from using the chatbot entirely?

Acceptability and future use

  • Are there any barriers that would prevent you from using it?
  • Do you think an AI chatbot like this could be a valuable tool for TU/e students? Why or why not?

Closing

  • Is there anything else you’d like to share?
  • Thank you for participating!


Scenario b – participant observes a pre-made interaction with the chatbot

Introduction and consent

  • Briefly introduce the research project and purpose of the interview
  • Highlight that participation is voluntary, responses are anonymous, participants can stop at any time without giving a reason and they don’t have to answer all questions if they feel uncomfortable.

Participant background

  • Can you tell me a bit about yourself? (year of study, program, etc.)
  • Do you have any experience with mental health support devices/apps/websites?
  • Have you ever used any mental health resources at TU/e?


Show the user the example conversation and give time to read through it.

Chatbot experience and usability

  • Based on the example conversation, what were your first impressions/thoughts of the chatbot?
  • If you were the user in the conversation, do you think you would feel comfortable engaging and sharing with the chatbot? Why or why not?

Empathy and conversational style

  • Based on the example conversation, do you feel the chatbot responded in an empathetic and supportive way? Why or why not?
  • Did the chatbot acknowledge emotions effectively? If not, what could be improved? If yes, in what ways?
  • How natural did the chatbot’s responses seem? Did they seem empathetic?
  • If you had been the user in this interaction, what would you have wanted the chatbot to say or do differently?

Functionality and user needs

  • Based on what you saw, which chatbot features seemed most useful?
  • Which features felt missing or could be improved?
  • The chatbot is designed to serve three main functions:
  1. Providing self-help coping strategies
  2. Referring students to professional support
  3. Supporting students on mental health waiting lists
  • Which of these is most important to you? Why?
  • What additional functionality would make you more likely to use the chatbot?

Trust and credibility

  • Would it help if the chatbot explicitly referenced psychological theories?
  • Would you find it useful if the chatbot provided sources or explanations for its advice? Why or why not?
  • What are some red flags that would make you doubt the chatbot’s credibility?

Privacy and data security

  • Are there any privacy-related concerns that would stop you from using the chatbot entirely?

Acceptability and future use

  • Are there any barriers that would prevent you from using it?
  • Do you think an AI chatbot like this could be a valuable tool for TU/e students? Why or why not?

Closing

  • Is there anything else you’d like to share?
  • Thank you for participating!