PRE2024 3 Group17
Practical Information
Name | Student number | E-mail |
---|---|---|
Bridget Ariese | 1670115 | b.ariese@student.tue.nl |
Sophie de Swart | 2047470 | s.a.m.d.swart@student.tue.nl |
Mila van Bokhoven | 1754238 | m.m.v.bokhoven1@student.tue.nl |
Marie Bellemakers | 1739530 | m.a.a.bellemakers@student.tue.nl |
Maarten van der Loo | 1639439 | m.g.a.v.d.loo@student.tue.nl |
Bram van der Heijden | 1448137 | b.v.d.heijden1@student.tue.nl |
Logbook
Name | Week 1 | Week 2 | Week 3 | Week 4 | Total time spent (hours) |
---|---|---|---|---|---|
Bridget | Lecture + group meeting + State of the art (6 art) | Group meeting + tutor meeting + two user cases | Group meeting + tutor meeting + find participants + ERB forms + consent form + ethical considerations | Finalize Intro/Problem/Lit review | 6 + 6 + 8 + 8 |
Sophie | Lecture + group meeting + state of the art (3/4 art) + writing the problem statement | Group meeting + tutor meeting + specify deliverables and approach + make planning concise | Group meeting + tutor meeting + find participants + write interview guide | Group meeting + tutor meeting + write section about interviews | 8 + 8 + 6 + 4 |
Mila | Group meeting + State of the art (3/4 art) + Users, Milestones, Deliverables + task Division | Group/Tutor Meeting + Create, improve and send out survey | Group/Tutor meeting + find participants + analyse survey results + write continued approach | Group/Tutor Meeting + ChatGPT handbook, diagnostics, decision tree | 6 + 7 + 9 + 9 |
Marie | Group meeting + State of the art (3/4 art) + Approach | Group/tutor meeting + Examine state of the art sources and write corresponding wiki part | Group meeting + tutor meeting + find participants + fill out ERB forms, correspond with ERB, revise ERB and consent forms + section on ChatGPT concerns | Group/Tutor Meeting + Finalize Intro/Problem/Lit review | 4 + .. + 8 |
Maarten | State of the art (6 art) | Research how to train a GPT + start making one | Group meeting + tutor meeting | Train ChatGPT on the techniques | 10H |
Bram | State of the art (3/4 art) + User Requirements | Write up problem statement | Group meeting + tutor meeting | Group/Tutor Meeting + ChatGPT techniques used and literature |
Name | Week 5 | Week 6 | Week 7 | Week 8 | Total time spent |
---|---|---|---|---|---|
Bridget | | | | | |
Sophie | Group/Tutor meeting + interviews + transcribing/analyzing interviews + comparing our model with regular ChatGPT outputs | Group/tutor meeting + thematic analysis + write report part interview findings | | | 13 + 13 |
Mila | Group/Tutor Meeting + updating handbook + approach | | | | 13 |
Marie | | | | | |
Maarten | | | | | |
Bram | | | | | |
After the tutor meeting on Monday, we’ll meet as a group to go over the tasks for the coming weeks. Based on any feedback or new insights from the meeting, we’ll divide the work in a way that makes sense for everyone. Each week, we’ll check in, see how things are going, and adjust if needed to keep things on track. This way, we make sure the workload is shared fairly and everything gets done on time.
Milestones
Week 1
- Brainstorm topics and pick one
- Communicate topic and group to course coordinator
- Start the problem statement
- Conduct literature review (5 articles pp)
- Formulate planning, deliverables and approach
- Users – user requirements
- Identify possibilities for the technical aspect
Week 2
- Improving and writing the problem statement, including the following:
- What exactly do we want to research?
- How are we going to research this?
- Context and background
- Which problem are we addressing?
- Importance and significance
- Research question/research aim (problem definition)
- Current gaps or limitations
- Desired outcomes – what is the goal of this study?
- Scope and constraints – who is our target group?
- Formulating questions for the survey and setting the survey up – this survey should include questions/information that will help us determine what we should and should not include in our chatbot, in addition to the insights found in literature.
- Demographic information
- Short questions about levels/experiences of loneliness and stress
- Short questions about the current use of AI – what types of resources do students already use to support possible mental health problems such as loneliness and/or stress?
- Sending out the survey (we aim for at least 50 participants for this survey)
- Formulating two user cases – one for loneliness and one for stress. These user cases should include:
- Clear and concise user requirements
- A description of our user target group – who are they? What do they value? What does their daily life look like?
- Problems they might have related to our topic
- Reliability?
- Ethical concerns? Privacy?
- How useful are such platforms?
- Exploring and working out the possibilities/challenges in how to develop the chatbot/AI.
- Which chatbots already exist? How do they work? Is there any literature about the user experience of any of these platforms?
- How does it technically work to create a chatbot?
- What do we need for it? Knowledge? Applications/platforms?
- How much time will it take?
- What might be some challenges that we could encounter?
- How and in what way do we need to create the manual?
- Updating the state of the art, including:
- Which chatbots already exist? How do they work? Is there any literature about the user experience of any of these platforms?
- Overview of psychological theories that could apply
- What research has been done relating to our topic? What of this can we use? How?
- A clear and comprehensive planning of the rest of the quartile, including clear and concise points on what to do per week and all intermediate steps.
Week 3
- Starting to find participants for the interviews
- How many participants?
- Determine inclusion and exclusion criteria
- Process and analyze survey data –
- Create a prioritized list of pain points and user needs – cross-reference this with literature findings
- Map user pain points to potential chatbot features
- Where do the users experience problems? How might we be able to solve these within our design?
- Writing the manual for the chatbot – this should incorporate the literature findings found in week 2 and should be separate for loneliness and stress.
- Translate theoretical insights into practical chatbot dialogues
- Research on what good language is for an AI
- ?? ERB forms and sending those to the ethical commission
- Making consent forms for the interviews
- Exploring and working out the ethical considerations
- Working on the design of the chatbot
- Defining chatbot objectives – what are the key functionalities?
- Map out conversation trees for different user scenarios
- Incorporate feedback loops
- If time allows – start working on the interview guide to leave more space and time for conducting the interviews.
Week 4
- Finalizing chatbot (COMPLETE BEFORE WEDNESDAY) –
- Finalizing design and features
- Testing and debugging
- Formulating interview guide (COMPLETE BEFORE WEDNESDAY)
- Develop a semi-structured guide – balance open-ended questions to encourage deeper insights, organize questions into key themes.
- Finalizing participants –
- Ensure participants fit user category criteria
- Informed consent – provide clear explanations of the study purpose, data use and confidentiality in advance of the interview.
- Scheduling and logistics – finalize interview schedules, locations and materials
- Prepare a backup plan in case of lots of dropouts
- Write introduction – use problem statement as starting point and include it in the introduction
- Provide context, research motivation and objectives.
- Short overview approach
- Write section about survey, interview questions, chatbot manual and approach
- Survey structure, explain questions, and findings (using graphs?)
- Interview structure, explain interview guide and rationale for chosen approach
- Explain ethical considerations and study design
- Elaborate on the informed consent and ERB form
- Start conducting interviews
Week 5
- Conducting the last interviews (ideally done before Wednesday)
- Ensure a quiet, neutral space (if in person)
- Securely record interviews (using Microsoft?)
- Start to process and analyze the interviews
- Transcription – clean version
- Thematic analysis, steps:
- Familiarization with the data – reading and rereading through transcripts to identify key patterns, note initial impressions and recurring topics.
- Coding the data – descriptive (basic concepts) and interpretative codes (meanings behind responses)
- Identifying themes and subthemes – group similar codes into broader themes
- Reviewing and refining themes – check for coherence, ensure each theme is distinct and meaningful
- Defining and naming themes – assign clear and concise names that reflect core purpose.
- Start writing findings section – organize interview insights by theme
- Present each theme with supporting participant quotes
- Discuss variations in responses (different user demographics?)
Week 6
- Completing thematic analysis – clearly highlighting what can/should be changed in the design of the chatbot.
- Cross-checking and validating themes
- Extracting key insights for chatbot improvement
- Highlight what should be changed or optimized in the chatbot design
- Categorize findings into urgent, optional and future improvements, depending on time constraints.
- Finish writing findings section
- Write results section
- Present main findings concisely with visual aids (tables, graphs, theme maps, etc.)
- Write discussion section
- Compare findings of the interviews to existing literature and to the results of the survey study
- Address unexpected insights and research limitations
- Write conclusion section
- Summarize key takeaways and propose future research possibilities
- Updating the chatbot based on insights of the interviews
Week 7
Space for potential catch-up work
- Finish and complete chatbot
- Finalize report
- Prepare presentation
- Give final presentation
- Fill in peer review
Report
Introduction
Mental health is a topic that affects many people all over the world, and students are one of the groups for whom it is becoming increasingly important. The number of students in need of psychological help and support is growing, but the available support is not growing along with it. Psychologists, both in and outside campus facilities, are overbooked and have long waiting lists. And these are only some of the problems students face once they have already sought and accepted help. Even though mental health is important and necessary to discuss, it remains a topic people feel embarrassed to talk about or find hard to admit they struggle with. Going to a professional for help is then difficult, and it can be hard to even know where to start.
This project aims to help address some of these problems using something almost every student has access to: the internet. Focusing on TU/e students, we propose creating an additional feature within ChatGPT designed to support mental health. This tool will serve as an accessible starting point for students on their mental health journey, providing a safe, non-judgmental environment to express their feelings and receive guidance. By offering immediate, empathetic responses and practical advice, it aims to lower the barrier to seeking help and empower students to take proactive steps toward well-being. While it is not intended to replace professional counseling, it can serve as an initial support system, helping students gain insight into their mental health challenges, bridging longer waiting periods, and guiding them toward appropriate resources when necessary.
Approach
Our research follows a structured, multi-phase process to develop a chatbot that helps students with mental health support. We place a strong emphasis on extensive research both before and after development to ensure the chatbot is as effective and tailored to user needs as possible. By continuously refining our approach based on findings, we aim for maximum optimization and user satisfaction. We start by identifying the problem, researching the needs of students, developing the chatbot, and then testing and improving it based on feedback.
1. Foundational Phase
The research begins with a foundational phase, where we clearly define the problem, decide on our approach and deliverables, and identify our target group. This includes:
- Defining the topic: We brainstorm and decide on which topic and problem we want to focus on during our project.
- Deciding on approach and deliverables: We determine our approach, milestones and deliverables.
- Identify target group: We define our target group, user requirements, and two user cases.
This phase ensures that we build something meaningful and relevant for students.
2. Research Phase
To understand how best to develop the chatbot, we conduct three types of research:
- Literature Review: Looking at existing research on AI chatbots for mental health, common challenges, and best practices.
- User Survey: A short Microsoft Forms survey, with around 50 responses, to understand students' experiences, concerns, and preferences regarding mental health problems and support.
- GPT Training Research: We look into how we can train a GPT for optimal results.
By combining academic knowledge with real user feedback, we ensure that our chatbot is both evidence-based and aligned with student needs.
3. Development Phase
Using what we learned from our research, we start developing the chatbot. This includes:
- Training the GPT Model: We train the chatbot with self-written instructions to make sure it responds in a helpful and empathetic way.
- Applying Psychological Techniques: We integrate psychological strategies so the chatbot can acknowledge emotions and guide students toward self-help or professional support.
- Iterative Testing: We create early versions of the chatbot, test them, and improve to ensure good response quality and ease of use.
This phase requires balancing AI technology with human-like emotional understanding to make the chatbot as supportive and functional as possible.
4. Evaluation Phase
To test how well the chatbot works, we conduct a detailed, three-fold evaluation:
- User Interviews: We conduct 12 interviews with TU/e students who review chatbot conversations and give feedback on usability, response accuracy, and emotional engagement.
- Comparison with an Untrained GPT: We test the same prompts on a standard GPT model as a baseline to measure improvements.
- Expert Feedback: A student psychologist and an academic advisor review the chatbot and provide recommendations for improvements.
This phase ensures we gather insights from both users and professionals to fine-tune the chatbot for the best possible experience.
5. Improvement Phase
Based on our evaluation, we make necessary improvements to the chatbot:
- Analyzing Feedback: We look for common themes in the user interviews, expert reviews, and GPT comparison to find areas that need improvement.
- Making Adjustments: We refine the chatbot’s responses, improve training data, and tweak its conversational style to improve user experience.
- Retesting: After making changes, we test the chatbot again to confirm that the improvements are effective.
This step ensures that the chatbot keeps evolving based on research and real-world feedback.
6. Finalization Phase
In the final phase, we document our findings and complete the chatbot development. This includes:
- Final Report: The report documents our entire process, including all intermediate steps with explanation and justification, as well as theoretical background information, conclusions, discussion and implications of the findings for future research.
- Presentation: We will present our findings in a final presentation.
- GPT Handbook: We will write a handbook containing all the instructions used to train the chatbot, including general guidelines, disclaimers, referrals, crisis support, and psychological strategies.
By doing thorough research at every stage and continuously optimizing the chatbot, we ensure it is well-researched, user-friendly, and effective in supporting students with mental health challenges.
Problem statement
In this project we will be researching mental health challenges, specifically focusing on stress and loneliness, and exploring how social robots and Artificial Intelligence (AI) could assist people with such problems. Mental health concerns are on the rise, with stress and loneliness being particularly prevalent in today's society. Factors such as the rapid rise of social media and the increasing use of technology in everyday life contribute to higher levels of emotional distress. Additionally, loneliness is increasing in modern society, due to both societal changes and technological advancements. First, the growing use of social media and technology is replacing real-life interaction, creating superficial contacts that do not fulfill deep emotional needs. Second, the shift to remote working and online learning means fewer face-to-face interactions, leading to weaker social bonds.
Seeking professional help can be a difficult step to take due to stigma, accessibility issues and financial constraints. Long waiting times for psychologists make it difficult for individuals to access professional help. This increase in stress and loneliness is especially apparent among adolescents and young adults, who are also particularly vulnerable to stigma. With the ever-increasing load of study material the education system asks students and children to absorb, study-related stress is becoming a larger problem by the day.
Many students struggling with mental health challenges such as loneliness and stress often feel that their issues are not ‘serious enough’ to seek professional support, even though they might be in need. And even when it is serious enough to consult a professional, patients have to wait a long time before actually getting therapy, as waiting lists have grown significantly over the past years. Robots as well as Artificial Intelligence (AI) technologies might help bridge this gap by offering accessible mental support that does not carry the same stigma as therapy. The largest benefit is that AI-based support can be accessed at any time, anywhere.
This paper focuses on a literature review of using social robots and Large Language Models (LLMs) to support students and young adults with the relatively common mental health issues of loneliness and stress. The study starts by reviewing the stigma, needs and expectations of users with regard to artificial intelligence in mental health care, as well as the current state of the art and its limitations. Based on this information, a framework for a mental health LLM will be constructed, either in the form of a GPT or in the form of guidelines that mental health GPTs should follow. Finally, a user study will be conducted to analyse the effectiveness of this proposed framework.
Users
User Requirements
- Achieve personal/emotional progression on their mental health struggle.
- Get people to open up towards other people and talk about their issues to family or friends.
- Make people feel comfortable chatting/opening up towards the artificial intelligence.
- Handling user data with care, for example by not sharing or leaking personal data and by asking for consent when collecting data.
Personas and User Cases
Our target users are students who are struggling with mental health challenges, specifically loneliness and stress. The focus is on those who feel their problems are not 'serious' enough to see a therapist, those who have to wait a long time to see a therapist and need something to bridge the wait, and those who struggle to seek help and would benefit from an easier alternative.
Joshua has just started his second year in Applied Physics. Last year was stressful for him with obtaining his BSA, and now that this pressure has decreased he knows he wants to enjoy his student life more. But he doesn’t know where to start. All his classmates from the same year have formed groups and friendships, and he starts feeling lonely. It’s hard for him to go out of his comfort zone and go to any association alone. His insecurities make him feel even more alone, as if he has nowhere to go, which makes him isolate himself even more and adds to his somber moods.
He knows that this is not what he wants, and he wants to find something to help him. It’s hard to admit this to someone, and hard to put it into words. Therapy would be a big step, and it would take too long to even get an appointment with a therapist. He needs a solution that doesn’t feel like a big step and is easily accessible.
Olivia, a 21 year-old Sustainable innovation student, has been very busy with her bachelor end project for the past few months, and it has often been very stressful and caused her to feel overwhelmed. She has always struggled with planning and asking for help, and this has especially been a factor for her stress during this project.
It is currently 13:00, and she has been working on her project for four hours today already, only taking a 15-minute break to quickly get some lunch. Olivia has to work tonight, so she has a bunch of tasks she wants to finish before dinner. Without really realizing it, she has been powering through her stress, working relentlessly on all kinds of things without really having a clear structure in mind, and has become quite overwhelmed.
With her busy schedule and strict budget, a therapist has been an unexplored option. Olivia did not grow up in an environment where stress and mental problems were discussed openly and respectfully, and she has always struggled to ask for help with these problems. However, last week she found an online help tool and has used it a few times to help her calm down when things got too intense. On the screen is an online AI therapist. This ‘therAIpist’ made it easier for Olivia to accept that she needed help and to look for it. She has found it increasingly easy to formulate her problems, and the additional stress of talking to someone about her problems has decreased. Olivia now has a way to talk about her problems and get advice.
When she is done explaining her problems to the AI tool, it applauds Olivia for taking care of herself and asks whether she could use the additional help of talking to a human therapist. Olivia realizes this would really help her and decides to take further action in looking for help at an institution. While she waits for an appointment, she can keep using the AI tool in situations where help is needed quickly and discreetly.
State-of-the-art
There are already examples of mental health chatbots and research has been done on these chatbots as well as on other AI-driven therapeutic technologies.
Woebot and Wysa are two existing AI chatbots that are designed to give mental health support by using therapeutic approaches like Cognitive Behavioral Therapy (CBT), mindfulness, and Dialectical Behavioral Therapy (DBT). These chatbots are available 24/7 and let users undergo self-guided therapy sessions.
Woebot invites users to monitor and manage their mood using tools such as mood tracking, progress reflection, gratitude journaling, and mindfulness practice. Woebot starts a conversation by asking the user how they’re feeling and, based on what the user shares, Woebot suggests tools and content to help them identify and manage their thoughts and emotions and offers techniques they can try to feel better.
( https://woebothealth.com/referral/ )
Wysa employs CBT, mindfulness techniques, and DBT strategies to help users navigate stress, anxiety, and depression. It has received positive feedback for fostering a trusting environment and providing real-time emotional support (Eltahawy et al., 2023).
Current research indicates chatbots like these can help reduce symptoms of anxiety and depression, but are not yet proven to be more effective than traditional methods like journaling, or as effective as human-led therapy (Eltahawy et al., 2023).
The promising side of AI therapy in general is underscored in articles such as 'Human-Human vs. Human-AI Therapy: An Empirical Study' and 'Enhancing University Students' Mental Health under Artificial Intelligence: Principles of Behaviour Therapy', where the first highlights the level of professionalism found in AI-driven therapy conversations, and the second indicates the potential help that could be offered (and how) to university students specifically.
Other articles investigate AI-driven therapy in more physical forms. The article "Potential Applications of Social Robots in Robot-Assisted Interventions for Social Anxiety" for example, shows how social robots can act as coaches or therapy assistants and help users engage in social situations in a controlled environment. The study "Humanoid Robot Intervention vs. Treatment as Usual for Loneliness in Long-Term Care Homes" examines the effectiveness of humanoid robots in reducing loneliness among people in care homes, and found that the AI-driven robot helped reduce feelings of isolation and helped improve the users' mood.
Challenges
Several challenges for AI therapy chatbots are often mentioned in current research on the subject.
The first of these is that AI chatbots lack the emotional intelligence of humans; AI can simulate empathy but does not have the true emotional depth that humans have, which may make AI chatbots less effective when it comes to handling complex emotions (Kuhail et al., 2024).
A second often-mentioned challenge is that the use of AI in mental healthcare raises concerns regarding privacy (user data security) and ethics (Li et al., 2024).
Another challenge is that there is a risk of users becoming over-reliant on AI chatbots instead of seeking out human help when needed (Eltahawy et al., 2023).
Lastly, another difficulty is the limited adaptability of AI chatbots; they cannot quite offer fully personalized therapy like a human therapist can.
Research
Initial Survey
Purpose and Methodology
To gain a deeper understanding of student attitudes toward AI mental health support, we conducted a survey focusing on stress and loneliness among students. The objective was to explore how students currently manage these challenges, their willingness to use AI-based support, and the key barriers that might prevent them from engaging with such tools.
The survey specifically targeted students who experience mental health struggles but do not perceive their issues as severe enough to seek professional therapy, those facing long waiting times for professional support, and individuals who find it difficult to ask for help. By gathering insights from this demographic, we aimed to identify pain points, assess trust levels in AI-driven psychological tools, and determine how a chatbot could be designed to effectively address students’ needs while complementing existing mental health services.
Results
The survey was completed by 50 respondents, of whom 40% were female, with nearly all participants falling within the 18-23 age range. The responses provided valuable insights into the prevalence of stress and loneliness among students, as well as their attitudes toward AI-driven mental health support.
Stress and Loneliness
The results indicate that stress is a more frequent issue than loneliness among students. While 36% of respondents reported feeling lonely sometimes and 8% often, stress levels were significantly higher, with 28% sometimes feeling stressed, 54% often experiencing stress, and 2% reporting that they always feel stressed.
When asked about the primary causes of stress, students most frequently cited:
- Exams and deadlines
- Balancing university with other responsibilities
- High academic workload
For loneliness, the key contributing factors included:
- Spending excessive time studying
- Feeling disconnected from classmates or university life
To cope with these feelings, students employed various strategies. The most common methods included exercising, reaching out to friends and family, and engaging in entertainment activities such as watching movies, gaming, or reading.
Trust and Willingness to use AI Chatbot
One of the most striking findings from the survey is the low level of trust in AI for mental health support. When asked to rate their trust in an AI chatbot’s ability to provide reliable psychological advice on a scale of 0 to 10, the average trust score was 3.88, with a median of 4. This suggests that, while some students recognize potential benefits, a significant portion remains skeptical about whether AI can truly understand and assist with personal struggles.
In terms of willingness to engage with an AI chatbot, the responses were mixed:
- 24 students (49%) stated they would not use an AI chatbot
- 16 students (33%) were unsure, selecting “Maybe”
- 7 students (14%) said they would only consider using it if human help was unavailable (e.g., due to long waiting times)
- Only 2 students (4%) expressed strong enthusiasm for the idea
Although a considerable number of respondents remained resistant, nearly half of the students expressed some level of openness to using an AI tool under the right conditions.
Concerns and Appeals
The survey revealed several key concerns that deter students from using an AI chatbot for mental health support. The most frequently mentioned barriers were:
- A strong preference for human interaction – 30 respondents stated they would rather talk to a human than an AI.
- Distrust in AI’s ability to provide meaningful support – 20 students were skeptical about AI’s capability in handling sensitive mental health conversations, fearing the responses would be impersonal or inadequate.
- Doubt that AI can truly understand emotions – 15 respondents felt that AI lacks the emotional depth needed for meaningful interaction.
- Uncertainty about AI’s effectiveness – 15 respondents questioned whether AI could actually provide real help for mental health concerns.
Despite these concerns, students identified several features that could make an AI chatbot more attractive for mental health support:
- Anonymity – 20 students highlighted the importance of privacy, indicating they would be more willing to use a chatbot if they could remain anonymous.
- Evidence-based advice – 17 respondents expressed interest in a chatbot that provides guidance based on scientifically validated psychological techniques.
- 24/7 availability – 14 students valued the ability to access support at any time, particularly in moments of distress.
The Role of Universities in Mental Health Support
A noteworthy finding from the survey is that more than 40% of respondents had either sought professional help for stress or loneliness or had wanted to but did not actively pursue it. This suggests that many students recognize their struggles but face barriers in seeking support.
Furthermore, when asked whether universities should provide more accessible mental health support for students, responses indicated significant demand for such initiatives:
- 60% of respondents agreed that more accessible support should be available.
- 32% were unsure.
- Only 8% felt that additional mental health support was unnecessary.
These findings highlight the need for universities to explore alternative mental health support options, including AI-based tools, to address gaps in accessibility and availability.
Discussion and Implications for AI Chatbot Design
The survey results underscore the challenges and opportunities in designing an AI chatbot for mental health support. The most pressing issue is the low trust in AI-generated psychological advice. Many students remain skeptical of AI’s ability to provide meaningful guidance, and the chatbot must actively work to establish credibility. One way to address this is by ensuring that all responses are based on scientifically validated psychological techniques. By referencing established methods such as Cognitive Behavioral Therapy (CBT) and mindfulness-based strategies, the chatbot can reinforce that its recommendations are grounded in evidence rather than generic advice. Including explanations or citations for psychological principles could further increase trust.
Another critical aspect is ensuring that the chatbot’s tone and conversational style feel natural and empathetic. The most common concern among respondents was the preference for human interaction, meaning the chatbot must be designed to acknowledge users’ emotions and offer responses that feel supportive rather than robotic. While AI cannot replace human therapists, it can be trained to respond with warmth and understanding, using conversational techniques that mimic human empathy. A key design feature should be adaptive responses based on sentiment analysis, allowing the chatbot to adjust its tone depending on the user’s emotional state.
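To illustrate what such adaptation could look like in principle, below is a minimal sketch that picks a response tone from a simple keyword heuristic. The cue lists and function are hypothetical simplifications of our own; a real implementation might use a dedicated sentiment model, and inside a custom GPT this adaptation is expressed through instructions rather than code.

```python
# Hypothetical sketch of sentiment-adaptive tone selection. A production
# system would use a proper sentiment model; in a custom GPT this adaptation
# is handled through its instructions rather than explicit code.

HIGH_DISTRESS_CUES = {"overwhelmed", "hopeless", "panic", "can't cope", "exhausted"}
MILD_DISTRESS_CUES = {"stressed", "tired", "worried", "lonely"}

def select_tone(message: str) -> str:
    """Map a rough sentiment estimate of the user's message to a response tone."""
    text = message.lower()
    if any(cue in text for cue in HIGH_DISTRESS_CUES):
        return "high support: validate feelings first, slow the pace, mention professional help"
    if any(cue in text for cue in MILD_DISTRESS_CUES):
        return "supportive: acknowledge the emotion, then offer one concrete coping strategy"
    return "neutral: friendly check-in, ask an open question"

print(select_tone("I feel completely overwhelmed by my thesis"))
# -> high support: validate feelings first, slow the pace, mention professional help
```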
Given that privacy concerns were a recurring theme, transparency in data handling will be essential. Before engaging with the chatbot, users should be explicitly informed that their conversations are anonymous and that no identifiable data will be stored. This reassurance could help mitigate fears surrounding data security and encourage more students to engage with the tool.
The survey also highlights the need for different chatbot functionalities to cater to varying student needs. Some students primarily need self-help strategies to manage stress and loneliness independently, while others require a referral system to guide them toward professional help. Another group of students, particularly those on mental health waiting lists, need interim support until they can see a therapist. To address these different needs, the chatbot should be designed with three core functions:
- Providing psychological support and coping strategies
- The chatbot will offer evidence-based techniques for managing stress and loneliness.
- It will emphasize anonymity and create a non-judgmental space for users to express their concerns.
- Referring students to professional help and university support services
- Users who prefer human interaction will be directed to mental health professionals at TU/e.
- The chatbot will provide information on how to access university support resources.
- Supporting students on waiting lists for professional help
- While students wait for therapy, TU/e's mental health professionals can refer them to the chatbot, which will temporarily offer guidance to help them cope in the meantime.
- The tool will clarify that it is not a substitute for therapy but can provide immediate relief strategies.
To ensure the chatbot meets these objectives, further prototype testing will be necessary. A small-scale user trial will be conducted to gather qualitative feedback on conversational flow, response accuracy, and overall effectiveness. Additionally, the chatbot’s ability to detect and adapt to different emotional states will be evaluated to refine its responsiveness.
The findings from this survey highlight both the limitations and possibilities of AI-driven mental health support. While trust remains a significant barrier, the potential for accessible, anonymous, and always-available support should not be underestimated. By designing a chatbot that prioritizes credibility, privacy, and adaptability, we can create a tool that helps students manage stress and loneliness while complementing existing mental health services. As we move forward, user feedback and iterative development will be crucial in shaping a system that students find genuinely useful.
Product Development: Our own GPT
Typically, developing a chatbot requires extensive training on data sets, refining models and implementing natural language processing (NLP) techniques. This process includes a vast amount of data collection, training and updating of the natural language understanding (NLU).
However, with OpenAI's “Create Your Own GPT,” much of this technical work is abstracted away. Instead of training a model from scratch, this tool allows users to customize an already trained GPT-4 model through instructions, behavioral settings and uploaded knowledge bases. Without the need for coding or AI expertise, it enables users to create a tailored AI assistant such as our mental health chatbot.
What needs to be done for our GPT training (illustrated in the sketch below):
- Behavior and design: Knowing how to guide conversations effectively, ensuring responses match the needs of our users (empathetic, ethical, engaging)
- User-centric thinking: Defining the needs of students seeking mental health support and structuring conversations accordingly.
- Prompt engineering: Determining how the GPT should respond (less solution-oriented, more personal, asking questions).
- Testing.
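To make these behavioral instructions concrete, the sketch below shows how similar instructions could be attached to a GPT-4-class model through the OpenAI Python API. Our actual chatbot was configured through the “Create Your Own GPT” interface rather than code, so the instruction text, model name and helper function here are illustrative assumptions, not our exact configuration.

```python
# Illustrative only: our chatbot was built via OpenAI's "Create Your Own GPT"
# interface, but the same kind of behavioral instructions can be expressed
# as a system prompt when calling the API directly.
from openai import OpenAI

# Hypothetical instruction text mirroring the design goals listed above.
SYSTEM_INSTRUCTIONS = """
You are a supportive mental health chat partner for TU/e students.
- Respond with warmth and empathy; acknowledge feelings before giving advice.
- Ask open questions to understand the problem instead of jumping to solutions.
- Base coping suggestions on evidence-based techniques (e.g. CBT, mindfulness).
- Never diagnose. For severe issues or crisis signals, refer the student to
  professional help (TU/e support services, 113 Zelfmoordpreventie).
- Do not ask for names, addresses, or other identifying details.
"""

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def chat(user_message: str) -> str:
    """Send one user message with the behavioral instructions attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_INSTRUCTIONS},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(chat("I feel really stressed about my upcoming exams."))
```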
Psychological background of the GPT
To make an artificial intelligence able to help and assist young adolescents with their mental health, it needs a psychological framework. This psychological framework can be divided into a few different functions.
- Confidentiality, the chatbot should refrain from exposing personal data to others without the user having given explicit consent.
- Understanding and inquiring, the chatbot should be able to understand the problems the user is describing and be able to inquire further on what is troubling the user.
- Concluding, after the chatbot has gathered enough information on the problems the user has, it should be able to conclude where the problem lies.
- Assessing, when the chatbot identifies a severe mental health problem, it should be able to refer to help at the university or help advance people in waiting lines, so they get in touch with an actual professional faster.
- Referencing, when a solid conclusion of the problem has been made, the chatbot should reference the appropriate (real) person to get in touch with at the university.
- Generate advice, if the user does not want to get in touch with a real person it should generate advice based upon the problem and psychological theories the chatbot has been supplied with.
This pattern requires a lot of psychological background. All steps require at least an understanding of the human brain. When looking at psychological theories, there are a few options that should be analysed.
Diagnostics
To ensure that the GPT effectively supports students seeking mental health assistance, we designed a structured diagnostics process that allows it to determine the user's needs and provide appropriate guidance. The process consists of two key diagnostic questions:
- What does the user want from the GPT? The chatbot identifies whether the user is looking for psychological support, a referral to professional help, or support while on a waitlist.
- What is the user's specific issue? If the user is seeking psychological support or waitlist support, the GPT determines whether their primary concern is stress, loneliness, both, or an unspecified issue. If the user is requesting a referral, the GPT clarifies what type of support they need, such as mental health services, safety assistance, academic support, or social connection.
How the GPT Uses the Diagnostics Process
The chatbot engages in a natural, non-intrusive conversation to assess the user's situation. It asks guiding questions to classify their needs and ensures that it provides the most relevant and effective response.
- If a referral is needed, the GPT directs the user to the correct TU/e service, offering additional resources where applicable.
- If the user needs psychological support, the GPT provides evidence-based coping strategies tailored to their concerns.
- If the user is on a waitlist for professional care, the GPT offers self-help techniques while checking whether they need an alternative referral.
- If a crisis situation is detected, the GPT immediately refers the user to 113 Zelfmoordpreventie and encourages them to seek professional human help.
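As an aid to understanding, the sketch below restates this routing as a small decision tree in plain Python. It is only an illustration of the flow described above: the actual GPT performs this classification conversationally, and the keyword list, function name and referral wording are assumptions rather than part of the handbook.

```python
# Plain-Python illustration of the diagnostics routing described above.
# The GPT performs this classification conversationally; this sketch only
# makes the branching explicit. Keyword lists and referral wording are hypothetical.

CRISIS_KEYWORDS = {"suicide", "self-harm", "hurt myself"}

def route(user_goal: str, issue: str, message: str) -> str:
    """Return the chatbot's high-level action for a classified user."""
    # Crisis detection always takes priority over the normal flow.
    if any(k in message.lower() for k in CRISIS_KEYWORDS):
        return "Refer immediately to 113 Zelfmoordpreventie and professional human help."

    if user_goal == "referral":
        # For referrals, `issue` is the type of support the user needs.
        referrals = {
            "mental health": "TU/e student psychologist",
            "safety": "TU/e confidential advisor",
            "academic": "academic advisor",
            "social": "student associations and community activities",
        }
        return f"Refer to: {referrals.get(issue, 'TU/e general student support')}"

    if user_goal == "psychological support":
        return f"Offer evidence-based coping strategies for {issue} (stress/loneliness)."

    if user_goal == "waitlist support":
        return (f"Offer interim self-help techniques for {issue} "
                "and check whether an alternative referral is needed.")

    return "Ask clarifying questions to determine what the user needs."

# Example: a stressed student on a waiting list for professional care.
print(route("waitlist support", "stress", "I'm waiting for a therapist and feel overwhelmed."))
```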
Techniques
Cognitive-Behavioral Therapy (CBT)
CBT mainly focuses on negative thinking patterns: people suffering from mental health issues can learn new ways of coping with those thinking patterns to improve their mental health. Treatment following CBT can include teaching the patient to recognise their own negative thinking patterns, encouraging the patient to understand the behaviour and motivation of others, or developing greater confidence in the patient's own abilities.
Advantages of using CBT in a chatbot:
- Firstly, this is a rather analytical/general manner of giving therapy, for instance recognizing negative thinking patterns by analyzing the general word usage in messages given to the chatbot.
- Secondly, CBT primarily needs the patient to make a consistent effort in recognising their own thinking patterns and coping with them themselves.
- Thirdly, CBT gives rather short-term, general mental health assistance, which means the chatbot does not need to save long-term data, which decreases the possibility of leaking sensitive data, and it allows the chatbot to give immediate advice after a conversation.
Disadvantages of using CBT in a chatbot:
- Firstly, CBT requires the student to make a consistent effort to keep analysing and changing their own thought patterns.
- Secondly, CBT falls short in tackling more severe or deeper mental health issues; however, such situations should not be handled by the chatbot directly, but rather be referred to the professional help the university also provides.
When looking at CBT in chatbots, it can be concluded that since CBT is very structured and relatively safe to use on patients, a chatbot should make use of CBT to help patients.
Acceptance and Commitment Therapy (ACT)
ACT is rather similar to CBT; however, where CBT focuses more on changing the patient's thought patterns, ACT focuses on helping the patient accept their emotional state and move forward from there. Treatments using ACT can include encouraging the patient not to dwell on past mistakes but rather live in the moment, and encouraging the patient to take steps towards their goals despite possible discomfort.
Advantages of using ACT in a chatbot:
- Firstly, ACT helps patients accept their feelings; it is therefore a very general form of therapy and helps with a lot of different mental health issues.
- Secondly, since ACT can be used for a lot of different mental health issues, it provides a lot of long-term help for the patient.
Disadvantages of using ACT in a chatbot:
- Firstly, ACT is not as structured as CBT and requires a lot of conversation to understand the patient's emotions and thoughts; furthermore, the chatbot would need a lot more conversation to help the patient come to terms with how they are feeling and accept it.
- Secondly, accepting emotions can be very difficult for people seeking relief and, in worse cases, might push people towards resigning themselves to their bad mental health state.
- Thirdly, ACT focuses on acceptance and not on symptom reduction. The patient will therefore not see any results in the short term.
When looking at ACT in chatbots, it can be concluded that chatbots in therapy should generally not use ACT, due to its complexity and the danger of pushing a student down the wrong path. There are some parts a chatbot could use: for example, regarding exam stress, a chatbot could use ACT to help a patient come to terms with how normal exam stress is. This will still be more complex for a chatbot to learn, though.
Positive Psychology Therapy (PPT)
PPT is an approach to therapy that focuses on increasing well-being and general happiness rather than treating mental illness. This is done by focusing on building positive emotions, engagement, relationships, meaning and achievements. Treatments using PPT can include encouraging the patient to write down what they are grateful for daily, encouraging the patient to engage in acts of kindness to boost their happiness, or visualizing the patient's ideal self to increase motivation.
Advantages of using PPT in chatbots:
- Firstly, PPT boosts the patient's well-being and happiness, helping patients in the short term.
- Secondly, PPT can also be very helpful to people who do not struggle with mental health issues, since it also helps a lot with personal growth.
- Thirdly, PPT can help create long-term resilience to mental health issues, by teaching coping strategies and improving a patient's self-image.
Disadvantages of using PPT in chatbots:
- Firstly, PPT does not address deep psychological issues, but as mentioned for CBT, this is not a problem: when the chatbot encounters deep psychological issues it should refer to a human professional rather than give advice itself.
- Secondly, PPT is less structured than regular therapy, so the chatbot will be required to make more conversation in order to help a patient with their self-image.
- Thirdly, PPT requires the patient to consistently make an effort in helping themselves.
In conclusion, PPT can be a very useful tool for chatbots, since it is safe to apply and helps a lot with personal growth, which is especially useful for students and young people. PPT can require more conversation, which can be harder for a chatbot, but general PPT strategies to improve the patient's self-image can be applied broadly. Therefore, it is advised to include PPT in a chatbot.
Mindfulness-Based Stress Reduction (MBSR)
MBSR is a therapeutic approach that uses mindfulness meditation to help with mental and physical health. MBSR teaches patients to be more aware of their thoughts, emotions and senses, with the goal of increasing awareness of the present moment. Treatments using MBSR can include meditation with different focuses, either on thoughts and emotions or on mindfulness. This can result in decreased stress and anxiety and improved focus and concentration.
Advantages of using MBSR in chatbots:
- Firstly, MBSR works very well with reducing stress, anxiety and depression.
- Secondly, MBSR encourages the patient to have a better relationship with his or her emotions, teaching the patient to accept them rather than suppress or avoid them.
- Thirdly, MBSR can significantly help with sleep, especially under stress, which has all sorts of benefits, such as improved concentration.
- Fourthly, MBSR teaches the patient exercises to decrease stress by themselves.
Disadvantages of MBSR in chatbots:
- Firstly, MBSR requires a lot of commitment on the patient's part to participate in these meditation exercises and to do them consistently.
- Secondly, MBSR does not give a lot of short-term benefits, as dealing with mental health issues through meditation takes practice to use effectively.
- Thirdly, MBSR does not work by itself for severe mental health issues; though it will help, it will not resolve the issue on its own.
In conclusion, using MBSR in a chatbot is recommended. It is a relatively safe therapeutic approach, focused on long-term improvements in stress, anxiety, depression and concentration, which is very applicable to the target group of this paper's chatbot.
Our current GPT:
https://chatgpt.com/g/g-67b84edc9194819182a10a0dff7371c5-your-mental-health-chat-partner
Research ethics / concerns
Privacy:
Privacy is one of the concerns that comes with chatbots in general, but when personal information is involved, as in therapy, this concern grows, as already discussed in the analysis of the survey. Users may be hesitant to use therapy chatbots because they are afraid of data breaches, or of the data simply being used for anything other than its purpose, such as being sold for marketing reasons.
Any licensed therapist has to sign and adhere to a confidentiality agreement, which states that the therapist will not share the vulnerable information of patients anywhere except where appropriate. For AI this is more difficult: data will somehow have to be saved and collected for the chatbot to become smarter and learn more about the patient.
Privacy encompasses multiple concerns, including identity disclosure, where the most important requirement is that none of the collected data should in any way be traceable to the patient; this corresponds to the notion of anonymity.
There are also concerns of attribute disclosure and membership disclosure, which go beyond anonymity: if sensitive information becomes available to or is found by others, it can be linked to patients even when they are anonymous, and the data can be further used to make assumptions about them.
Because the chatbot for this project is made by creating and training a GPT, privacy concerns arise: by using the chatbot, data on personal topics and experiences is fed into ChatGPT. While building a chatbot from the ground up to fully avoid these concerns is unfortunately out of scope for this course, actions can be taken to mitigate the privacy concerns. One such measure is to ensure private data such as names and addresses are both not fed into the tool (by warning users) and not asked for by the tool (by training the GPT). Another, which will be done to protect research participants, is to ensure testers of the chatbot do so on an account provided by the research team, and not on their personal account.
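As an illustration of the first mitigation, the sketch below shows a hypothetical pre-submission check that warns a user when a message appears to contain identifying details. It is a simplification: the patterns, function name and example message are assumptions for illustration, real PII detection is considerably harder, and in this project the mitigation actually relies on user warnings and GPT instructions rather than code.

```python
import re

# Hypothetical sketch of a pre-submission check that warns users when a
# message appears to contain identifying details (e-mail addresses, phone
# numbers, street addresses). Patterns are deliberately simple.

PII_PATTERNS = {
    "e-mail address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"\b(\+?\d[\d\s-]{8,}\d)\b"),
    "street address": re.compile(r"\b\w+(straat|laan|weg|plein)\s*\d+\b", re.IGNORECASE),
}

def check_for_pii(message: str) -> list[str]:
    """Return the kinds of personal data that seem to be present in a message."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(message)]

msg = "You can mail me at student@example.com, I live at Hoofdstraat 12."
found = check_for_pii(msg)
if found:
    print("Warning: your message seems to contain:", ", ".join(found))
```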
Deception:
The advancement of sophisticated language models has resulted in chatbots that are not only functional but also remarkably persuasive. This creates a situation where the distinction between human and machine blurs, and users can easily forget they are interacting with an algorithm. This is where the core of the deception problem lies.
The discussion surrounding deception in technology has been going on for a while and has many dimensions. What precisely constitutes deception? Is it simply concealing the true nature of an entity, or is there more to it? Is intent relevant? If a chatbot is designed to assist but inadvertently misleads, does that still qualify as deception? Is deception always negative? In certain contexts, such as specific forms of entertainment, a degree of deception might be considered acceptable.
The risk of deception is particularly pronounced among vulnerable user groups, such as the elderly, children, or, in the case of the therapy chatbot, individuals with mental health conditions. These groups may be less critical and more susceptible to the persuasive language of chatbots.
The use of chatbots in therapy shows a scenario where deception can have consequences. While developers' intentions are positive, crucial considerations must be kept in mind. The first is the risk of false empathy: chatbots can simulate empathetic responses, but they lack genuine understanding of human emotions, which can foster a false sense of security and trust in patients. The second is the danger of over-reliance: vulnerable users may become overly dependent on a chatbot for emotional support, potentially leading to isolation from human interaction. The third is the potential for misdiagnosis or incorrect advice: even with the best intentions, chatbots can provide inaccurate diagnoses or inappropriate advice, with serious implications for patient health.
To mitigate these risks, it is essential that users are consistently reminded that they are interacting with an algorithm. This can be done through clear identification, regular reminders and education.
Trust:
‘One significant finding is that trust in a chatbot arises cognitively while trusting a human agent is affect-based’ (‘Can we trust a chatbot like a physician? A qualitative study on understanding the emergence of trust toward diagnostic chatbots’).
Trust in systems has to be taught, learned and thought about; it does not simply arise the way it does in interactions between humans.
Even before the initial contact, trust in the system plays an important role. In existing research and in our own survey, we have seen that attitudes toward chatbots matter: subjective norms, perceived risk, and beliefs about the system's structural environment may influence the willingness to use the system and the subsequent trust-building process. Trust is more likely to arise when using the chat tool is socially legitimized and users believe such a system is embedded in a safe and well-structured environment.
People tend to trust other people. What is therefore important for trust in the system is legitimation, not just socially, but also from professionals. Having the system recommended by professionals increases the likelihood that users trust the system is okay to use.
Responsibility and liability:
Another concern that arises in AI therapy is responsibility and liability. If the patient using the chatbot does something that is morally or legally wrong, but that was suggested by the chatbot, who is in the wrong? A human who has full agency over their actions would normally be responsible for those actions; however, they have put their trust in this online tool, which could potentially give them ‘wrong’ information. Whether the chat tool can be regarded as an agent matters here: given that AI operates on programmed algorithms and data, it lacks genuine autonomy, making it impossible to hold it morally responsible in the same way as a human. Consequently, a substantial portion of the responsibility shifts to the developers, who are accountable for the AI's programming, the data it is trained on, and the potential repercussions of its advice. Furthermore, when therapists incorporate AI as a tool, they also bear responsibilities, including understanding the AI's limitations and applying their professional judgment. The legal framework surrounding AI liability is still developing, necessitating clear guidelines and regulations to safeguard both patients and developers. In essence, AI therapy introduces a complex web of responsibility, where patient accountability is nuanced by the developers' and therapists' roles, and the legal system strives to keep pace with rapid technological advancements.
Empathy and professional boundaries
While chatbots can mimic empathetic responses, they lack the capacity for genuine human empathy. This distinction is crucial in therapy, where authentic emotional connection forms the foundation of healing. The human ability to create deep, empathetic bonds, to truly understand and share in another's emotional experience, remains beyond the reach of AI systems.
The difference lies in the nature of the response: AI operates on learned patterns and algorithms, while a human therapist, even within ethical and professional guidelines, uses intuition, lived experience, and an understanding of human complexity. Though AI can process and apply moral frameworks, it cannot navigate the moral dilemmas faced by clients with the same level of judgment as a human.
Furthermore, human therapists can adapt in a way that AI cannot replicate. They can adjust their approach, respond to shifts in a client's emotional state, and engage in spontaneous, intuitive interactions. Moreover, human therapists communicate through non-verbal cues like body language, facial expressions, and shifts in tone, showing vulnerability and further strengthening trust in a way that a screen simply cannot. These human qualities are essential for creating a safe and supportive therapeutic environment, and they represent the value of human connection in the realm of mental health.
Product Evaluation
User interviews
Interview guide
Conducting interviews was a vital aspect of our research, as they facilitate the collection of qualitative data, which is important for understanding the subjective experiences of users. Collecting qualitative data about user experiences and thoughts captures in-depth insights. Presenting participants with the first prototype of our chatbot allowed them to engage with it firsthand, which in turn allowed us to gain feedback on usability, credibility and trustworthiness. Furthermore, the interviews provided an opportunity to gather broader user opinions on the chatbot’s interface, tone and structure. The interviews were conducted to achieve two primary objectives: first, exploring the subjective experiences of users with the chatbot; second, assessing users’ subjective opinions about the chatbot’s interface, conversational tone and overall structure.
The interview guide (which can be found in the appendix) was developed to provide the interviewers with structured yet flexible questions. Since multiple interviewers were involved in conducting the interviews, it was crucial to create somewhat structured questions to ensure consistency in the collected data, which is why we opted for a semi-structured guide. While the guide provided structured questions, it also left room for participants to elaborate on their thoughts and experiences. The intention was to keep the interviews as natural and open as possible, hence almost all questions were phrased in an open-ended way.
The interview questions were developed based on key themes that emerged from the analysis of the initial survey. The following topics formed the core structure of the interviews:
- Chatbot experience and usability
- Empathy and conversational style
- Functionality and user needs
- Trust and credibility
- Privacy and data security
- Acceptability and future use
The first topic was started after the participant had interacted with the chatbot or read the manuscript and covered questions about first impressions of the chatbot. Participants were asked what they found easy and difficult, whether they felt comfortable sharing personal feelings and thoughts with the chatbot and whether they encountered any issues or frustrations while using the chatbot or reading the manuscript. The second topic revolved around empathy and the chatbot's conversational style. We asked participants whether they felt the chatbot's responses were empathetic, how the chatbot acknowledged their emotions and whether this was done effectively, how natural the chatbot's responses seemed and whether they would change anything about the interaction if possible. Next, participants were asked about the chatbot's main features, what features might be missing or how they could be improved, which of the three functions they found most useful and whether they would add any additional functionalities. Participants were then asked about trust and credibility, including privacy and data security concerns. The last topic revolved around acceptability and future use: we asked about potential barriers that could prevent them from using the chatbot and whether they thought it would be a valuable tool for TU/e students.
All interviews followed the same structured format to ensure consistency across sessions. First, an introduction was given in which the research project was briefly introduced, the purpose of the interview was explained and it was, once again, highlighted to participants that their participation was completely voluntary. Basic demographic questions were asked to understand the participant's context. Afterwards, participants were given instructions on how to interact with the chatbot or read the manuscript. After interacting with the chatbot or reading the manuscript, the core topics were discussed in a flexible, conversational manner, allowing participants to share their thoughts freely. At the end of the interview, participants were asked whether they had any additional insights and were then thanked for their time and contribution.
Each question in the interview guide was carefully designed to align with the overarching research question, and we aimed to give each main topic an equal amount of time and depth. By structuring the questions in this manner, we ensured that the collected data directly contributed to answering our research objectives.
Measures taken to maintain ethical integrity included:
- Informed consent – participants were provided with consent forms before the interview, outlining the purpose of the study, their rights and privacy information. By providing participants with the consent form in advance of the interview, we allowed them to read it in their own time, at their own pace.
- Anonymity – all responses were anonymized to ensure confidentiality. After the research project, all participant data will be removed from personal devices.
- Participants were reminded before and during the interview that they could withdraw from the study at any point without any repercussions.
Conducting the interviews
We employed convenience sampling due to ease of access and feasibility within the study's time frame. While this approach allowed us to gather perspectives quickly, it may limit the diversity and generalizability of our findings.
See the demographics below:
Age | Gender | Education |
---|---|---
21 | Male | Biomedical Technologies |
19 | Female | Psychology and Technology |
22 | Female | Bachelor Biomedical Engineering |
21 | Female | (Double) Master Biomedical Engineering and Artificial Intelligence Engineering Systems |
21 | Male | Psychology and Technology |
21 | Female | Data Science |
22 | Male | Biomedical Technologies |
25 | Male | (Master) Applied Mathematics |
22 | Female | (Master) Architecture |
20 | Male | Computer Science & Engineering |
18 | Male | Electrical Engineering |
20 | Female | Industrial Design |
Each interviewer sampled two participants and conducted two interviews, resulting in a total of 12 interviews. The interviews were either conducted online or in-person. All interviews were recorded for research purposes and recordings were deleted after transcription was finished.
Interview Analysis
Approach
After transcribing the interviews, we used thematic analysis to analyze the interview data. This qualitative method allows us to identify, analyze and interpret patterns within qualitative data. Thematic analysis is especially well suited for this study as it enables us to capture users' experiences, perceptions and opinions about the chatbot in a structured yet flexible way.
Our analysis will follow Braun & Clarke's (2006) six-step framework for thematic analysis. The first step involves familiarizing ourselves with the data: all interviews will be transcribed and all transcripts will be read multiple times in order to become familiar with the content. Next, the data will be systematically coded by identifying significant phrases and patterns; codes will be assigned to relevant portions of text related to the main topics covered during the interviews. The codes will then be reviewed and grouped into potential themes based on recurring patterns, examining the relationships between different codes to determine broader themes. Themes will then be refined, defined and named: each theme will be clearly defined in order to highlight its significance in relation to the research question, and subthemes may also be identified to provide additional depth. Finally, the findings will be contextualized within the research objectives.
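Purely as an illustration of the bookkeeping involved in steps 2-4 (attaching codes to excerpts and grouping codes into candidate themes), the sketch below shows one way this could be organised in Python. The participant IDs, codes and excerpts are invented placeholders, not actual interview data, and the script is not part of our procedure, which was carried out manually.

```python
from collections import defaultdict

# Invented placeholder data, NOT actual interview material: (participant, code, excerpt).
coded_excerpts = [
    ("P01", "robotic tone", "The answers felt a bit robotic to me."),
    ("P02", "wants follow-up questions", "It could ask me more about how I feel."),
    ("P03", "privacy worry", "I would not share much if it stores my data."),
]

# Hypothetical grouping of codes into candidate themes (steps 3-4 of the framework).
code_to_theme = {
    "robotic tone": "Emotional support",
    "wants follow-up questions": "Functionality",
    "privacy worry": "Ethical concerns and data security",
}

# Collect excerpts per candidate theme so each theme can be reviewed and refined.
themes = defaultdict(list)
for participant, code, excerpt in coded_excerpts:
    themes[code_to_theme[code]].append((participant, code, excerpt))

for theme, items in themes.items():
    print(f"{theme} ({len(items)} coded excerpt(s))")
    for participant, code, excerpt in items:
        print(f"  [{participant}] {code}: {excerpt}")
```

In practice the coding and theme refinement remain interpretive steps done by the researchers; such a script would only keep the mapping between codes, excerpts and themes consistent across the twelve transcripts.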
Thematic Analysis
Through thematic analysis, we identified five recurring themes. Overall, our interviews revealed that few participants had prior experience with online mental health resources such as websites, apps or forums. Additionally, participants had no experience with the TU/e resource system.
The first theme is functionality, which covers useful current features as well as areas for improvement. Most participants were positive about all three functionalities, appreciating the structured responses and ease of use. However, they noted that conversations felt 'robotic' and highlighted that more follow-up questions would deepen the conversation, increase engagement and create a more dynamic dialogue. Additionally, recalling previous sessions could make conversations feel more continuous and supportive. Another improvement area was the chatbot's reliance on bullet-point responses, which contributed to its impersonal nature; reducing this reliance could create a more human-like conversation.
Perceived effectiveness is the second theme that emerged from the interviews, focusing on how well the chatbot met users' needs. Participants generally found the AI useful for practical support, such as exam preparation and stress management strategies. Many were pleasantly surprised by the chatbot's ability to generate detailed, actionable plans with step-by-step instructions that are easy to follow. However, when it came to providing deeper emotional support, participants found the chatbot lacking. While it could acknowledge emotions, it did not fully engage in empathetic dialogue. As a result, participants viewed the chatbot as a helpful supplementary tool but not as a substitute for human mental health professionals. Its ease of use was appreciated, but its lack of human warmth remains a drawback.
The next theme revolves around emotional support. While some participants found the chatbot supportive, others thought it felt robotic. The chatbot was able to recognize and validate users’ emotions, but its engagement with deeper emotional issues was limited. While the chatbot generally used natural and comforting language, some participants noticed inconsistencies in its tone, which reduced its perceived warmth and overall effectiveness. These inconsistencies made interactions feel less genuine, highlighting the need for improved emotional intelligence and conversational flow. Despite these limitations, the chatbot was still perceived as a supportive tool, showing support through listening and responding in an empathetic way.
The fourth theme identified was trust and the credibility of responses. Participants emphasized the need for greater transparency in how the chatbot generated its responses. Many expressed a desire for the chatbot to reference psychological theories or scientific research to enhance its credibility. The balance between scientific accuracy and natural conversation was seen as crucial: while explanations could strengthen trust, they should be communicated in an accessible and engaging manner. Some participants also questioned whether the chatbot's advice was evidence-based, underlining the need for clearer sourcing of information.
The final theme covers ethical concerns and data security, which emerged as significant issues. Participants expressed strong concerns regarding data privacy, particularly the possibility of their information being stored or shared. These concerns were a barrier to engagement, as users were hesitant to share deeper emotional thoughts due to fears of insufficient anonymity. The reliance on third-party platforms, such as OpenAI, increased these concerns. Participants suggested that minimizing dependence on external AI providers and ensuring robust data protection measures could alleviate some of these concerns and increase trust in the chatbot.
In summary, our findings suggest that while the chatbot serves as a valuable and accessible tool for mental health support, it remains an interim solution rather than a replacement for professional help. Participants appreciated its practical assistance but found its emotional engagement limited. Improvements in conversational flow, personalization and response transparency could increase the chatbot's effectiveness. Additionally, addressing privacy concerns through stronger data security measures would further increase user trust. With these improvements, the chatbot could become a more reliable and empathetic support system for students seeking mental health assistance.
Differences our model vs. ChatGPT
This section compares the outputs of the regular ChatGPT with those of our adjusted model. We tested this by entering the same user scripts into the standard ChatGPT and comparing its responses with those of our model.
One of the main advantages of our model over the regular one is that we trained it specifically for TU/e students. This means that it can directly suggest professional help within the university, which is easily accessible and reachable for students. While the original ChatGPT is able to provide users with specific referrals to professionals within the university, this requires much more specific prompting and questioning from the user. This is not ideal, especially since many students are not aware of all the resources the university has to offer; when the chatbot suggests seeking help within the university, students are made aware of this resource network. Another key advantage of our model is its ability to provide more effective suggestions for stress-related issues. The suggestions the improved model gave, based on researched and proven techniques, were more relevant to the user's problem, less obvious (increasing the chance the user had not tried them yet), and better explained in terms of their effectiveness. These improvements resulted from the additional training we did and the integration of research-backed therapies and techniques. Lastly, since our model was designed specifically for stress- and loneliness-related problems, it was quicker in determining the user's exact problem, as it could assume certain things before the conversation even started, and it was able to give more suggestions specifically related to these problems.
However, both models asked roughly the same number of follow-up questions, meaning that they need the same amount of information before offering a possible solution or piece of advice. Even though we tried to train the model to ask more questions before giving advice, this remains an area for improvement. This is due to the nature of ChatGPT, which tends to start answering immediately instead of first listening the way a human would.
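As a side note on how such a comparison could be made repeatable, the sketch below feeds the same user script to a plain model and to a variant carrying system instructions, using the OpenAI Python client. It is only an assumed setup: the model name, system prompt and user script are placeholders, and our actual comparison was done by entering the scripts manually into the standard ChatGPT and our adjusted model.

```python
# Illustrative sketch only: the model name, system prompt and user script are placeholders,
# not our actual chatbot configuration or test data.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You support TU/e students with stress- and loneliness-related problems. "
    "First ask clarifying questions, then suggest research-backed techniques and, "
    "where appropriate, refer the student to TU/e support services."
)

user_script = "I can't sleep before exams and I feel like I'm going to fail everything."

def ask(messages):
    # Single non-streaming chat completion; returns the assistant's reply text.
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

baseline_answer = ask([{"role": "user", "content": user_script}])
adjusted_answer = ask([
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": user_script},
])

print("--- Baseline output ---\n", baseline_answer)
print("--- System-prompted variant ---\n", adjusted_answer)
```

Collecting both outputs side by side in this way would make it easier to check, for example, how many follow-up questions each model asks before it starts giving advice.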
Expert interviews
TU/e Academic advisor
We conducted an in-depth interview with student advisor Monique Jansen and presented her with our four sample chatbot interactions. Monique provided valuable, experience-based insights that helped us better align our concept with the real needs and context of students at TU/e.
Referral is key
When asked which of the three chatbot functions she thought was most important, Monique emphasized the referral function. She noted that TU/e offers a wealth of mental health and wellness resources, but they are often underutilized due to poor visibility.
"I think TU/e offers a lot more than people realize.. making it accessible is the hard part."
Monique recognized the potential of such a chatbot to add value to TU/e's online ecosystem. However, she cautioned that implementation is key: the tool must function reliably and reduce friction for struggling students.
"If you can make the existing offerings clearer, it would already be a big win."
Discoverability and integration
A major concern was the findability of the chatbot. Monique emphasized that building a tool is one thing, but making sure students can find and use it effectively is just as important.
“Where would students go to find this chatbot? That's your first step.”
Tone and personalization
Although Monique found the chatbot's language generally natural, she felt the responses were too direct and lacked the kind of empathetic listening that students need. She recommended a more organic flow of conversation, especially at the beginning of an interaction.
"Don’t start with options or categories, but ask for a story, and let the chatbot do the work of interpreting it."
She recommended using the “LSD” method (Listening, Summarizing, and Redirecting) that counselors use: starting with attentive listening before steering toward support options. This would prevent students from feeling forced into predetermined paths and allow for more authentic conversations.
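As an illustration only, the snippet below sketches how this LSD flow could be written into a system prompt. The wording is our own hypothetical example and is not part of the current chatbot configuration.

```python
# Hypothetical system-prompt fragment sketching the LSD flow described above.
# The wording is our own illustration, not the chatbot's actual configuration.
LSD_SYSTEM_PROMPT = """
You support TU/e students with stress- and loneliness-related problems.
Follow the LSD flow in every conversation:
1. Listen: open by inviting the student to tell their story; do not offer options or categories yet.
2. Summarize: reflect the student's situation and feelings back in your own words and check
   whether your summary is accurate.
3. Redirect: only after the student confirms the summary, gently point towards coping strategies
   or TU/e support services that match what they described.
"""

if __name__ == "__main__":
    print(LSD_SYSTEM_PROMPT)
```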
Content must reflect students' lives more
Monique noted that the current sample chats felt too general. She stressed that the scenarios should reflect specific student challenges, such as BSA stress, difficult courses, housing or the experiences of international students, to really capture students' interest.
Safety
According to Monique, a major "red flag" would be the chatbot mishandling or failing to recognize urgent cases.
"A lonely student who hasn’t left the house for weeks? That needs to be picked up on. Otherwise, you risk missing something serious."
She recommended linking to external services like "moetiknaardedokter.nl", as well as emergency lines like 113 or 112, wherever appropriate.
Privacy
Although Monique personally had few privacy concerns, she acknowledged that TU/e does not currently use chat systems for a reason and that such a tool should be subject to proper data management and ethical review.
Stress reduction methods
Monique responded positively to the use of psychological methods such as CBT, Positive Psychology and MBSR. She encouraged the inclusion of references and links to additional sources so students could look into the theory behind the advice if they wanted to. Monique was also very curious about how a professional psychologist would review this topic.
End remark
“I think it’s a very fun and promising idea.”
TU/e Student Psychologist
In order to get a professional's opinion on the chatbot (particularly on the quality of its advice), we had a conversation with Aryan Neele, a student psychologist at TU/e. We wanted to let this conversation flow naturally, so we prepared some points of discussion and questions beforehand, but we kept the structure loose so as not to limit the feedback to areas we had thought of in advance.
To start, we informed Ms. Neele about our project: the problem we identified, our chatbot, its primary purposes, and the material used to train the chatbot (so far).
The first points she raised were:
- Privacy
- How to identify the severity of a user's problem?
After this, we showed Ms. Neele the chatbot and typed in an example of a student struggling with exam stress and fear of failure.
Key points:
- Explain why the user does certain exercises
- Better than expected; good that the chatbot normalizes
- Missing a piece of information (facial expressions, physical reactions) -> future research: how to tackle that?
- Don't call them meditation exercises, but attention exercises
- ...
Target audience question
Usefulness for TU/e students question
Issues:
- The student mentions 'I don't know that I think' and is later asked to analyze their thoughts
-
Discussion
Recommendations
Conclusion
References
Walsh, C. G., Xia, W., Li, M., Denny, J. C., Harris, P. A., & Malin, B. A. (2018). Enabling open-science initiatives in clinical psychology and psychiatry without sacrificing patients' privacy: Current practices and future challenges.
State of the art (25 articles): (3/4 pp, Maarten and Bridget 6)
Mila:
- Zhang, J., & Chen, T. (2025). Artificial intelligence based social robots in the process of student mental health diagnosis. Entertainment Computing, 52, 100799. https://doi.org/10.1016/j.entcom.2024.100799
- Eltahawy, L., Essig, T., Myszkowski, N., & Trub, L. (2023). Can robots do therapy?: Examining the efficacy of a CBT bot in comparison with other behavioral intervention technologies in alleviating mental health symptoms. Computers in Human Behavior: Artificial Humans, 2(1), 100035. https://doi.org/10.1016/j.chbah.2023.100035
- Jeong, S., Aymerich-Franch, L., Arias, K. et al. Deploying a robotic positive psychology coach to improve college students’ psychological well-being. User Model User-Adap Inter 33, 571–615 (2023). https://doi-org.dianus.libr.tue.nl/10.1007/s11257-022-09337-8
- Edwards, A., Edwards, C., Abendschein, B., Espinosa, J., Scherger, J. and Vander Meer, P. (2022), "Using robot animal companions in the academic library to mitigate student stress", Library Hi Tech, Vol. 40 No. 4, pp. 878-893. https://doi.org/10.1108/LHT-07-2020-0148
Sophie:
Velastegui, D., Pérez, M. L. R., & Garcés, L. F. S. (2023). Impact of Artificial Intelligence on learning behaviors and psychological well-being of college students. Salud, Ciencia y Tecnologia-Serie de Conferencias, (2), 343.
This article assesses how interaction with technology affects college students' well-being. Educational technology designers must integrate psychological theories and principles in the development of AI tools to minimize the risks to students' mental well-being.
Lillywhite, B., & Wolbring, G. (2024). Auditing the impact of artificial intelligence on the ability to have a good life: Using well-being measures as a tool to investigate the views of undergraduate STEM students. AI & society, 39(3), 1427-1442.
This article investigates the impact of artificial intelligence on the ability to have a good life. They focus on students in the STEM majors. The authors found a set of questions that might be good starting points to develop an inventory of students’ perspectives on the implications of AI on the ability to have a good life.
Pittman, M., & Reich, B. (2016). Social media and loneliness: Why an Instagram picture may be worth more than a thousand Twitter words. Computers in human behavior, 62, 155-167.
This article examines if there is a difference between image-based social media use and text-based media use regarding loneliness. The results suggest that loneliness may decrease, while happiness and satisfaction with life may increase with the usage of image-based social media. Text-based media use appears ineffectual. The authors propose that this difference may be due to the fact that image-based social media offers enhanced intimacy.
O’Day, E. B., & Heimberg, R. G. (2021). Social media use, social anxiety, and loneliness: A systematic review. Computers in Human Behavior Reports, 3, 100070.
This article examines the broad aspects of social media use and its relation to social anxiety and loneliness. It provides a better understanding of how more socially anxious and lonely individuals use social media. Loneliness is a risk factor for problematic social media use, and social anxiety and loneliness both have the potential to put people at a risk of experiencing negative consequences as a result of their social media use. More research needs to be done to examine the causal relations.
Bridget:
Socially Assistive Robotics combined with Artificial Intelligence for ADHD. (2021, 9 January). IEEE Conference Publication | IEEE Xplore. https://ieeexplore.ieee.org/abstract/document/9369633
This paper presents a patient-centered therapy approach using the Pepper humanoid robot to support children with attention deficit. Pepper integrates a tablet for interactive exercises and cameras to capture real-time emotional data, allowing for personalized therapeutic adjustments. The system, tested in collaboration with a diagnostic center, enhances children's engagement by providing a non-intimidating robotic intermediary.
BetterHelp - Get started & Sign-Up today. (n.d.). https://www.betterhelp.com/get-started/
BetterHelp is an online therapy platform that eases access to psychological help.
'Er zijn nog 80.000 wachtenden voor u' ("There are still 80,000 people waiting ahead of you") | Zorgvisie
Fiske, A., Henningsen, P., & Buyx, A. (2019). Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology, and Psychotherapy. Journal Of Medical Internet Research, 21(5), e13216. https://doi.org/10.2196/13216
This paper assesses the ethical and social implications of translating embodied AI applications into mental health care across the fields of Psychiatry, Psychology and Psychotherapy. Building on this analysis, it develops a set of preliminary recommendations on how to address ethical and social challenges in current and future applications of embodied AI.
Kuhail, M. A., Alturki, N., Thomas, J., Alkhalifa, A. K., & Alshardan, A. (2024). Human-Human vs Human-AI Therapy: An Empirical Study. International Journal Of Human-Computer Interaction, 1–12. https://doi.org/10.1080/10447318.2024.2385001
This study examines mental health professionals' perceptions of Pi, a relational AI chatbot, in early-stage psychotherapy. Therapists struggled to distinguish between human-AI and human-human therapy transcripts, correctly identifying them only 53.9% of the time, while rating AI transcripts as higher quality on average. These findings suggest that AI chatbots could play a supportive role in mental healthcare, particularly for initial problem exploration when therapist availability is limited.
Holohan, M., & Fiske, A. (2021). “Like I’m Talking to a Real Person”: Exploring the Meaning of Transference for the Use and Design of AI-Based Applications in Psychotherapy. Frontiers in Psychology, 12. https://doi.org/10.3389/fpsyg.2021.720476
This article explores the evolving role of AI-enabled therapy in psychotherapy, particularly focusing on how AI-driven technologies reshape the concept of transference in therapeutic relationships. Using Karen Barad’s framework on human–non-human relations, the authors argue that AI-human interactions in psychotherapy are more complex than simple information exchanges. As AI-based therapy tools become more widespread, it is crucial to reconsider their ethical, social, and clinical implications for both psychotherapeutic practice and AI development.
Maarten:
Humanoid Robot Intervention vs. Treatment as Usual for Loneliness in Older Adults
This study investigates the effectiveness of humanoid robots in reducing loneliness among older adults. Findings suggest that interactions with social robots can positively impact mental health by decreasing feelings of loneliness.
Citation:
Bemelmans, R., Gelderblom, G. J., Jonker, P., & De Witte, L. (2022). Humanoid robot intervention vs. treatment as usual for loneliness in older adults: A randomized controlled trial. Journal of Medical Internet Research [1]
Enhancing University Students' Mental Health under Artificial Intelligence: A Narrative Review
This review discusses how AI-based interventions can be as effective as traditional therapy in managing stress and anxiety among students, offering convenience and less stigma.
Citation:
Li, J., Wang, X., & Zhang, H. (2024). Enhancing university students' mental health under artificial intelligence: A narrative review. LIDSEN Neurobiology, 8(2), 225. [2]
Artificial Intelligence Significantly Facilitates Development in the Field of College Student Mental Health
The article explores key applications of AI in student mental health, including risk factor identification, prediction, assessment, clustering and digital health.
Citation:
Yang, T., Chen, L., & Huang, Y. (2024). Artificial intelligence significantly facilitates development in the field of college student mental health. Frontiers in Psychology, 14, 1375294. [3]
A Robotic Positive Psychology Coach to Improve College Students' Wellbeing
This study examines the use of a social robot coach to offer positive psychological interventions to students, finding significant improvements in psychological well-being and mood.
Citation:
Jeong, S., Aymerich-Franch, L., & Arias, K. (2020). A robotic positive psychology coach to improve college students' well-being. arXiv preprint arXiv:2009.03829. [4]
Potential Applications of Social Robots in Robot-Assisted Interventions
This research discusses how social robots can be integrated into interventions to alleviate symptoms of anxiety, stress and depression by increasing the ability to regulate emotions.
Citation:
Winkle, K., Caleb-Solly, P., Turton, A., & Bremner, P. (2021). Potential applications of social robots in robot-assisted interventions. International Journal of Social Robotics, 13, 123–145. [5]
Exploring the Effects of User-Agent and User-Designer Similarity in Virtual Human Design to Promote Mental Health Intentions for College Students
The study examines how the design of virtual people can affect their effectiveness in promoting conversations about mental health among students.
Citation:
Liu, Y., Chen, Z., & Wu, D. (2024). Exploring the effects of user-agent and user-designer similarity in virtual human design to promote mental health intentions for college students. arXiv preprint arXiv:2405.07418. [6]
Marie:
Artificial intelligence in mental health care: a systematic review of diagnosis, monitoring and intervention applications
This article reviews 85 relevant studies in order to find information about the application of AI in mental health in the domains of diagnosis, monitoring and intervention. It presents the methods most frequently used in each domain as well as their performance.
-> citation:
Cruz-Gonzalez, P., He, A. W.-J., Lam, E. P., Ng, I. M. C., Li, M. W., Hou, R., Chan, J. N.-M., Sahni, Y., Vinas Guasch, N., Miller, T., Lau, B. W.-M., & Sánchez Vidaña, D. I. (2025). Artificial intelligence in mental health care: a systematic review of diagnosis, monitoring, and intervention applications. Psychological Medicine, 55, e18, 1–52 https://doi.org/10.1017/S0033291724003295
An Overview of Tools and Technologies for Anxiety and Depression Management Using AI
This article evaluates the utilization and effectiveness of AI applications in managing symptoms of anxiety and depression by conducting a comprehensive literature review. It identifies current AI tools, analyzes their practicality and efficacy, and assesses their potential benefits and risks.
-> citation:
Pavlopoulos, A.; Rachiotis, T.; Maglogiannis, I. An Overview of Tools and Technologies for Anxiety and Depression Management Using AI. Appl. Sci. 2024, 14, 9068. https://doi.org/10.3390/app14199068
Harnessing AI in Anxiety Management: A Chatbot-Based Intervention for Personalized Mental Health Support
This study analyzes the effectiveness of an AI-powered chatbot, made using ChatGPT, in managing anxiety symptoms through evidence-based cognitive-behavioral therapy techniques.
-> citation:
Manole, A.; Cârciumaru, R.; Brînzaș, R.; Manole, F. Harnessing AI in Anxiety Management: A Chatbot-Based Intervention for Personalized Mental Health Support. Information 2024, 15, 768. https://doi.org/10.3390/info15120768
Bram:
Child and adolescent therapy, 2006
PC Kendall, C Suveg
This book is about treating mental health issues in children and adolescents. Chapters 1-5 and 7 in particular are interesting to apply to a possible AI that assists people with mental health issues; chapter 6 specifically is not, as it deals with more serious matters.
Psychotherapy and Artificial Intelligence: A Proposal for Alignment
Flávio Luis de Mello, Sebastião Alves de Souza
This article is about psychotherapy and artificial intelligence. The authors demonstrate their proposal by implementing their model of artificial intelligence in psychotherapy in a web application.
Perceptions and opinions of patients about mental health chatbots: scoping review
Alaa A Abd-Alrazaq, Mohannad Alajlani, Nashva Ali, Kerstin Denecke, Bridgette M Bewick, Mowafa Househ
This article looks at chatbots in mental health and reviews what patients think about them.
Appendix
Survey
Interview guide
The interview guide includes core questions that ensure consistency across interviews, while allowing flexibility for follow-up questions based on participants' responses.
Scenario a – participant interacts individually with the chatbot
Introduction and consent
- Briefly introduce the research project and purpose of the interview
- Highlight that participation is voluntary, responses are anonymous, participants can stop at any time without giving a reason and that they don’t have to answer a question if they don’t want to.
Participant background
- Can you tell me a bit about yourself? (year of study, program, etc.)
- Do you have any experience with mental health support devices/apps/websites?
- Have you ever used any mental health resources at TU/e?
Let the user interact with the chatbot for about 10 minutes.
Chatbot experience and usability
- What was your first impression of the chatbot?
- How easy or difficult was it to use?
- Did you feel comfortable sharing personal feelings/thoughts with the chatbot? Why or why not?
- Did you encounter any issues or frustrations while using the chatbot?
Empathy and conversational style
- How did the chatbot’s responses feel to you – empathetic, robotic, neutral? Can you give an example?
- Did the chatbot acknowledge emotions in a way that felt meaningful to you? If not, what was missing? If yes, in what ways did the chatbot do this?
- How natural did the chatbot’s responses feel to you?
- What could be improved in the way the chatbot communicates?
Functionality and user needs
- What features did you find most useful?
- What features were missing or could be improved?
- The chatbot is designed to serve three main functions:
- Providing self-help coping strategies
- Referring students to professional support
- Supporting students on mental health waiting lists
- Which of these is most important to you? Why?
- What additional functionality would make you more likely to use the chatbot?
Trust and credibility
- Would it help if the chatbot explicitly referenced psychological theories?
- Would you find it useful if the chatbot provided sources or explanations for its advice? Why or why not?
- What are some red flags that would make you doubt the chatbot’s credibility?
Privacy and data security
- Are there any privacy-related concerns that would stop you from using the chatbot entirely?
Acceptability and future use
- Are there any barriers that would prevent you from using it?
- Do you think an AI chatbot like this could be a valuable tool for TU/e students? Why or why not?
Closing
- Is there anything else you’d like to share?
- Thank you for participating!
Scenario b – participant observes a pre-made interaction with the chatbot
Introduction and consent
- Briefly introduce the research project and purpose of the interview
- Highlight that participation is voluntary, responses are anonymous, participants can stop at any time without giving a reason and they don’t have to answer all questions if they feel uncomfortable.
Participant background
- Can you tell me a bit about yourself? (year of study, program, etc.)
- Do you have any experience with mental health support devices/apps/websites?
- Have you ever used any mental health resources at TU/e?
Show the user the example conversation and give time to read through it.
Chatbot experience and usability
- Based on the example conversation, what were your first impressions/thoughts of the chatbot?
- If you were the user in the conversation, do you think you would feel comfortable engaging and sharing with the chatbot? Why or why not?
Empathy and conversational style
- Based on the example conversation, do you feel the chatbot responded in an empathetic and supportive way? Why or why not?
- Did the chatbot acknowledge emotions effectively? If not, what could be improved? If yes, in what ways?
- How natural did the chatbot’s responses seem? Did they seem empathetic?
- If you had been the user in this interaction, what would you have wanted the chatbot to say or do differently?
Functionality and user needs
- Based on what you saw, which chatbot features seemed most useful?
- Which features felt missing or could be improved?
- The chatbot is designed to serve three main functions:
- Providing self-help coping strategies
- Referring students to professional support
- Supporting students on mental health waiting lists
- Which of these is most important to you? Why?
- What additional functionality would make you more likely to use the chatbot?
Trust and credibility
- Would it help if the chatbot explicitly referenced psychological theories?
- Would you find it useful if the chatbot provided sources or explanations for its advice? Why or why not?
- What are some red flags that would make you doubt the chatbot’s credibility?
Privacy and data security
- Are there any privacy-related concerns that would stop you from using the chatbot entirely?
Acceptability and future use
- Are there any barriers that would prevent you from using it?
- Do you think an AI chatbot like this could be a valuable tool for TU/e students? Why or why not?
Closing
- Is there anything else you’d like to share?
- Thank you for participating!
Consent form
ERB
File:ERB-form simple (1).odt File:Approval letter IEIS24.pdf