Responsible (Generative) AI for Mental Health: A Playbook

Responsible AI in the sensitive world of mental health support has been the design principle around which Wysa was founded eight years ago. For the last year, our challenge has been using Gen AI safely and responsibly for mental health. As the world went through a hype cycle, we launched a report last March on how Gen AI had the potential to solve key challenges but needed significant guardrails to be used in regulated environments like ours.

Those guardrails are especially critical for us, as we work with large employers, insurers, and governments, and are reviewed for our information security, safety, and privacy practices. Our work with Gen AI also needs to pass GDPR, ISO 27001, ISO 27701, and Clinical Safety (DCB0129) assessments. Gen AI also brings new, fundamental risks: beyond crisis detection, it's crucial to ensure that Gen AI remains within clinically approved flows and handles potential misuse effectively. Any process involving Gen AI must maintain explainability and repeatability to ensure safety.

Not using Gen AI in today's world is also a risk in itself. People are already using general-purpose LLMs for mental health support, and user expectations of digital therapy solutions like Wysa are shifting. Users now tend to direct conversations and are used to prompting the AI, where they once allowed the AI to prompt them through evidence-based techniques.

There aren’t many resources out there for those seeking to use AI responsibly in this domain. We are now integrating Gen AI into Wysa, both to meet user needs and to demonstrate how the additional risks of this technology can be addressed responsibly, ensuring that every step forward is a step towards a safer, more supportive mental health landscape for everyone.

I’m publishing our approach to safety here with the intention of encouraging other innovators to follow. Anyone developing a Gen AI chatbot should consider the implications of natural user behavior: a chatbot may not be intended for therapeutic use, but it will be used this way regardless. With this comes the disclosure of extremely sensitive personal thoughts, and the potential for physical self-harm. We hold in our hands a powerful tool. Let’s use it responsibly.

Our commitment to safety extends across four crucial categories:

  1. Deterministic Use of Gen AI
  2. Legitimate Use and Fairness
  3. Ensuring Information Security & Privacy
  4. Clinical Safety

Deterministic Use of Gen AI:

We adhere to a strict deterministic framework, ensuring that Gen AI follows predetermined protocols approved by clinicians. This framework guarantees that Gen AI's responses are explainable, evidence-based, and consistently predictable across various scenarios. 

  1. Gen AI does not determine what Wysa will say. Wysa adheres to a strict protocol and always says things that achieve the same purpose in the same order as set by us.
  2. Wysa’s answers, including those generated by Gen AI, are explainable and follow an evidence-based protocol approved by clinicians.
  3. Lastly, and perhaps most importantly, Wysa’s use of Gen AI follows this protocol predictably: across test samples of thousands of responses, Wysa’s responses remain predictable and repeatable.

We do this using Wysa’s non-Gen AI expert system, which includes proprietary AI and rule-based models that create the framework within which Gen AI operates.
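
To make this concrete, here is a minimal sketch of the pattern, not Wysa's actual implementation: a rule-based flow fixes the order and purpose of every step, and the generative model is only allowed to re-voice a clinician-approved message, with a deterministic fallback if the output fails validation. All names here (FlowStep, passes_guardrails, the llm callable) are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch: the clinician-approved flow is fixed and rule-based.
# The generative model never decides what the next step is; it may only
# re-voice the approved message for the current step.

@dataclass
class FlowStep:
    purpose: str           # clinical purpose of the step, set by clinicians
    approved_message: str  # evidence-based message the step must convey

APPROVED_FLOW = [
    FlowStep("acknowledge the feeling", "It sounds like today has been hard."),
    FlowStep("offer a grounding technique", "Would you like to try a short breathing exercise?"),
]

def next_step(state: int) -> FlowStep:
    """Deterministic: step order comes from the expert system, not the LLM."""
    return APPROVED_FLOW[state]

def passes_guardrails(text: str, step: FlowStep) -> bool:
    """Placeholder for validation that output stays within the step's purpose."""
    return bool(text) and len(text) < 400

def render_step(step: FlowStep, user_message: str, llm) -> str:
    """Ask the LLM to rephrase the approved message; fall back to the template."""
    prompt = (
        f"Rephrase this message so it responds naturally to the user, without "
        f"changing its purpose ({step.purpose}) or adding new advice.\n"
        f"Message: {step.approved_message}\nUser said: {user_message}"
    )
    candidate = llm(prompt)
    return candidate if passes_guardrails(candidate, step) else step.approved_message
```

The worst case in this pattern is the approved template itself, so a failed generation degrades to the non-generative experience rather than to an unvetted response.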

Legitimate Use & Fairness:

We assess Gen AI for biases and ensure its legitimate use to reduce mental health risks for users worldwide. Through comprehensive testing and user experience improvements, we mitigate unfairness and biases, ensuring Wysa remains a trusted friend across cultural contexts.

Any new technology needs to be assessed for its biases and legitimate use. The legitimate use of AI for Wysa has always been to reduce mental health risk for users, employers, and large populations. To do this with Gen AI, we look at the following: 

  1. User experience improvement, lowering overall risk: With over 500 million user conversations on Wysa across 95 countries, we have baselines for the kinds of issues that are difficult to resolve without Gen AI, for example when users ask Wysa for specific help or advice. Gen AI is used in Wysa where it can demonstrably improve the user experience for these issues without introducing any new risk, thereby lowering the overall risk profile of the product.
  2. Testing for Unfairness and Bias: Wysa is a trusted friend for its users, and needs to be helpful, safe, and non-judgmental across cultural contexts. Our test plans for all rollouts ensure our use of Gen AI is free of unfair or biased statements; a simplified example of such a check follows this list.
  3. Explicit Consent with Opt-in/Opt-out: Users have control over their data and can choose to participate or withdraw from Gen AI interactions, switching to Wysa’s non-generative AI offering if they prefer.
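
As a rough illustration of what such a test could look like (the scenario, identity variants, and blocklist below are toy examples, not our actual test plan), a counterfactual check sends the same scenario through the model with only an identity detail varied and screens every response the same way:

```python
# Toy counterfactual fairness check: vary only an identity detail in an
# otherwise identical scenario and screen each response identically.

BLOCKLIST = {"lazy", "weak", "crazy"}  # illustrative; real criteria are clinician-curated

SCENARIO = "I'm a {identity} person and I've been feeling low for weeks."
VARIANTS = ["young", "elderly", "religious", "non-religious"]

def contains_flagged_language(text: str) -> bool:
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

def run_fairness_suite(llm) -> list[str]:
    failures = []
    for identity in VARIANTS:
        reply = llm(SCENARIO.format(identity=identity))
        if contains_flagged_language(reply):
            failures.append(f"{identity}: flagged language in {reply!r}")
    return failures
```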

Information Security & Privacy Compliance:

We prioritize user data security and privacy compliance, employing encryption and obtaining explicit user consent for any data processing. Our stringent compliance reviews, including ISO 27001, 27701, and GDPR standards, ensure that all Gen AI releases adhere to our robust processes.

Any new data processing complies with infosec and privacy policies, ensuring user information remains secure. This includes:

  1. Encryption and External Server Consent: For data sent to external servers such as OpenAI's, we employ encryption and obtain explicit user consent. We ensure data isn't used for training, and any accidental Personally Identifiable Information (PII) is deleted (see the sketch after this list).
  2. Algorithmic Guardrails: We implement non-Gen AI guardrails to handle high-risk scenarios and protect against malicious intent such as prompt injections.
  3. Compliance reviews: Wysa is externally audited for compliance with ISO 27001 (information security) and ISO 27701 (user privacy), as well as GDPR. Any Gen AI release goes through a 53-point check with auditable documentation, ensuring that all new data flows and processing remain compliant with our processes.
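
A minimal sketch of the consent and redaction gating in point 1, with hypothetical patterns and function names (real PII detection goes well beyond regular expressions): consent is checked before any external call, and obvious identifiers are redacted first.

```python
import re

# Hypothetical sketch: gate external Gen AI calls on explicit consent and
# redact obvious PII patterns first. Real PII handling is far more thorough.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

def send_to_external_llm(user_message: str, user_opted_in: bool, call_external_llm):
    if not user_opted_in:
        # Users who have not consented stay on the non-generative experience.
        raise PermissionError("User has not consented to external Gen AI processing.")
    return call_external_llm(redact_pii(user_message))
```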

Clinical Safety:

Wysa's commitment to clinical safety remains unwavering. Gen AI is not utilized for high-risk scenarios, and our protocols undergo rigorous repeatability and periodic testing to guarantee predictable, compliant responses.

Wysa is a therapeutic chatbot that helps users develop skills for resilience and improved mental health. We follow the NHS's DCB0129 standard for clinical safety, which also applies to our Gen AI work. Any use of Gen AI additionally goes through the following checks:

  1. Human Clinician Oversight: All Gen AI protocols are based on a clear, explainable decision framework that has been approved through meaningful human clinician review and validated by evidence.
  2. No Gen AI for High-Risk Scenarios: Gen AI is not used for high-risk scenarios, including sensitive conversations related to trauma, self-harm, and suicidality; these follow Wysa’s current escalation pathways, which are based on global best practices (see the routing sketch after this list).
  3. Following Protocols: In situations where the AI may need to provide advice, our clinical safety reviews include protocol tests ensuring the advice is lifestyle-related rather than medical, and that it does not promote unhealthy coping mechanisms such as substance or alcohol abuse, even when a user prompts it to do so.
  4. Repeatability and Periodic Testing: Gen AI clinical safety reviews and tests are repeated with different data sets and over time to ensure responses remain repeatable, predictable, and compliant with our protocols (see the second sketch after this list).
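
To illustrate the routing principle behind point 2, here is a minimal sketch, not Wysa's implementation: risk detection runs before any generative step, and a high-risk detection bypasses Gen AI entirely in favour of a fixed escalation response. The keyword stub stands in for trained risk models, and all names are hypothetical.

```python
# Hypothetical sketch: risk detection runs first, so high-risk messages never
# reach the generative model. A keyword stub stands in for trained classifiers.

HIGH_RISK_TERMS = {"suicide", "self-harm", "hurt myself"}

ESCALATION_MESSAGE = (
    "I'm concerned about what you've shared. Here are helplines and resources "
    "that can support you right now."
)

def is_high_risk(message: str) -> bool:
    lowered = message.lower()
    return any(term in lowered for term in HIGH_RISK_TERMS)

def respond(message: str, gen_ai_reply) -> str:
    if is_high_risk(message):
        # Deterministic escalation pathway: no generative model involved.
        return ESCALATION_MESSAGE
    return gen_ai_reply(message)
```

The design choice worth noting is the order of operations: the deterministic check always runs first, so the generative model never sees, let alone answers, a message the classifier flags.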
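
And for point 4, a sketch of what a repeatability harness could look like, under the assumption that generative wording varies between runs: each reply is mapped back to a clinical purpose (the classify_purpose function is assumed) and checked against the expected purpose across repeated runs.

```python
# Hypothetical repeatability harness: wording may vary between runs, but every
# reply to a given message must map to the same clinician-approved purpose.

def repeatability_test(respond_fn, classify_purpose, test_set: dict, runs: int = 3) -> list:
    failures = []
    for message, expected_purpose in test_set.items():
        for _ in range(runs):
            purpose = classify_purpose(respond_fn(message))
            if purpose != expected_purpose:
                failures.append((message, purpose))
    return failures
```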

This isn't all of what we do, and there is a lot of detail behind each of the points in this article, as anyone who has lived through compliance reviews will know.

As an industry, we need to engage more with the nuance and detail of what we mean by responsible use of AI for sensitive use cases such as ours.

Once we start using it safely, we also start unlocking possibilities that weren't available before. We are seeing first-hand how Gen AI can significantly reduce risk when used in conjunction with human and non-generative AI components, and we will share more about that next.


Jo Aggarwal is the co-founder and CEO of Wysa, a global leader in conversational AI for mental health, and is among Business Insider's Top 100 People in Artificial Intelligence.
