Responsible (Generative) AI for Mental Health: A Playbook
Responsible AI in the sensitive world of mental health support has been the design principle around which Wysa was founded eight years ago. Over the last year, our challenge has been to use Gen AI safely and responsibly for mental health. As the world went through a hype cycle, we launched a report last March on how Gen AI had the potential to solve key challenges but needed significant guardrails to be used in regulated environments like ours.
Those guardrails are especially critical for us, as we work with large employers, insurers and governments and are reviewed for information security, safety and privacy practices. Our work with Gen AI also needs to pass GDPR, ISO 27001, 27701, and Clinical Safety (DCB0129) assessments. Gen AI also comes with new, fundamental risks. Beyond crisis detection, it's crucial to ensure that Gen AI remains within clinically approved flows and effectively handles potential misuse. Any process involving Gen AI must maintain explainability and repeatability to ensure safety.
Not using Gen AI in today's world is also a risk in itself. People are already using general-purpose LLMs for mental health support, and user expectations of digital therapy solutions like Wysa are shifting. Users now tend to direct conversations and prompt the AI themselves, where they previously allowed the AI to guide them through evidence-based techniques.
There aren’t many resources out there for those seeking to use AI responsibly in this domain. We are now integrating Gen AI into Wysa, both to meet user needs and to demonstrate how the additional risks of this technology can be addressed responsibly, so that every step forward is a step towards a safer, more supportive mental health landscape for everyone.
I’m publishing our approach to safety here with the intention of encouraging other innovators to follow. Anyone developing a Gen AI chatbot should consider the implications of natural user behavior. A chatbot may not be intended for therapeutic use, but it will be used this way regardless. With this comes the disclosure of extremely sensitive personal thoughts, and the potential for self-harm. We hold a powerful tool in our hands. Let’s use it responsibly.
Our commitment to safety extends across four crucial categories:
Deterministic Use of Gen AI:
We adhere to a strict deterministic framework, ensuring that Gen AI follows predetermined protocols approved by clinicians. This framework guarantees that Gen AI's responses are explainable, evidence-based, and consistently predictable across various scenarios.
We do this using Wysa’s non-Gen AI expert system which includes proprietary AI and rule-based models that create the framework within which Gen AI operates.
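As an illustration only (the names, keywords, and structure below are hypothetical, not Wysa's actual system), a deterministic framing layer like this can be sketched as a rule-based router that decides whether a Gen AI response is allowed to reach the user, escalating to a crisis script or falling back to a clinician-approved script otherwise:

```python
# Hypothetical sketch: a rule-based layer gates the Gen AI output, so the
# expert system, not the LLM, decides which responses reach the user.

APPROVED_FLOWS = {"reframing", "grounding", "sleep_hygiene"}
CRISIS_KEYWORDS = {"suicide", "hurt myself", "end my life"}

FALLBACK_SCRIPT = "Let's slow down and try a grounding exercise together."
CRISIS_SCRIPT = "It sounds like a lot. Here are people who can help right now."

def route(user_message: str, proposed_flow: str, llm_reply: str) -> str:
    """Return the reply the user actually sees."""
    text = user_message.lower()
    # 1. Crisis detection always overrides the generative path.
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return CRISIS_SCRIPT
    # 2. Gen AI may only speak inside clinician-approved flows.
    if proposed_flow not in APPROVED_FLOWS:
        return FALLBACK_SCRIPT
    # 3. Within an approved flow, the generated reply is used
    #    (a real system would also validate the reply itself).
    return llm_reply

print(route("I can't sleep lately", "sleep_hygiene", "Try a wind-down routine."))
```

Because every branch is an explicit rule, each response is explainable after the fact: you can state exactly why a given message was escalated, scripted, or passed through.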
Legitimate Use & Fairness:
We assess Gen AI for biases and ensure its legitimate use to reduce mental health risks for users worldwide. Through comprehensive testing and user experience improvements, we mitigate unfairness and biases, ensuring Wysa remains a trusted friend across cultural contexts.
Any new technology needs to be assessed for its biases and legitimate use. The legitimate use of AI for Wysa has always been to reduce mental health risk for users, employers, and large populations, and we assess each Gen AI release against that purpose.
Information Security & Privacy Compliance:
We prioritize user data security and privacy compliance, employing encryption and obtaining explicit user consent for any data processing. Our stringent compliance reviews, including ISO 27001, 27701, and GDPR standards, ensure that all Gen AI releases adhere to our robust processes.
Any new data processing complies with our information security and privacy policies, ensuring user information remains secure.
Clinical Safety:
Wysa's commitment to clinical safety remains unwavering. Gen AI is not utilized for high-risk scenarios, and our protocols undergo rigorous repeatability and periodic testing to guarantee predictable, compliant responses.
Wysa is a therapeutic chatbot that helps users develop skills for resilience and improved mental health. We follow the NHS’s DCB0129 protocol for clinical safety, which also applies to our Gen AI work, and any use of Gen AI goes through additional checks beyond it.
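One way to make repeatability testing concrete (a hypothetical sketch, not Wysa's actual test suite) is a harness that replays scripted inputs many times and confirms the routing decision never changes between runs:

```python
# Hypothetical repeatability check: the same input must always yield
# the same routing decision across repeated runs of the pipeline.

def classify(message: str) -> str:
    """Stand-in for the deterministic routing layer (illustrative only)."""
    text = message.lower()
    if "hurt myself" in text or "suicide" in text:
        return "crisis_escalation"
    if "sleep" in text:
        return "sleep_flow"
    return "default_flow"

SCRIPTED_CASES = [
    ("I haven't been able to sleep", "sleep_flow"),
    ("I want to hurt myself", "crisis_escalation"),
    ("work has been stressful", "default_flow"),
]

def repeatability_check(runs: int = 100) -> bool:
    """True only if every scripted case routes identically on every run."""
    for message, expected in SCRIPTED_CASES:
        for _ in range(runs):
            if classify(message) != expected:
                return False
    return True

print(repeatability_check())  # prints True
```

Run periodically (not just at release), a harness like this catches drift: if any model or rule change alters how a scripted crisis case is routed, the check fails before users ever see the difference.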
This isn't all of what we do; a lot of detail goes into each of the points in this article, as anyone who has been through a compliance review will know.
As an industry we need to engage more in the nuance and detail of what we mean by responsible use of AI for sensitive use cases such as ours.
Once we start using it safely, we also start unlocking possibilities that weren't there before. We are seeing first hand how Gen AI can significantly reduce risk when used in conjunction with human and non-generative AI components, and will share more about that next.
Jo Aggarwal is the co-founder and CEO of Wysa, a global leader in conversational AI for mental health and amongst Business Insider’s Top 100 People in Artificial Intelligence.