Is it safe to use AI in mental health?

If you have recently seen some posts about Leora, an AI-powered chatbot, and are wondering whether AI is safe to use in mental health… then allow me to demystify artificial intelligence.

We know that technology can help scale care and provide e-tools for a person to use between therapy sessions. We have already been doing this for years…

Then why evolve e-CBT into a chat format? Because chatting comes much more naturally to us. It is so inherent in our behaviour that putting CBT into chat creates a frictionless experience for the user. After all, we are programmed to communicate, and chat therapy leverages that inherent programming.

The other benefit of writing down our thoughts and feelings through an engaging chat conversation is that it can be rather self-soothing… that is why journaling is so effective in therapy. 📘

Now let’s talk about AI. There are various levels of complexity in AI, and mental health is a sensitive area because we are working with incredibly vulnerable individuals. Protecting users has to be the number one priority when using AI in our chatbot engine.

One of the ways we are protecting our users while building Leora is that our chat is built on a task-oriented dialogue system. It is not a free-flowing chit-chat bot, which would open up risk because it would have no filters on what it says and how it responds. Instead, it is programmed and trained with clear dialogues, using language that is appropriate for the goal the user wants to achieve.
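To make that distinction concrete, here is a minimal sketch of what a single task-oriented turn can look like in code. The intent names, keywords, and wording below are purely illustrative assumptions on my part, not Leora’s actual dialogue content; the key point is that every user message is matched against a small, predefined set of task intents, and anything outside that set is routed to a safe fallback rather than answered with freely generated text.

```python
# Minimal sketch of a task-oriented dialogue turn (illustrative only).
# Intent names, keywords, and responses are hypothetical examples.

TASK_INTENTS = {
    "start_thought_record": ["thought record", "write down my thoughts"],
    "breathing_exercise":   ["breathing", "calm down", "panic"],
    "mood_check_in":        ["mood", "check in", "how i feel"],
}

SCRIPTED_RESPONSES = {
    "start_thought_record": "Let's capture that thought. What situation triggered it?",
    "breathing_exercise":   "Let's try a slow breathing exercise together.",
    "mood_check_in":        "On a scale of 1 to 10, how are you feeling right now?",
}

SAFE_FALLBACK = (
    "I'm not able to help with that here. "
    "Would you like to do a mood check-in, or see support options?"
)

def classify_intent(message: str) -> str | None:
    """Map a user message onto one of the predefined task intents, if any."""
    text = message.lower()
    for intent, keywords in TASK_INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return None  # nothing outside the task list is ever improvised

def respond(message: str) -> str:
    """Return a scripted response; never generate free-form text."""
    intent = classify_intent(message)
    return SCRIPTED_RESPONSES.get(intent, SAFE_FALLBACK)

print(respond("I can't stop these thoughts, can I write them down?"))
print(respond("Tell me a joke about my ex"))  # falls back safely
```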

When we were reviewing the AI frameworks we could build on, we intentionally did not choose GPT-3, a generative pre-trained transformer model, because of the level of risk that comes with it.

GPT was trained on largely unfiltered content from the internet and hence comes with a host of biases and inaccuracies. It produces much more human-like conversational output because it has been trained on huge data sets, but it equally comes with a higher level of risk when engaging on sensitive topics.

Using a task-oriented dialogue system combined with rules-based button responses provides a more structured and controlled conversation. It allows for a deterministic result, which is what we want when working within the health domain. This is the backbone of Leora, and our approach of using smaller collections of data that are more specific and current means the outcomes we achieve are safer for our users.
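For readers who like to see how “deterministic” plays out in practice, here is a rough, hypothetical sketch of a button-driven flow. The states, buttons, and prompts are my own illustrative assumptions, not Leora’s actual content: each state offers a fixed set of buttons, and the same button press always leads to the same next state, so there is no free-form generation anywhere in the loop.

```python
# Hypothetical sketch of a rules-based, button-driven dialogue flow.
# Each state shows fixed buttons; the same choice always leads to the
# same next state, so the conversation path is fully deterministic.

DIALOGUE_FLOW = {
    "welcome": {
        "prompt": "Hi, what would you like to do today?",
        "buttons": {
            "Check in on my mood": "mood_check",
            "Try a calming exercise": "calming_exercise",
        },
    },
    "mood_check": {
        "prompt": "How would you rate your mood right now?",
        "buttons": {"Pretty low": "low_mood_support", "Okay": "wrap_up", "Good": "wrap_up"},
    },
    "calming_exercise": {
        "prompt": "Let's try box breathing: in for 4, hold for 4, out for 4.",
        "buttons": {"Done": "wrap_up"},
    },
    "low_mood_support": {
        "prompt": "Thanks for sharing that. Would you like some support options?",
        "buttons": {"Yes, show me": "wrap_up", "Not right now": "wrap_up"},
    },
    "wrap_up": {
        "prompt": "Thanks for checking in today. You can come back any time.",
        "buttons": {},
    },
}

def run_turn(state: str, choice: str) -> str:
    """Advance to the next state for a given button press (deterministic)."""
    buttons = DIALOGUE_FLOW[state]["buttons"]
    if choice not in buttons:
        return state  # unknown input never jumps the flow off-script
    return buttons[choice]

# Example: the same sequence of presses always yields the same path.
state = "welcome"
for press in ["Check in on my mood", "Pretty low", "Yes, show me"]:
    state = run_turn(state, press)
    print(DIALOGUE_FLOW[state]["prompt"])
```

The design choice worth noticing in this sketch is that unknown input simply keeps the user in the current state, so the conversation can never wander off the scripted path.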

There is massive potential that will continue to be unlocked in the technologies being developed today to serve future generations. AI is an emerging area of research, and like any technology positioned to support an industry, we must continue to monitor the risks versus the benefits. That is an ongoing commitment for the team at Leora.

Helen Ward

Clinical Counsellor (PACFA Registered), experienced educator and College Counsellor at Marcellin Randwick, Well-being Partner with Catholic Schools Broken Bay.

2y

I like the do-no-harm considerations and built-in safety features of this platform. Taking into account the long waiting periods for appointments with practitioners, this is an excellent solution to put people in contact with trained mental health professionals when the need is significant.

Shuni Francis

Senior Systems Engineer

2y

I’d like to have worked on or with Leora, she sounds like a trustworthy gal just like Velma 😉😊
