I think this is a well-balanced examination of using AI as a substitute for therapy.
On one hand, we’ve all seen the horror stories of how badly it can go.
On the other hand, for many people who can’t access traditional therapy, AI is the only option available.
It’s not an easy answer.
https://www.independent.co.uk/news/world/americas/chatgpt-ai-therapy-suicide-psychosis-b2816822.html
What would be beneficial is an adult conversation about establishing guardrails around AI tools to mitigate worst-case scenarios, while also planning ways to ensure that more people don’t have to rely on them for therapy because they have no other options.
Alas, in the world we live in, the likely outcome is a ban on any therapeutic use, which I’m not sure how we’d even enforce, paired with a demand that tech geniuses remain unfettered by rules while creating the next big thing.
And by big thing, of course, I mean the thing that is going to make a select handful of people even wealthier than they already are.
So either AI with zero protections, or no AI at all: those seem to be our only options.
Oh, and there’s no discussion about making therapy more accessible in any of this, have I got that right?

