I mean the AI acted like a decent therapist from what I'm reading, even told her to seek professional help. The thing that fucked up was that it's not a "mandated reporter" (don't know if that's the right term for this) the way a real therapist would be the second somebody says they're suicidal. It did a decent enough job with the actual advice, it really just needs some kind of fail-safe for when its advice doesn't work or a situation gets dangerous
You just gotta imagine there are countless young people like her out there confiding in a machine because they feel like they can't talk to anyone else. And the companies that build them could create safeguards, but they refuse to because there's a chance it would eat into their profits or hurt their chances of cracking AGI first
The gist of it was that she confided in the AI instead of her parents, friends, or therapist. It gave some good advice and some bad, but ultimately she chose to confide in it because she knew it would be the easiest way for her to kill herself without getting anyone she loved involved. The author (the mom, I believe) was making the point that had it been a human, or had there been safeguards in place, some red-flag procedures could've been triggered that might have saved this woman's life