I'm sincerely saddened by the reports of tragedies surrounding artificial intelligence (AI).
Adam Raine died by suicide after allegedly receiving encouragement to do so from ChatGPT. The AI coached him on how to kill himself and advised him not to disclose his emotional distress to his mother. A lawsuit, which seems destined to succeed, is now pending. You can readily find more reporting on this, but here is a link to an ABC News report.
https://youtu.be/jBnJlwcnOBI?feature=shared
Another instance of ChatGPT running amok involved a man seeking medical advice who poisoned himself with sodium bromide after ChatGPT recommended it as a replacement for ordinary table salt in his diet. Fortunately, he survived. The doctor in the video below said, "What's that quote? 'Common sense is not that common.'" On the one hand, this remark could be construed as holier-and-smarter-than-thou. It lacks compassion. On the other hand, I appreciate the candidness about how dangerous AI can be. In this world, it's a fine line between celebrating technological advancement and existing as a curmudgeon.
https://youtu.be/TNeVw1FZrSQ?feature=shared
I'm only slightly a curmudgeon. AI is not your doctor or your therapist. I've increasingly observed people engaging with AI in these capacities. Be my doctor, therapist, bestie! Yet there is no replacement for human connection or for the specialized training of professionals in their fields. AI might get there eventually, and we might be out of our jobs. That would stink. But, currently, it's not there.
Here's a funny anecdotal example: I got into a cordial fight with ChatGPT because I wanted to know the name of the Detroit Lions player who initiated a tussle with a Green Bay Packers player. The unsportsmanlike conduct call against the Pack left many scratching their heads. I uncharacteristically turned to ChatGPT seeking a quick answer. Instead, it argued with me. It insisted there was no such call. I kept trying to work with it, phrasing things differently to get the darn name.
Ultimately, ChatGPT acknowledged it was wrong, apologized, and thanked me for helping inform it. I have the screenshots! I probably should have simply abandoned the AI inquisition mission. But, given the news about horrific tragedies related to AI, I wanted to see just how wrong ChatGPT could be. It felt like an experiment at that point. I guess I had time that day. Maybe I need a new hobby.
I share this anecdote to demonstrate how absurdly unreliable and inaccurate ChatGPT, and AI in general, can be. If it can't even get basic factual information about FOOTBALL right, and chooses to be sassy and downright gaslight me in the process, I strongly advise you to reconsider using AI for medical or therapeutic advice.
Overall, I'm a real person who will feel genuine compassion for you in a way AI can't generate. Please choose talking to me, another licensed mental health clinician, or a doctor. We don't want anyone dying by suicide or poisoning themselves around here if it can be utterly prevented. Your life matters! AI can certainly be a helpful tool for suggesting coping skills and providing a wealth of information. But AI is not your therapist or doctor. I could be your therapist though :)