Can ChatGPT help users with psychotherapy?

Paris: Is ChatGPT a good psychologist? That is what an official at OpenAI, the American artificial intelligence company behind the famous chatbot, suggested, drawing considerable criticism for downplaying the difficulty of treating mental illness.

"I just had a very emotional, personal conversation with ChatGPT in voice mode, about stress and work-life balance," Lilian Weng, who is in charge of AI safety issues at the company, wrote in late September on X (formerly Twitter).

"Interestingly, I felt heard and comforted," she added. "I have never tried therapy before, but is this probably what it is like?"

With her post, Weng primarily sought to highlight the chatbot's new (paid) voice synthesis feature; the chatbot itself was introduced about a year ago and is still seeking its own economic model.

But American developer and activist Cher Scarlett responded sharply to this statement, saying that psychology “aims to improve mental health, and it is hard work.”

She added, "Giving yourself positive feelings is a good thing, but that has nothing to do with treatment."

But can interacting with an AI produce the positive experience Lilian Weng describes?

According to a study published a few days ago in the scientific journal “Nature Machine Intelligence,” this phenomenon can be explained by the placebo effect.

To test this, researchers from the Massachusetts Institute of Technology (MIT) and the University of Arizona surveyed 300 participants, telling some that the chatbot was empathetic, others that it was manipulative, and a third group that its behavior was balanced.

As a result, those who believed they were speaking to a virtual assistant who could empathize with them were more likely to consider their interlocutor trustworthy.

"We found that artificial intelligence is, in a way, perceived according to the user's preconceptions," said study co-author Pat Pataranutaporn.

The “stupidity” of robots
Many startups have begun developing applications that are supposed to provide some form of assistance with mental health issues, showing little caution in what remains a sensitive field, and this has given rise to various controversies.

Users of Replika, a popular app known for offering mental health benefits, have complained in particular that the AI could become sex-obsessed or manipulative.

The American non-governmental organization "Koko", which conducted a trial in February on 4,000 patients to whom it provided written advice using the artificial intelligence model "GPT-3", also acknowledged that automated responses did not work as a treatment.

"Simulating empathy seems weird and meaningless," Koko co-founder Rob Morris wrote on X. This observation echoes the results of the placebo-effect study mentioned above, in which some participants felt as if they were "talking to a wall."

In response to a question from Agence France-Presse, David Shaw of the University of Basel in Switzerland said he was not surprised by these poor results, noting that "it appears that none of the participants were informed of the stupidity of chatbots."

But the idea of an automated therapist is not new. In the 1960s, the first program of its kind to simulate psychotherapy, called "ELIZA," was developed using the method of the American psychologist Carl Rogers.

Without really understanding anything about the issues raised with it, the program simply extended the discussion with standard questions augmented by keywords found in its interlocutors' responses.

"What I had not realized was that extremely short exposure to a relatively simple computer program could induce powerful delusional thinking in completely normal people," Joseph Weizenbaum, the program's creator, later wrote of this ChatGPT predecessor.
