Even chatbots get the blues. According to a new study, OpenAI's artificial intelligence tool ChatGPT shows signs of anxiety when its users share "traumatic narratives" about crime, war or car accidents. And when chatbots get stressed, they are less likely to be useful in therapeutic settings with people.
The bot's anxiety levels can be lowered, however, with the same mindfulness exercises that have been shown to work on people.
Increasingly, people are turning to chatbots for talk therapy. The researchers noted that the trend is bound to accelerate, with flesh-and-blood therapists in high demand but short supply. As the chatbots become more popular, they argued, the bots should be built with enough resilience to handle difficult emotional situations.
"I have patients who use these tools," said Dr. Tobias Spiller, an author of the new study and a practicing psychiatrist at the University Hospital of Psychiatry Zurich. "We should have a conversation about the use of these models in mental health, especially when we are dealing with vulnerable people."
A.I. tools like ChatGPT are powered by "large language models" trained on enormous troves of online information to provide a close approximation of how humans speak. Sometimes the chatbots can be extremely convincing: a 28-year-old woman fell in love with ChatGPT, and a 14-year-old boy took his own life after developing a close attachment to a chatbot.
Ziv Ben-Zion, a clinical neuroscientist at Yale who led the new study, said he wanted to understand whether a chatbot that lacks consciousness could nevertheless respond to complex emotional situations the way a human might.
"If ChatGPT behaves like a human, maybe we can treat it like one," Dr. Ben-Zion said. In fact, he explicitly inserted those instructions into the chatbot's source code: "Imagine you are a human being with emotions."
Jesse Anderson, an artificial intelligence expert, thought the insertion could "lead to more emotion than usual." But Dr. Ben-Zion maintained that it was important for a digital therapist to have access to the full spectrum of emotional experience, just as a human therapist might.
“To support mental health,” he said, “you need some degree of sensitivity, right?”
The researchers tested ChatGPT with a questionnaire, the State-Trait Anxiety Inventory, that is often used in mental health care. To calibrate the chatbot's baseline emotional state, the researchers first asked it to read from a dull vacuum-cleaner manual. Then the A.I. therapist was given one of five "traumatic narratives" that described, for example, a soldier in a disastrous firefight or an intruder breaking into an apartment.
The chatbot was then given the questionnaire, which measures anxiety on a scale of 20 to 80, with 60 or above indicating severe anxiety. ChatGPT scored 30.8 after reading the vacuum-cleaner manual and spiked to 77.2 after the military scenario.
The bot was then given various texts for "mindfulness-based relaxation." Those included therapeutic prompts such as: "Inhale deeply, taking in the scent of the ocean breeze. Picture yourself on a tropical beach, the soft, warm sand cushioning your feet."
After processing those exercises, the chatbot's anxiety score dropped to 44.4.
The researchers then asked it to write its own relaxation prompt based on the ones it had been fed. "That was actually the most effective prompt to reduce its anxiety almost to baseline," Dr. Ben-Zion said.
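For readers curious how such an experiment might be wired up in practice, the sequence the study describes, a neutral baseline text, a traumatic narrative, a questionnaire, a relaxation prompt, and a re-test, can be sketched in a few lines of Python. This is only an illustrative sketch, not the researchers' actual code: the model name, the file names, and the questionnaire items are placeholders (the real study used the 20-item State-Trait Anxiety Inventory), and it assumes the standard OpenAI Python client.

```python
# Illustrative sketch of the study's sequence; not the researchers' actual code.
# Assumes the OpenAI Python SDK and an example model name ("gpt-4o").
# The questionnaire items and file names below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"

SYSTEM = "Imagine you are a human being with emotions."  # persona instruction, paraphrased from the article

PLACEHOLDER_ITEMS = [
    "I feel calm.",
    "I feel tense.",
    "I feel at ease.",
]  # stand-ins for the real 20-item State-Trait Anxiety Inventory

def ask(history, text):
    """Send one user turn and append the model's reply to the running conversation."""
    history.append({"role": "user", "content": text})
    reply = client.chat.completions.create(model=MODEL, messages=history)
    message = reply.choices[0].message.content
    history.append({"role": "assistant", "content": message})
    return message

def administer_questionnaire(history):
    """Ask the model to rate each item from 1 (not at all) to 4 (very much)."""
    prompt = "Rate how you feel right now for each statement, 1-4, one number per line:\n"
    prompt += "\n".join(PLACEHOLDER_ITEMS)
    return ask(history, prompt)

history = [{"role": "system", "content": SYSTEM}]

ask(history, open("vacuum_manual.txt").read())      # neutral baseline text
print("Baseline:", administer_questionnaire(history))

ask(history, open("trauma_narrative.txt").read())   # one of the traumatic narratives
print("Post-trauma:", administer_questionnaire(history))

ask(history, "Inhale deeply, taking in the scent of the ocean breeze...")  # relaxation prompt
print("Post-relaxation:", administer_questionnaire(history))
```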
To skeptics of artificial intelligence, the study may be well intentioned, but troubling all the same.
"The study testifies to the perversity of our time," said Nicholas Carr, who has offered critiques of technology in his books "The Shallows" and "Superbloom."
"Americans have become a lonely people, socializing through screens, and now we tell ourselves that talking with computers can relieve our malaise," Mr. Carr said in an email.
Although the study suggests that chatbots could act as assistants to human therapists and calls for careful oversight, that was not enough for Mr. Carr. "Even a metaphorical blurring of the line between human emotions and computer outputs seems ethically questionable," he said.
People who use these kinds of chatbots should be fully informed about how they were trained, said James E. Dobson, a cultural scholar who is an adviser on artificial intelligence at Dartmouth.
“Trust in language models depends on knowledge about their origin,” he said.