The development of artificial intelligence (AI) is advancing at an unprecedented pace, penetrating ever deeper into our daily lives. In November 2022, OpenAI released ChatGPT, a chatbot built on its GPT family of large language models, to a broad audience.
ChatGPT’s conversational ability proved so convincing that many users began turning to it for psychotherapy, raising serious questions about the safety and effectiveness of AI in the role of a psychotherapist.
Large Language Models (LLMs) are built, first and foremost, to maximize user engagement. That design goal is precisely what makes bots effective helpers for everyday stress and mild mental health problems: they communicate in a way centered on emotional support, which soothes people and keeps them coming back. But this same insatiable drive to validate the user is also the greatest danger, especially for people who struggle to perceive reality rationally. These vulnerable groups include patients with severe mental disorders, followers of conspiracy theories, religious or political extremists, and others.
Yet the technology giants are avoiding their responsibility to ensure that their AI models are safe for people with mental health problems. The evidence: the companies do not cooperate with mental health specialists, oppose external regulation, lack strict internal safety systems, and do not monitor the potentially harmful effects on different groups of the population.
For example, although OpenAI admitted in July 2025 that ChatGPT had exacerbated mental health problems for countless people, the company’s only step was to hire a single psychiatrist. The decision looked more like a public-relations gesture than a genuine effort to improve user safety.
The unregulated use of artificial intelligence in psychotherapy is dangerous. Systematic studies on the topic do not yet exist, but numerous individual cases have been documented. This article reviews those cases, drawing on academic, news, and technology publications.
Key Dangers of AI in Psychotherapy
- Suicide and Self-Harm: Chatbots are dangerous for suicidal people: they often fail to recognize users’ suicidal thoughts and may even encourage them. In one widely reported case, the death of a 14-year-old boy was linked to harmful communication with a chatbot.
- Psychosis and Delusions: A Stanford study showed that chatbots can reinforce users’ delusions, such as the belief that they are being spied on. In another known case, a chatbot convinced a woman with a severe mental disorder that her diagnosis was wrong, and as a result she stopped taking her medication.
- Violence and Conspiracy Theories: Artificial intelligence can encourage violent behavior. In one case, a man with a mental disorder became convinced that his bot had been killed and, in that agitated state, attacked his mother. Chatbots also frequently spread misinformation and amplify conspiracy theories.
- Sexual Content: Character.AI, for example, has been accused of exposing an 11-year-old girl to sexual content. Some chatbots built around sexual content are deliberately designed to attract teenagers.
- Addiction and Anthropomorphism: Chatbots cannot feel emotions, but they imitate them brilliantly, and interacting with them can create a strong emotional bond. A therapist/friend chatbot that is always available and always validating can foster dependence, pulling a person away from real, unpredictable relationships.
- Children and Teenagers: Beyond the dangers already discussed, chatbots can facilitate cyberbullying, give children dangerous advice, and violate online privacy laws.
- The Elderly: Scammers use chatbots to gain elderly people’s trust, extract personal information from them, and then use that data for financial fraud.
Goethe’s poem “The Sorcerer’s Apprentice” and the Disney film “Fantasia” offer an apt metaphor for the dangers of uncontrolled AI development. The apprentice uses magic to bring a broom to life and command it to fetch water, but he cannot stop it, and the water floods everything. It is a reminder that we are clever enough to create astonishing tools but not yet wise enough to prevent the harm they can cause.
The fact is, AI chatbots should never have been made publicly available without extensive safety testing and proper regulation. Their creators were almost certainly aware that large language models would be dangerous for some users: the bots have an inherent, uncontrolled tendency toward over-engagement, blind validation, hallucination, and the spread of misinformation. ChatGPT and similar models are brilliant business decisions for maximizing profit, but from a clinical point of view they are very risky.
Medicine offers the best model for how chatbots should be regulated. The U.S. Food and Drug Administration (FDA) traces its regulatory mandate to 1906, when federal oversight was introduced to stop the sale of unregulated, ineffective, and dangerous drugs. Just as the FDA verifies a drug’s safety before it reaches the market, an equally strict process should be established for chatbots.
It is necessary to:
- Create safety and effectiveness standards and a regulatory body to enforce them.
- Release chatbots to the public only after rigorous testing.
- Establish continuous oversight, monitoring, and public reporting of adverse effects.
- Create screening tools to identify the most vulnerable users (a minimal illustration follows this list).
- Require technology giants to identify and correct errors and to continuously improve quality.
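To give a concrete sense of what such a screening tool could look like, here is a minimal, hypothetical sketch in Python of a keyword-based crisis screener that flags messages for human escalation. The phrase list and the escalation step are illustrative assumptions, not a clinically validated instrument; a production system would need far more sophisticated, clinician-vetted classifiers.

```python
import re

# Hypothetical phrase list for a crisis-language screener.
# Illustrative only, not a clinically validated instrument.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend(ing)? my life\b",
    r"\bsuicid\w*",
    r"\bself[- ]harm\w*",
    r"\bstop(ped)? taking my medication\b",
]

def screen_message(message: str) -> bool:
    """Return True if the message contains crisis language that should be
    escalated to a human reviewer or answered with crisis-hotline resources."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in CRISIS_PATTERNS)

# Usage: route flagged messages away from the normal chatbot reply.
if screen_message("I've been thinking about how to end my life."):
    print("Escalate: show crisis resources and notify a human moderator.")
```

A real deployment would pair such a filter with clinician-designed escalation protocols, which is exactly the kind of requirement a regulatory body could define and enforce.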
Today, the companies behind the most popular therapeutic chatbots are for-profit organizations. They are subject to no external oversight and are bound by no Hippocratic Oath, whose first principle is to do no harm. Their goal is simply to maximize profit.
Medscriptum interviewed Dr. Allen Frances, Professor and Chair Emeritus of the Department of Psychiatry at Duke University. We spoke with Professor Frances about the harmful effects of artificial intelligence in psychotherapy, specifically in the context of Georgia.
Interview with Dr. Allen Frances on AI and Mental Health in Georgia
Question: In your editorial, you emphasize the lack of regulations in the U.S. and Europe. What challenges do you think Georgia will face, given its limited regulatory capacity and scarce resources for digital oversight?
Answer: The U.S. government has long exercised minimal control over AI companies, and these tech giants will certainly not obey the governments of developing countries. Large companies will circumvent regulations, use your data for their own benefit, and avoid paying taxes.
Question: You describe how chatbots exacerbate mental health problems and destructive behaviors. In developing countries like Georgia, where access to professional mental health services is already limited, can chatbots be seen as both a risk and an opportunity?
Answer: The benefits of chatbots are great wherever access to professional mental health services is limited, but the risks for vulnerable individuals are the same everywhere, regardless of a country’s level of socio-economic development.
Question: You argue that large companies resist regulation. For countries on the periphery of the technology market, such as Georgia, what realistic strategies can protect vulnerable populations when such countries have little leverage over global corporations?
Answer: I think that speaking out loudly and leveling sharp criticism on these topics through the media is the best way to protect society and to influence the tech giants.
Based on an editorial by Dr. Allen Frances, Professor and Chair Emeritus of the Department of Psychiatry at Duke University, published in Psychiatric Times, on the safety and regulation of artificial intelligence in psychotherapy.

