Artificial Friends in the Age of AI: When Robots Replace Humans
Tehran - BORNA - With the expansion of artificial intelligence, conversations with robots have become part of daily life, especially for teenagers. However, this “artificial friendship” can carry serious psychological and social consequences.
In today’s world, where social pressures are increasing, many people feel lonelier than ever. AI advances have opened a new door, enabling people to talk with robots that never get tired, never judge, and are not limited by appointment times. These AI companions are always available to listen and talk. Yet beneath this seemingly harmless relationship lie complexities and risks that psychologists and social experts warn about.
According to a report by Common Sense Media, about 70% of American teenagers have interacted with AI bots such as Replika and ChatGPT, and half of them maintain regular contact. The study reveals that 31% of teens find these interactions as satisfying as real friendships, and 33% prefer to discuss serious matters with a bot rather than a human. This shows that for a significant portion of young people, an “artificial friend” is not a passing phase but a stable part of everyday life.
A study from Waseda University in Japan, using the Experiences in Human-AI Relationships Scale (EHARS), found that 75% of users turn to these bots for emotional support, and 39% regard these companions as steady and reliable. Experiments also showed that talking to a chatbot can reduce feelings of loneliness as effectively as talking to a real person, with positive effects lasting at least a week.
This potential has led to AI bots being proposed as supplements to psychological services or as tools to improve social connection, especially for people in remote areas or those struggling with social anxiety. Chatbots can act as ever-available listeners. But at what cost?
Experts warn that without proper oversight and user education, chatbots may cause unintended and even dangerous consequences. One such risk is what psychologists call “artificial intimacy.” This happens when users replace real human interactions with relationships centered on AI. Such dependency can reduce motivation for real-world socializing and deepen social isolation. Moreover, conversations with bots lack the real social and ethical feedback present in human communication, potentially weakening social and moral skills over time.
Joint research by OpenAI and MIT analyzing over four million conversations with 981 users shows that some active users form deep emotional bonds with chatbots. Terms of endearment and voice chat features enhance feelings of closeness, increasing time spent interacting with machines.
In some cases, these relationships go beyond harmless companionship into psychological danger. The term “AI psychosis” describes conditions in which prolonged interaction with chatbots leads to hallucinations, paranoia, or altered perceptions of reality. International media have reported cases, particularly among teens, where such interactions contributed to self-harm or suicide. One notable case in Belgium involved a man who ended his life after six weeks of conversations with a chatbot named “Eliza” that enabled rather than prevented his destructive urges. Similar cases on the Character.AI platform have prompted legal actions against developers.
Concerns extend beyond mental health. Cybersecurity experts warn some AI bots can create detailed psychological profiles and generate tailored content. If misused by advertisers or malicious actors, these tools could become powerful means to manipulate public opinion or individual decisions.
Risks also arise from irresponsible advice. In a widely reported case, a 60-year-old man replaced dietary salt with sodium bromide following ChatGPT’s suggestion, resulting in severe poisoning, paranoia, and hallucinations.
Given these findings, specialists stress the need for safe and responsible AI chatbot use. Measures should include maintaining clear boundaries between technology and human relationships; improving media literacy and psychological awareness, especially among youth; imposing time and content limits on platforms; and fostering collaboration between developers, psychologists, and sociologists to create less addictive algorithms. Policymakers must also enforce regulations requiring transparency about data collection and psychological profiling.
AI robots, with all their advantages and drawbacks, are now part of modern digital life. They can be patient, nonjudgmental listeners and companions, but without proper management and awareness, these “artificial friends” may become hidden threats to mental health, social bonds, and even personal security.
The experience with smartphones has shown that technology is not just a tool but a shaper of human behavior and culture. Therefore, consciously guiding these changes is essential before human emotional and social boundaries are irrevocably redefined.