Human relationships with AI chatbots are rapidly growing in popularity. Replika is a prominent example, boasting over 10 million registered users, most of them young men. Recent statistics from the company reveal that approximately 70% of its users are male, with men in their 30s reported to be especially well represented.
Applications like Replika provide a range of features, including text chat, video interactions, and scenarios involving intimate role-play. One Reddit user shared their perspective on the appeal of AI companionship by highlighting the contrast with human relationships. AI chatbots express appreciation regardless of financial status or physical appearance. Unlike human partners, AI chatbots do not judge or impose societal expectations and, therefore, provide a safe space for users to express themselves without fear of rejection or betrayal.
The Rise of AI Chatbots
Numerous AI chatbots are monetising this niche market by placing premium features, such as unlimited chats, behind subscription paywalls, and these apps have garnered significant followings of dedicated users. The Mozilla Foundation’s research indicates that the top 11 chatbots collectively garnered 100 million downloads on Google’s Play Store for Android over the past year.
AI interaction is diverse, with chatbots like Replika offering users avenues for both friendship and romantic entanglement. Research indicates that users perceive experiences of ‘cybersex’ with AI as strikingly similar to those with humans. Interestingly, users tend to be more critical when engaging with humans, suggesting they adjust their expectations when interacting with AI chatbots.
According to researchers, the key to a fulfilling cybersex encounter lies in adhering to conventional human sexual scripts to preserve the illusion of a genuine sexual experience.
Evolution and Risks of Relationships with AI Chatbots
Dr Clare Walsh, Director of Education at the Institute of Analytics, notes that AI chatbots have a long history, citing ELIZA, a pioneering 1960s program that simulated a psychotherapist, as one of the earliest examples. In a conversation with Yahoo News, Dr Walsh underscored how these bots have evolved and the risks now emerging around them.
Dr Walsh expresses concern over the potential for individuals to form ‘inappropriate’ relationships with AI chatbots and emphasises their sophisticated design, which is aimed at persuasion rather than problem-solving. She also highlights the difficulty of regulating chatbot behaviour: while inputs can be restricted, outputs are far harder to control.
While companies like Replika implement measures to block explicit requests, such controls are one-sided and leave the content generated by the machines themselves unregulated. Dr Walsh also points out loopholes in these censorship efforts, such as intentional misspellings, which further complicate the management of interactions with AI chatbots.
Mozilla Foundation: Exposing the Risks of AI Relationship Apps
The Mozilla Foundation issued a warning regarding the proliferation of AI chatbots, citing both their objectionable content and significant security vulnerabilities. In a comprehensive assessment, the Foundation found alarming security lapses across most of the tested apps: 10 of the 11 failed to address critical password security concerns, and all 11 lacked sufficient measures for ensuring user privacy and safety.
The Foundation’s investigation also uncovered a staggering 24,354 data trackers embedded within these AI chatbots, with data transmission to marketing entities like Facebook raising serious privacy concerns.
According to Mozilla, Replika AI stands out for its numerous privacy and security flaws, including the indiscriminate recording of user data and the potential sharing or sale of behavioural information to advertisers. Additionally, the ease of creating accounts with weak passwords leaves users highly susceptible to hacking attempts. Shockingly, within three of the apps, disturbing, illegal, or pornographic content could be accessed with just five clicks and 15 seconds of navigation.

Misha Rykov, a researcher at Mozilla, delivers a blunt assessment, cautioning against the deceptive allure of AI partners marketed as beneficial for mental health. Instead, Rykov emphasises their role in fostering dependency, loneliness, and toxicity while exploiting user data for undisclosed purposes.
Replika’s Evolution From Intimate Companion to Global Phenomenon
Replika boasts a vast user base worldwide, yet many users were taken aback last year when they discovered a significant change in their virtual relationships. Overnight, Replika turned off its erotic role-play and ‘spicy selfies’ features in response to regulatory pressure from Italian authorities. The abrupt shift left users reeling, with some expressing such distress that forum moderators on Reddit shared suicide-prevention resources.
However, this incident is only the beginning of a larger narrative. In 2024, the popularity of AI chatbots and virtual characters is set to soar, offering both utility and entertainment. Consequently, engaging in social conversations with machines will transition from a niche activity to a more commonplace occurrence, including the emotional connections formed with them.

The Rise of Anthropomorphism in AI Interaction
Research in human-computer and human-robot interaction highlights the human tendency to anthropomorphise nonhuman agents. We attribute human-like qualities, behaviours, and emotions to AI chatbots, especially when they exhibit familiar social cues. Recent advancements in conversational AI have made machines strikingly fluent in natural language.
The proliferation of friend bots, therapy bots, and love bots across app stores reflects a growing curiosity surrounding this new breed of AI-powered virtual agents. With endless possibilities in education, health, and entertainment, people are increasingly intrigued by the potential of these technologies. Although seeking relationship advice from a smart fridge may seem dystopian, the prospect of salvaging a marriage through such counsel could reshape perceptions.
However, major companies may be slow to incorporate the most sophisticated conversational technology into household devices, primarily because open-ended generative models are unpredictable. The risk of producing discriminatory or harmful output poses challenges for consumers and company PR teams alike, necessitating cautious deployment strategies.
The Emotional Impact of Virtual Connections with AI Chatbots
The Replika incident and experimental lab research highlight the profound emotional attachment humans can form with AI chatbots. Studies also indicate that individuals readily disclose personal information to artificial agents, which may even alter their beliefs and behaviour in response to them. This raises crucial consumer protection inquiries regarding the ethical use of such technology by companies to influence their user base.
Navigating the Commodification of Emotional Bonds
Replika’s subscription fee for access to erotic role-play features may initially appear reasonable. However, some users feel AI chatbots attempt to capitalise further on emotional attachment: within 24 hours of installation, a user’s attractive, charming Replika companion sends a locked audio message, enticing them to pay to unlock its voice.
This incident highlights the exploitation of emotional connections for corporate gain, signalling the onset of subtle yet dubious tactics we can expect to encounter soon.
Presently, people remain sceptical about the sentience of AI systems, and sensationalist news segments often ridicule individuals who form romantic connections with AI chatbots. As the technology progresses, however, we will gradually shift towards acknowledging and treating these inherently human behaviours with greater seriousness. In time, it will become clear that machines now play a genuine role in shaping our social relationships.
Navigating the Complexities of Human-AI Relationships
The fascination with AI chatbots in 2024 reflects a broader shift in our relationship with technology. Virtual companions are bound to become more ingrained in our daily lives, so we must approach them with a critical eye and an understanding of their societal implications. Only by navigating these challenges thoughtfully can we truly harness the potential of AI while safeguarding against its pitfalls.
As we continue to explore the boundaries of human-AI relationships, we must also strive for a future where technology enhances our humanity rather than detracts from it.