All of the 400 exposed AI systems found by UpGuard have one thing in common: They use the open source AI framework llama.cpp, which allows people to relatively easily deploy open source AI models on their own systems or servers. However, if it is not configured properly, it can inadvertently expose the prompts people send to it. As companies and organizations of all sizes deploy AI, properly configuring the systems and infrastructure they use is crucial to preventing leaks.
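
As a rough illustration of the kind of misconfiguration involved, the Python sketch below checks whether a llama.cpp server answers its diagnostic /slots endpoint, which can include the text of prompts currently being processed, without any credentials. This is a hedged sketch, not UpGuard's methodology: the BASE_URL address is a placeholder, and the endpoint's exact behavior and the available mitigations vary by llama.cpp version.

```python
# Minimal sketch: check whether a llama.cpp HTTP server exposes its /slots
# endpoint without authentication. The /slots response can include the text
# of prompts the server is currently processing, which is how exposed
# deployments end up leaking conversations.
# Assumptions: the target runs llama.cpp's bundled server ("llama-server")
# with the slots endpoint enabled; BASE_URL below is a placeholder.
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8080"  # hypothetical address; replace as needed

def slots_exposed(base_url: str) -> bool:
    """Return True if GET /slots responds with slot data and no credentials."""
    try:
        with urllib.request.urlopen(f"{base_url}/slots", timeout=5) as resp:
            data = json.load(resp)
    except Exception:
        # Connection refused, auth required, or endpoint disabled: not exposed.
        return False
    # The endpoint returns a JSON array of slot objects, each of which can
    # carry prompt text for an in-flight request.
    return isinstance(data, list)

if __name__ == "__main__":
    if slots_exposed(BASE_URL):
        print("WARNING: /slots is publicly readable; prompts may be leaking.")
    else:
        print("/slots is not reachable without credentials.")
```

The practical fixes are the inverse of this probe: bind the server to localhost, put it behind an authenticating reverse proxy, or use llama.cpp's API-key option so that anonymous requests like this one are rejected.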

Rapid improvements to generative AI over the past three years have led to an explosion in AI companions and systems that appear more “human.” For instance, Meta has experimented with AI characters that people can chat with on WhatsApp, Instagram, and Messenger. Generally, companion websites and apps allow people to have free-flowing conversations with AI personas, whether custom characters with configurable personalities or depictions of public figures such as celebrities.

People have found friendship and support in their conversations with AI, and not all companion services encourage romantic or sexual scenarios. Perhaps unsurprisingly, though, people have fallen in love with their AI characters, and dozens of AI girlfriend and boyfriend services have popped up in recent years.

Claire Boine, a postdoctoral research fellow at the Washington University School of Law and affiliate of the Cordell Institute, says millions of people, including adults and adolescents, are using general AI companion apps. “We do know that many people develop some emotional bond with the chatbots,” says Boine, who has published research on the subject. “People being emotionally bonded with their AI companions, for instance, makes them more likely to disclose personal or intimate information.”

However, Boine says, there is often a power imbalance in becoming emotionally attached to an AI created by a corporate entity. “Sometimes people engage with those chats in the first place to develop that type of relationship,” Boine says. “But then I feel like once they’ve developed it, they can’t really opt out that easily.”

Even as the AI companion industry has grown, some of these services lack content moderation and other controls. Character AI, which is backed by Google, is being sued after a teenager from Florida died by suicide, allegedly after becoming obsessed with one of its chatbots. (Character AI has increased its safety tools over time.) Separately, users of the generative AI tool Replika were left reeling when the company made changes to its chatbots' personalities.

Aside from one-on-one companions, there are also role-playing and fantasy services, each offering thousands of personas to talk with, that cast the user as a character in a scenario. Some of these are highly sexualized and provide NSFW chats; some feature anime-style characters, a number of which appear young, on sites that claim to allow “uncensored” conversations.

“We stress test these things and continue to be very surprised by what these platforms are allowed to say and do with seemingly no regulation or limitation,” says Adam Dodge, the founder of Endtab (Ending Technology-Enabled Abuse). “This is not even remotely on people’s radar yet.” Dodge says these technologies are opening up a new era of online pornography, which can in turn introduce new societal problems as the technology continues to mature and improve. “Passive users are now active participants with unprecedented control over the digital bodies and likenesses of women and girls,” he says of some sites.
