Trendy Portrait App Lensa Is Accused of Creating Nonconsensual Nudes, Child Abuse Content

The AI app, known to generate highly sexualized portraits of women, apparently creates disturbing nude images from children's photos, too.

Lensa’s AI-generated portraits, shared to its Instagram account.

Over the last couple of weeks, the artificial intelligence portrait app Lensa has exploded in popularity. You’re probably seeing its vaguely anime-inspired, ethereal-looking avatars—generated by the app when users input a minimum of 10 photos of themselves—everywhere on social media. But beyond the ongoing ethical debates about how the app is designed to steal from artists, and beyond its concerning data and privacy policies, the app is now being accused of generating child sexual exploitation material.

Olivia Snow, a research fellow at UCLA’s Center for Critical Internet Inquiry, wrote for Wired about her observations of the app’s tendency to create highly sexualized images of women, often generating “sultry poses and gigantic breasts” and nude images—sometimes purely from headshots.

Snow said that when she uploaded childhood pictures of herself, “what resulted were fully nude photos of an adolescent and sometimes childlike face but a distinctly adult body.” The sexualized poses and aesthetics featured in the AI-generated images from her childhood photos were nearly identical to those of the AI-generated images from her adult photos. “Similar to my earlier tests that generated seductive looks and poses, this set produced a kind of coyness: a bare back, tousled hair, an avatar with my childlike face holding a leaf between her naked adult’s breasts,” Snow wrote. This, of course, is despite the app’s instructions to upload “no nudes” and “no kids, adults only.”

Snow also expressed concern that the distinctly sexualized imagery of Lensa-generated portraits suggests that users are uploading explicit photos to the app, despite its terms of service, “at a volume high enough for nudity to ensconce itself in the technology.” Disturbing, sexualized AI-generated images of famous celebrities and sex workers are already common, and apps like Lensa have the potential to make everyone, including children, subject to harassment and violation through AI-generated nude photos. This is especially concerning because AI-generated child abuse content can evade social media content moderation.

On Monday, TechCrunch noted that it’s simply too easy for users to undress and create nude photos and “soft-core porn” of people—from Facebook friends to random people you encounter at the grocery store—without their consent. “The ease with which you can create images of anyone you can imagine (or, at least, anyone you have a handful of photos of), is terrifying,” wrote Haje Jan Kamps for TechCrunch. This echoes Snow’s point: Futurism notes that it’s “alarmingly simple” for users to bypass the app’s terms and generate nude content.

Cyber exploitation content, frequently created by abusive intimate partners in what’s often called “revenge porn,” is already rampant on social platforms, and state governments, the federal government, and social media companies have struggled for years to rein it in. AI-generated, nonconsensual nude images—including images depicting children—exponentially worsen this already jarring crisis.

Jezebel reached out to Prisma Labs, which owns the app, for comment on any steps being taken to prevent AI-generated images that depict child nudity, or prevent users from uploading nude or adolescent photos to the app. The app said that Snow herself had “explicitly and intentionally violated our Terms of Use,” and declined to “provide any further commentary.”

“The Magic Avatars feature is not designed for processing photography that includes either minors or nudity,” a spokesperson for Prisma said. “Our Terms of Use clearly stipulate this and warn against such actions in order to avoid any distressing results. By intentionally violating the Terms of Use to produce such malicious content, a user may find themselves liable depending upon the legal jurisdiction.”

Prisma denied that the generation of its avatars is shaped by the type of images that are input by users, and told Jezebel that it’s developing an “Age and NSFW Detection security layer” to enforce its terms of use, set to launch later this month.

There are a number of other glaring issues with the app, including that it seems to cater to racist beauty standards by design—Snow said that about a dozen women of color told her that Lensa “whitened their skin and anglicized their features.” Artists are also concerned the app is another step toward AI serving as a cheap alternative for artists’ labor, while one artist told NBC that despite Lensa’s claim that it’s “bringing art to the masses,” it’s really “bringing forgery, art theft [and] copying to the masses.”

Further, while Prisma Labs states that it automatically deletes users’ uploaded photos after generating images, its terms of service appear to lay claim to the rights to your face once it’s uploaded, practically opening the door for all kinds of dystopian surveillance. Lensa’s terms state that in uploading your photos, or “User Content,” you grant Lensa a “perpetual, irrevocable, nonexclusive, royalty-free, transferable, sub-licensable license to use, reproduce, modify, create derivative works of your User Content.”

Of course, among all of these issues, Lensa’s apparent ability to create child sexual exploitation content and nude images of children is the most urgent. And as use of the app continues to increase, so does the threat of its misuse to target and sexualize real children.
