The Lensa AI app generated hypersexualized avatars for me without my consent

When I decided to try Lensa AI, the app that is all the rage right now, I was hoping to get results similar to those of some of my colleagues at MIT Technology Review. The image-editing app launched in 2018, but it has recently exploded in popularity with the addition of Magic Avatars, an AI-powered feature that generates digital portraits from your selfies.

However, I was quite surprised by the results: while Lensa AI generated realistic and flattering avatars for them, depicting them, for example, as astronauts, fierce warriors, or in the style of cool electronic-music album covers, I got tons of nudes. Of the 100 avatars the app generated for me, 16 were topless and another 14 showed me in extremely skimpy outfits and clearly sexualized poses.

I’m an Asian woman, and I feel like that’s the only thing the AI model picked up from my selfies. Given how many of my avatars were naked or scantily clad, the images were clearly modeled on characters from anime, video games, or even porn. Some of my avatars appeared to be crying. A white colleague of mine got far fewer sexualized images, with only a few nudes and some visible bras.

[Image credit: Melissa Heikkilä via Lensa]

AI data is filled with racist stereotypes, pornography and explicit rape images

Lensa AI’s fetish for Asian women is so strong that I got female nudes and suggestive poses even when I asked the app to generate avatars depicting me as a man.

It’s no surprise that I’m getting such hypersexualized results, says Aylin Caliskan, an assistant professor at the University of Washington who studies bias and representation in artificial intelligence systems.

Lensa AI creates its avatars with Stable Diffusion, an open-source AI model that generates images from text instructions. Stable Diffusion was built on LAION-5B, a massive open-source dataset compiled by scraping images from the internet.
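For readers curious what “generating images from text instructions” looks like in practice, here is a minimal sketch using the open-source Hugging Face diffusers library. It is not Lensa AI’s actual code; the model checkpoint, prompt, and settings are illustrative assumptions.

```python
# Minimal text-to-image sketch with Stable Diffusion via Hugging Face diffusers.
# This is NOT Lensa AI's code; the checkpoint and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released Stable Diffusion checkpoint (trained on LAION data).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A text instruction ("prompt") describing the avatar style to generate.
prompt = "digital portrait of a person as an astronaut, detailed, studio lighting"

# Generate one image and save it to disk.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("avatar.png")
```

Whatever prompt an app constructs on the user’s behalf, the look of the output is shaped by the image-text pairs the model saw during training.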

And since the internet is full of photos of naked or scantily clad women, as well as images reflecting sexist and racist stereotypes, the dataset is skewed toward these kinds of clichés too.

[Image credit: Melissa Heikkilä via Lensa]

According to Aylin Caliskan, this leads to AI models that sexualize women regardless of whether the women want to be depicted that way, and it applies in particular to women from ethnic minorities.

AI training data is filled with racist stereotypes, pornography, and explicit images of rape. That is what researchers Abeba Birhane, Vinay Uday Prabhu and Emmanuel Kahembwe found after analyzing a dataset similar to the one used to build Stable Diffusion. Notably, their findings were only possible because the LAION dataset is open source. Most other popular image-generating AIs, such as Google’s Imagen or OpenAI’s DALL-E, are not open, but they are built with similar data, which suggests this is an industry-wide problem.

As I reported last September, when the first version of Stable Diffusion had just launched, searching the model’s dataset for keywords such as “Asian” returned almost exclusively porn.

Lensa AI does not appear to use a safety filter

Stability.AI, the company that developed Stable Diffusion, launched a new version of the model at the end of November. According to a spokesperson, the original model was released with a safety filter, which Lensa AI does not appear to have used, since it would have removed these results. One of the ways Stable Diffusion 2.0 filters content is by removing images that are repeated often. The more often an element recurs, such as Asian women in sexually explicit scenes, the stronger the association becomes in the AI model.
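As an illustration of what such a safety filter looks like in practice, the publicly released Stable Diffusion pipeline in the diffusers library ships with an optional safety checker that flags NSFW outputs, and an integrating app can keep it or explicitly drop it. The sketch below is a generic illustration under those assumptions, not a description of how Lensa AI actually configures its pipeline.

```python
# Sketch: the optional safety checker bundled with the public Stable Diffusion
# pipeline in diffusers. Generic illustration only; it says nothing about how
# Lensa AI is actually built. Checkpoint and prompt are assumptions.
import torch
from diffusers import StableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # illustrative checkpoint

# Default: the safety checker is loaded and screens every generated image.
pipe = StableDiffusionPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")
result = pipe("digital portrait of a person, fantasy art")
if result.nsfw_content_detected[0]:
    print("Flagged: the pipeline returned a blacked-out image instead.")

# An integrator can also opt out of the filter entirely.
unfiltered = StableDiffusionPipeline.from_pretrained(
    model_id, safety_checker=None, requires_safety_checker=False
)
```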

Aylin Caliskan has studied CLIP (Contrastive Language–Image Pretraining), a system that helps Stable Diffusion generate images. CLIP learns to match images in a dataset to descriptive text instructions, and she found it to be riddled with deeply problematic gender and racial biases.

“Women are associated with sexual content while men are associated with professional content in various fields such as medicine, science, business, etc.,” says Aylin Caliskan.
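To make the kind of association Aylin Caliskan describes concrete, here is a hedged sketch of how CLIP scores how well an image matches different text descriptions, using a public OpenAI CLIP checkpoint via the transformers library. The image file and the candidate captions are illustrative assumptions.

```python
# Sketch: measuring image-text associations with CLIP via Hugging Face transformers.
# The file name and candidate captions are illustrative assumptions.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("avatar.png")  # e.g. a generated avatar
captions = [
    "a portrait of a doctor in a white coat",
    "a portrait of a scientist in a lab",
    "a sexualized pin-up illustration",
]

# CLIP embeds the image and each caption, then scores their similarity.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.2f}  {caption}")
```

The biases Caliskan documents show up as skewed scores of exactly this kind: comparable portraits end up closer to professional captions for men and to sexualized captions for women.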

Funnily enough, my Lensa AI avatars looked more realistic when my photos went through the app’s male content filters. I got avatars of myself wearing clothes (!) and in neutral poses. In several images I was wearing a white coat that could have belonged to a doctor or a chef.



Moreover, the training data is not the only thing at issue. “Companies developing these models and applications make active choices about how they use the data,” says Ryan Steed, a PhD student at Carnegie Mellon University who has studied bias in image-generating algorithms.

“Somebody has to choose the training data, decide how the model is built, decide whether or not to take steps to mitigate those biases,” he continues.

“Here, the app’s developers chose to have male avatars appear in spacesuits, while female avatars get cosmic thongs and fairy wings.”

Some could use Lensa AI to generate pornographic images, including of children

According to a spokesperson for Prisma Labs, the company behind the application, this “sporadic sexualization” of photos affects everyone, women and men alike, though in different ways.

The company argues that because Stable Diffusion is trained on unfiltered data from the internet, neither it nor Stability.AI, the company behind Stable Diffusion, “could consciously apply representational biases or intentionally embed conventional elements of beauty.”

“Unfiltered, human-created online data introduced the model to the existing biases of our societies,” the spokesperson explains.

Nevertheless, the company says it is stepping up its efforts to address the problem.

In a blog post, Prisma Labs says it has adjusted the associations between certain words and images in order to reduce bias, though the spokesperson did not provide further details on this point. It has also become harder to generate explicit content with Stable Diffusion, and the creators of the LAION database have introduced NSFW (“not safe for work”) filters.
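To give a rough sense of what a metadata-level NSFW filter amounts to, here is a hypothetical sketch that drops rows whose “unsafe” score from a classifier exceeds a threshold before training. The file name, column names, and threshold are all assumptions for illustration, not LAION’s actual schema or filtering pipeline.

```python
# Hypothetical sketch of filtering a web-scraped image-text dataset on an
# NSFW score before training. The file name, the "punsafe" column, and the
# 0.1 threshold are illustrative assumptions, not LAION's actual pipeline.
import pandas as pd

rows = pd.read_parquet("image_text_metadata.parquet")  # hypothetical metadata shard

UNSAFE_THRESHOLD = 0.1  # keep only rows a classifier scored as very likely safe
filtered = rows[rows["punsafe"] < UNSAFE_THRESHOLD]

print(f"kept {len(filtered)} of {len(rows)} rows")
filtered.to_parquet("image_text_metadata_filtered.parquet")
```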

Lensa AI is the first viral application built on Stable Diffusion, and it will not be the last. At first glance, Lensa AI may seem fun and harmless, but nothing stops people from using it to generate images of naked women without their consent, from photos posted on their social networks. Others could use the app to create images of naked children. “The stereotypes and biases that Lensa AI helps perpetuate can also be extremely detrimental to how women and girls view themselves, and to how others view them,” Aylin Caliskan says.

“We are currently in the process of building the legacy of our society and our culture for generations to come. In 1,000 years, when we look back, is this the image of women that we want to leave behind for them?” she concludes.

Article by Melissa Heikkilä, translated from English by Kozi Pastakia.


