From Trump Nevermind Babies to Deep Fakes: DALL-E and the Ethics of AI Art
Want to see a picture of Jesus Christ laughing at a meme on his phone, Donald Trump as the Nevermind Baby, or Karl Marx slimming down at the Nickelodeon Kids’ Choice Awards?
If you’ve been on Twitter or Instagram in the past two weeks, it’s been hard to miss surreal renderings of scenarios like these in the form of AI art.
DALL-E (and DALL-E mini), the creator of these works of art, is a neural network that can take a text prompt and turn it into an image. It was trained on millions of images from the internet with accompanying text, and it learned to create images of things you would never expect to see together, like an avocado chair.
Text-to-image technology is advancing at a rapid pace: the full DALL-E model can produce startlingly sharp images depending on the prompt you provide, while the mini version is still just clunky enough to capture the weird internet style that makes its images instantly memeable. The best examples can be found on the r/weirddalle subreddit.
But experts say the technology poses ethical challenges.
Prof Toby Walsh, artificial intelligence researcher and author of a book on the morality of AI, says the kind of technology that powers DALL-E makes it easy to create fake images.
“We see deepfakes being used all the time, and the technology is going to allow still images, but eventually also video, to be synthesised [more easily] by bad actors,” he says.
DALL-E has content policy rules in place that prohibit bullying, harassment, sexual or political content, and images of people created without their consent. And although OpenAI has limited the number of people who can sign up for DALL-E, its lower-quality counterpart, DALL-E mini, is open source, meaning people can produce whatever they want.
“It’s going to be very difficult to make sure people don’t use them to create images that people find offensive,” Walsh says.
Dr Oliver Brown, a computational creativity researcher at the University of New South Wales, explains that the nature of AI neural networks makes it difficult to stop DALL-E from creating offensive images in the first place, but it is possible to prevent the person requesting the image from accessing and sharing it.
“You can obviously just have a filter at the end that sort of tries to filter out things that are bad.”
Walsh says that in addition to the regulatory framework and company policies regarding the use of technology, the public also needs to be educated to be more discerning about what they see online.
“If I see [an image] on the BBC website or the Guardian website, hopefully they’ve done their homework, and I can be a little more confident than if I got it on Twitter. [In that case] I’d ask all the questions about [whether it is] a piece of fake content or not.”
The other major ethical issue Walsh sees coming is the potential for text-to-image AI to replace graphic design jobs.
“You can imagine that more of us will be able to do graphic design, because we could say ‘draw me a picture’ to whatever specs we want, and we’ll get that picture. Whereas before, a graphic designer would have produced that image,” he says.
“Graphic design isn’t going away, it will lead to even more graphic design because we can all access these tools, but graphic designers might have less work to do themselves.”
But Brown says this new technology will also enable “rapid creativity”, meaning the thought that goes into phrasing the image request will itself become part of the creative process.
“This new challenge is for creatives to think about what they want to put into a system like this,” he says.
The clunky look of DALL-E mini’s generated images is also becoming an internet art form in its own right, says Brown.
“I can imagine that would be just huge for something like Instagram or just direct messaging with your friends when you’re trying to send memes.
“There will be all kinds of crazy image-generating subcultures. So if it produces these kinds of blurry, slightly mangled images with people’s arms in the wrong places, that’s okay; we’re just getting used to that aesthetic.”