Scammy AI
Remember Charlie and the Chocolate Factory? That cute children’s story where some lucky kids get to tour Willy Wonka’s secret factory. This story, written by Roald Dahl in 1964, has had three movie interpretations over the years, each of them very different from the others.
But one thing all the movies do have in common is their colorful sets. In the original story, each room in the factory is described in rich, beautiful detail, and the movie versions did their best to interpret them. No matter which movie is your favorite, you can’t fault the amazing sets and the sheer magic of these imaginary worlds with their Oompa Loompas and chocolate rivers and trees made of candy.
So when parents in Glasgow, Scotland saw an ad for a Willy Wonka-inspired Chocolate Experience in February of 2024, a local thing they could bring their kids to, many jumped at the chance to go. The pictures in the ad were chock-full of amazing landscapes with bright colors and lollipops, a magical, colorful land with a level of detail you’d only expect to find in places like Disney World. Except it was right there in a warehouse in Glasgow, for just a few bucks! The equivalent of $45 in US dollars. Way cheaper than Disney World, and right around the corner. I can just imagine these parents picturing the look of delight on their kids’ faces when they got to experience this [sing] “world of pure imagination.” Oh, what a wonderful day it would be for the kiddos!
There was a line around the block to get in. But when these families entered, they got… a bare warehouse with a few pictures hung on the wall, a small candy cane sculpture thing, a bouncy castle, and a few actors in costume who tried their best despite knowing the experience was lame, lame, lame.
It was a disaster. Angry parents demanded their money back, and a spokesperson for House of Illuminati, the company that promoted the event, apologized profusely.
It turns out that they had used AI, artificial intelligence, to generate images to promote the event, not actual pictures of what it looked like. And the difference between the AI images and reality was huge. Like, not-even-from-the-same-planet huge.
On top of that, they gave the actors these AI-generated scripts that made no sense at all. For example, the script called for this grim reaper kind of character to jump out and scare the kids. I don’t remember that being in the original Willy Wonka story.
The whole event was a big fat fail. And a perfect example of how not to use AI. In retrospect, those promotional images were clearly AI-generated, but not everyone knows how to spot them.
There’s been a lot of buzz lately about the use of AI to generate art… not just pictures, but videos, music, and even scripts for plays and movies.
Some people think AI-generated content is a wonderful thing because it’s so easy and convenient, but a lot of people hate it, for a whole bunch of different reasons. In this episode we’re going to take a look at AI-generated art–why people use it, why some want to outlaw it, and how to spot it so you don’t get duped.
So strap yourself in for a ride through ethical conundrums, artist lawsuits, six-fingered presidents, and everything in between.
The concept behind AI-generated content is pretty genius. A program runs through thousands of examples of something, whether it’s paintings and drawings or music, videos, or written material, and gathers information about them. For example, the program might go through all the paintings of Vincent Van Gogh and gather information about the subject matter, the style, and the colors used. Then the program, or algorithm, goes through what’s called training. The algorithm produces art based on the information it has, and those results are rated by humans as good or bad. The algorithm takes these ratings and makes more art, which is rated again. After many iterations, the algorithm starts to figure out the patterns that get a “good” rating, and after that, it can make pretty good art all on its own. When it’s in good shape like that, it becomes what they call an AI model. If you’ve used an AI program online, it’s already a model, already trained.
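For the technically curious, here’s a toy sketch in Python of that rate-and-retrain loop. To be clear, everything in it is made up for illustration: make_art and human_rating are hypothetical stand-ins for a real image generator and real human raters, and no actual AI model is anywhere near this simple.

```python
import random

# Toy illustration of the train-rate-retrain loop described above.
# make_art and human_rating are hypothetical stand-ins; no real AI
# model works this simply.

def make_art(style_weights):
    # Produce a candidate "artwork": here, just a score for each style
    # feature, biased by the weights learned so far.
    return {feature: random.random() * weight
            for feature, weight in style_weights.items()}

def human_rating(artwork):
    # Stand-in for a human judging the piece good (1) or bad (0).
    return 1 if artwork["color_harmony"] > 0.5 else 0

style_weights = {"color_harmony": 1.0, "brushwork": 1.0, "composition": 1.0}

for _ in range(1000):
    art = make_art(style_weights)
    rating = human_rating(art)
    # Nudge each weight up after a good rating, down after a bad one.
    for feature, value in art.items():
        style_weights[feature] += 0.01 * (1 if rating else -1) * value

# After many rounds, the weights encode the patterns that tend to earn
# a "good" rating; that learned state is what gets called a model.
print(style_weights)
```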
AI services work off of text prompts. You type in “image of a garden by a lake, in the style of Monet,” hit the GO button, and a few minutes later you get a lovely impressionist picture generated on the fly. Hit the GO button again, and you get a completely new picture in the same style. Each one is an original, and none is a copy of any existing painting. You can do the same to generate a piece of music, or a screenplay for a film, or an essay for school, or an email to send your ex about child visitation. Whatever you want.
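To make that concrete, here’s roughly what a prompt-driven request looks like in code, using OpenAI’s Python library for DALL-E (one of the services I mention trying below). This is a minimal sketch: it assumes you’ve installed the openai package and set an API key, and other services differ in the details, but the idea is the same: prompt in, brand-new picture out.

```python
# Minimal sketch: one text prompt in, one freshly generated image out.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="image of a garden by a lake, in the style of Monet",
    size="1024x1024",
    n=1,  # DALL-E 3 returns one image per request
)

# Each run returns a brand-new original; run it again for another one.
print(result.data[0].url)
```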
And most of these AI services state that whatever they generate from your prompt belongs to you. It’s yours, and no one else’s. This is a bit of a sticky point, which I’ll get back to later.
I’ve tried out AI art generation with two of the more popular models, Midjourney and DALL-E, and it can become addictive. It all started when I needed a tapestry of a medieval battle for one of my short films, and I couldn’t find anything online. AI to the rescue! And we got our tapestry pictures. I’ve also used it to make a colorful cover for a report for work, and a bunch of other small projects.
The surge of AI in the past few years is largely due to leaps in technology. Ten years ago, we didn’t have the computing power that AI requires, and now we do. And OpenAI, one of the leading organizations in the field, kicked things off in late 2022 with the release of ChatGPT, where you could type in a text prompt like “Who was Abraham Lincoln?” and get a response worthy of a term paper. Just a couple of years later, AI is everywhere, and everyone can get it.
Of all the arts, images are in the lead for quality. AI images are cropping up everywhere: book covers, logos, posters, look books for film pitches, and of course, all over the internet.
But is this really a good thing? It depends who you ask. Now that anyone can create art with a few words and a click of a button, a big concern amongst artists is that they’re going to lose paying work. Many are annoyed that their art is being used to train AI models without their permission. And still others, like the parents in Glasgow, are irritated that AI was used to fool them.
In a minute, we’ll look at each of these concerns. But for now, let’s take a little dip back into the past.
The concept of computer-generated art is not entirely new. As early as the 1960s, artists like Georg Nees, Frieder Nake, and Manfred Mohr were writing algorithms to generate art, just to experiment with technology and push the boundaries. But there’s a big difference between what those artistic pioneers were doing, and what AI is doing now.
The early programs did only what the individual artists programmed them to do, without any other input. Today’s AI is trained on thousands of pieces of art, stuff produced by other people, real live artists.
Some of today’s AI programs even scrape the internet for input, which means going around to publicly visible sources and grabbing up whatever it can find, like pictures, videos, stories, scripts, you name it.
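Scraping sounds abstract, so here’s a minimal sketch of what it means in practice: fetch a public web page and harvest every image address on it. The page URL below is a made-up placeholder, and real training-scale scrapers crawl millions of pages, but the basic move is the same.

```python
# Minimal sketch of scraping for training data: fetch one public page
# and collect every image URL on it. The address is a hypothetical
# placeholder; real scrapers crawl millions of pages.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

page_url = "https://example.com/gallery"  # hypothetical public page

html = requests.get(page_url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

image_urls = [urljoin(page_url, img["src"])
              for img in soup.find_all("img")
              if img.get("src")]

print(f"Collected {len(image_urls)} image URLs as training input")
```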
This brings us to the first troubling fact about AI art–that it’s often built on the backs of other people’s art, without their permission. And this is where it really pisses people off.
In January 2023, a group of artists filed a lawsuit against Stability AI, a company that provides the AI technology that many AI services use, saying their artwork was used to train the AI models without their permission. The case is ongoing, with a judge ruling in August 2024 that the case can go forward.
Writers have the same kinds of concerns, but for different reasons. Like, suppose you write a screenplay and post it on your website. It’s copyrighted, so you don’t worry about someone stealing it. But then AI comes along and scrapes it, because it’s publicly available. A few days later, some guy fires up his favorite AI app and types in a prompt to write a similar screenplay. Let’s call this guy George. And the AI uses what it learned from your screenplay to write an entirely new work, one that now belongs entirely to George.
Some would argue that it’s no different from George reading your screenplay and using it for inspiration. And they have a point. AI isn’t any more capable than the people who wrote it, but it does work a lot faster, and it can recognize patterns in a bunch of artwork quicker than you or I ever could.
But AI is never original. It can’t think in the way an artist does. Even so, in the case of drawings or paintings, this is a huge problem, since the artwork that AI generates is often as good as anything I’ve seen from some real live artists. With writing, though, AI has a bigger problem: it’s not original, and frankly, its writing sucks.
As an example, my film team and I did an experiment a few months ago where we tried to generate a script for a short film with AI. We figured AI would come up with something strange and outrageous, and we could have fun making the film and showing it to people as an example of AI’s kookiness. But what it came up with, over and over, was straight-up boring and idiotic. The story was dull, and the dialogue was what you’d expect from an eight-year-old. We scrapped that idea pretty quick. So much for AI taking over the filmmaking business.
At the same time, I hear that screenwriters are concerned that studios will use AI to generate scripts rather than asking for original work, or worse, that they’ll ask real writers to clean up terrible AI-generated scripts. Maybe this will become a problem, I don’t know. If it costs the same to get it cleaned up as to buy something that actually came out of a writer’s creative brain, maybe this won’t ever be a thing.
Another concern in the film business is that studios will just go ahead and produce these terrible scripts as-is, which means they won’t be buying original work as much. I think this will be a self-correcting problem. After a studio spends millions to produce one of these things and no one wants to see it, they’ll see the light. At least we can hope so.
Back to the problem of AI-generated art. Is it really as good as original art? How do you recognize that something was generated with AI?
AI images have a certain look to them. First of all, AI likes to use a lot of blue and orange, so that’s one clue. And things look too clean, like a photo that’s been retouched too much.
Another clue is what I call the six-finger problem. AI doesn’t always do a good job with protrusions like fingers and arms and legs. If there’s something strange going on with the hands and limbs in the picture, there’s a good chance it’s AI-generated.
AI-generated video is still in its infancy. There are a few platforms around that will animate something a little bit this way or that, almost like panning a picture. Or they can put together a slideshow type of thing, pretty dull stuff. But there’s nothing yet that can produce a convincing full-blown video from a text prompt.
I’m not talking about a video generated directly from an existing video, like when someone swaps out a face or takes a video of a speech and makes someone appear to say words they never said. Those types of videos, called deepfakes, have been around for a while. But decent videos from scratch, from a text prompt, are still not possible.
In early 2024 we saw demonstrations of Sora, an amazing tool from OpenAI for creating videos out of thin air, but it’s not ready yet. You know all those problems with arms and legs in images? Multiply those by a thousand and you’ll understand the challenge that realistic videos face. The Sora website shows some pretty interesting demonstrations of this issue, like a woman walking through Tokyo, where the scenery is perfect but her legs, well, they swap places every once in a while in a way that could make squeamish people nauseous. And there’s a dalmatian puppy that defies the laws of physics, a birthday party where everyone has big smiles while awful things happen to their hands, and prancing wolf cubs that mysteriously divide and multiply like big cute fluffy amoebas. I find these videos pretty awesome to look at from a technical perspective, but I advise you to stay away if you’re the kind to get nightmares from them.
My point is that video generated from scratch, using just text prompts alone, is still a ways off. However, there are some companies making good use of existing video clips to make new videos. A company called Invideo.ai uses stock video and photography to generate short videos for creators. The company pays for the rights to use the stock images, so it’s on the up-and-up. You type in a prompt, and it uses paid-for images and video to generate a video for you. This is, in my opinion, an ethical use of AI, where the creators of the original imagery are at least getting something out of it.
This type of usage kind of addresses another problem that artists have with AI–the fact that they’re losing business. It’s just plain cheaper for a company to generate a bunch of AI images than to hire a live artist to draw them.
I’m not sure where I stand on this question. On one hand, in every case where I’ve used an AI-generated image, it wasn’t like, “Oh, ordinarily I would pay an artist a thousand dollars to make this image, and now I’m going to save that money by using AI! Mwahahaha!” It was more like, hmm, it would be nice to have an image here, so instead of grabbing something off a royalty-free website, I’m going to have a little fun and create something with AI. No artists have lost work because of my choices. And most of the people I know who use AI images are in the same boat. It’s not like we were going to pay for it to begin with. The use of AI just saved us hours of scouring the web for something we could legally use.
Another perspective is that visual artists went through a similar crisis when stock photos and video started to become widely available. Instead of hiring a photographer and models for a day, a place like an advertising agency could save big bucks by using stock photographs or videos instead.
And well before that, new technology was constantly making artists upset. TV was supposed to kill movies, and photography was supposed to kill painting. None of these things happened. There will always be a need for real live photographers, cinematographers, painters, and other visual artists, and even home streaming services haven’t replaced the experience of watching a movie on the big screen in a giant theater with dozens of other people, all cheering or gasping or laughing at the same scene while you balance a big bucket of overpriced buttery popcorn on your lap. There’s room for all these artistic experiences in our lives.
My point is that technology shifts and changes every few years, with art forms experiencing more or less popularity for one reason or another, and I don’t think we can really blame AI for that.
On the other hand, I do agree that using copyrighted works to train an AI model without the artist’s permission, and without compensating them, is kinda skeevy. Hopefully we’ll see more AI services turning to ethical models, where artists are compensated if their work is used to train an AI model.
So now, let’s talk about how to spot AI images and text. This can be tricky, but I’ll try to explain.
With text, there’s just a certain feel to it, a certain disconnect that makes you feel uneasy. It’s like a super polished salesperson spouting all the things you want to hear, but with really bad corny jokes and puns. If the person creating the AI text doesn’t edit what they get before sending it to you, it sounds grammatically perfect but mechanical, and just plain weird.
As for images, if you play with an AI service for a few hours, you start to get a feel for the types of images it produces. Lots of blue and orange, everything is too clean, and there’s often someone with an extra elbow or the wrong number of fingers.
But there are AI services that will produce a variation on an existing photo, and that’s where things get alarming.
Scammers have started to use AI to avoid detection. Here’s how they do it.
Before AI, if you got a text from an extremely attractive person wanting to be your boyfriend or girlfriend, you could do a reverse image search on any pictures they sent you, and find the original. And you would find out pretty quick that it’s a photo from some other person’s Instagram, someone whose name doesn’t match the texts you’re getting. Or if it was a celebrity, you could find those original images in that person’s feed, and realize that the scammer had just copied the images. This would give you a clue that it’s a scam, that the person was trying to rope you into an online relationship so they could start asking for money.
But with AI, a scammer can create pictures that are sort of like the original, but different enough that they either won’t come up in a reverse image search, or that you might start to believe they’re sending you real pictures, live and in real time.
Just like the parents in Glasgow who thought they were getting a unique Willy Wonka experience. Just like all the unfortunate men and women who think they’re in an online relationship with an attractive stranger, or with Johnny Depp or Taylor Swift, who for some reason can’t get access to their millions and need you to send them money. Just like any of the countless people who have fallen for scams because the image just looked so real.
This, THIS is why I named this episode: Scammy AI. Not because all AI is scammy, but because scammers using AI to scam people is always scammy.
Anything good in life will eventually be exploited by scammers. And that includes AI.
But you can do your part by knowing what AI does, and how you can spot scammers who are using AI images to try and get something from you.
You can have some fun with this. If you’re randomly sent a photo of an extremely attractive person, ask for a photo of them with a spoon sitting on their head, or of them holding up a sign that says “I love donuts” or some other phrase. Then rejoice silently while they either try to create the image with AI, or give up and stop bothering you.
I hope you’ve enjoyed listening to this episode as much as I’ve enjoyed making it. I strongly encourage you to try out some of the free AI software out there, and see for yourself what it’s capable of. Having some familiarity with it will make you more aware of what it looks like, so you can spot it when you see it.
I’m not one of those people who thinks AI will eventually revolt against us, that my Alexa is going to suddenly grow a brain. I think the bigger threat is criminals’ use of AI to manipulate people into thinking something is true when it’s not.
So go forth safely into the world, and use AI responsibly.