By Diarmaid Mac Mathúna, Director-Agency at indiepics
It’s getting harder and harder to spot fake images generated by artificial intelligence systems. The latest AI tools like Midjourney and DALL·E 2 now produce very impressive results. These photorealistic images can be created with short, carefully engineered text prompts, either directly or via tools like Microsoft’s Bing Image Creator.
The results are now so good that they’re winning international photography competitions like the Sony World Photography Awards and slipping through the editorial processes of publications such as The Irish Times. Even international organisations have started to use fake images instead of real or stock photos in their campaigns, with controversial results: Amnesty International found this out to their cost when they were forced to take down the fake AI-generated protest photos used in a recent campaign.
I believe all communicators have an important role to play in helping people make informed and fact-based decisions – especially with elections on the horizon. AI-generated images are already flooding the digital world, and we need to help people spot fact from fiction.
But before we can help others recognise fake AI-generated images, we need to be able to identify them ourselves. Here are my top three tips for spotting fake AI images right now:
1. Too good to be true:
Many AI images simply look too good to be true. They’re too glossy and polished to have really been shot by a photographer trying to grab a fleeting moment of action. They might even look perfectly studio lit in a setting where that wouldn’t have been possible.
2. Distorted humans:
AI image generators are better than they were a few months ago at getting the number of fingers on human hands right, but faces and other parts of the body can still look strangely warped and misshapen. Look closely at all the people in the photo (not just the main subject) and see if any of them look distorted.
3. General weirdness:
People often don’t get the text prompts they type into AI image creators exactly right, or the AI systems don’t interpret them correctly. That can lead to errors and mix-ups that a human can spot. So zoom in and look for weird details, like country flags with the right colours but in the wrong order (which was the tell-tale sign in Amnesty’s fake protest images).
Once we learn to spot fake AI images ourselves, we can spread the word and help other people identify them too. By promoting this type of media literacy, and reporting suspicious content, we can play our part in reducing the impact of fake images. While the purveyors of disinformation will try to make it impossible to distinguish between fact and fiction, we can still empower ourselves to develop a healthy scepticism that helps us navigate this increasingly AI-generated media world.
Diarmaid Mac Mathúna is Director-Agency at indiepics. He has over 15 years of experience in strategic pan-European communications, integrated B2B marketing campaigns and creative television production. He also enjoys cycling, playing chess and coaching his daughter’s GAA team.