How can one differentiate an AI-generated image from the real deal? The image of Pope Francis in an oversized white down jacket in 2023 was a watershed moment: never before had an AI-generated image been so lifelike and so viral. Perhaps it was the Moncler/Balenciaga-like styling of the deepfake, or the quality of Midjourney. But in the three years since, something has changed, and distinguishing the real from the fake has become more complex.
On the one hand, Midjourney v7 has raised the bar even higher (v8 has also recently become available), and there are equally capable alternatives such as Flux.2 or DALL-E 4. So, is there any way to detect fakes? In most cases yes, but only if you pay attention to certain details.
Artificial intelligence often has difficulty reproducing human anatomical details. It adds extra fingers, merges them, even makes them look alien. The same goes for the symmetry of the eyes, the uniformity of teeth and the volume of hair: occasionally, everything looks unnatural. Even the skin can sometimes appear abnormal. The first step in recognising an AI-generated image, therefore, is zooming in: scrutinising the details to hunt for anomalies.
Many AI-created fakes show inconsistent or entirely absent shadows. Sometimes this extends to impossible reflections on glass or skin, as though the laws of physics had not been taken into account. In the 2023 deepfake of the Pope, for example, the cross seems to float above the neck without casting a shadow. Backgrounds often appear too perfect, with manifestly artificial symmetries. Repeated or tiled patterns are also common in AI-generated images.
Recent years have seen the release of a growing number of free online tools that analyse an image and estimate, as a percentage, whether it was created by a human or an AI. These detectors use deep learning algorithms trained to recognise patterns that are hallmarks of the major generators, such as Midjourney, DALL-E, Stable Diffusion, Flux and others. They are not foolproof, but they can be of great help when paired with a manual visual analysis and an assessment of the diffusion context, the source (if any) and other details. Flux.2 and Midjourney v7, though, are proving to be a challenge for just about all of them.
Here are the most commonly used online tools:
- Isgen.ai is completely free, with no limit on scans, and also identifies the model used; the interface is basic.
- ZeroGpt Image Detector supports many formats, is multilingual and requires no registration, but has difficulty with Flux.2.
- Undetectable.ai supports many formats – even the more recent ones – but some more advanced tests require payment.
- Arting.ai is very fast and free, but there is little public data when it comes to accuracy.
- Hugging Face is open source, used by researchers and developers, but requires technical expertise and is less intuitive than other similar tools.
- FotoForensics is a professional tool for forensic pixel analysis, but it is difficult to interpret without technical expertise.
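The pixel-level technique behind tools like FotoForensics is error level analysis (ELA). A minimal sketch with the Pillow library (`pip install Pillow`) shows the idea: re-save a JPEG at a fixed quality and diff the result against the original, so that regions with inconsistent compression levels, a possible sign of editing or generation, light up. This is an illustrative toy, not FotoForensics' actual implementation.

```python
# Sketch of error level analysis (ELA) using Pillow.
import io
from PIL import Image, ImageChops

def error_level_analysis(image, quality=90):
    """Return a difference image: brighter areas = higher error levels."""
    original = image.convert("RGB")
    buf = io.BytesIO()
    # Re-save at a known JPEG quality, then reload the compressed copy.
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    # Per-pixel absolute difference between original and re-saved copy.
    return ImageChops.difference(original, resaved)

# Demo on a synthetic uniform image: it compresses very consistently,
# so its ELA map stays near-black (tiny per-channel differences).
ela = error_level_analysis(Image.new("RGB", (64, 64), "gray"))
print(ela.getextrema())  # per-channel (min, max) differences
```

In a real check, you would load a suspect JPEG instead of the synthetic image and look for regions that stand out much more brightly than their surroundings.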
The next step, after a visual analysis by zooming in and using a detector, is the metadata check. Every digital photograph comes with a set of hidden data based on the Exif (Exchangeable Image File Format) standard. This is a kind of virtual baggage that contains information about the photo: from the camera model and exposure time to the date, time and sometimes even GPS coordinates. An AI-generated image usually lacks such data; in fact, it sometimes even carries an AI-generated tag or the name of the generating software. However, the absence of metadata is not definitive proof of AI, because the major social platforms automatically strip Exif data from uploaded images to protect user privacy.
At this point, to read the metadata, all that is needed is one of the many free tools on the market. Prominent among them is the highly technical ExifTool, used mostly in forensics. On Windows, right-click on the photo, select Properties, then the Details tab. On a Mac, select the photo in Finder and press the space bar for a quick preview, or choose File and then Get Info.
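The same Exif check can also be done in a few lines of Python with the Pillow library (`pip install Pillow`). A minimal sketch, where the filename `photo.jpg` is a hypothetical placeholder for any image you want to inspect:

```python
# Sketch: read Exif metadata from an image file with Pillow.
import os
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path):
    """Return Exif tags as a {name: value} dict; empty if none are present."""
    exif = Image.open(path).getexif()
    # Map numeric Exif tag IDs to human-readable names where known.
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if os.path.exists("photo.jpg"):  # hypothetical filename
    tags = read_exif("photo.jpg")
    if not tags:
        print("No Exif data: possibly AI-generated, or stripped by a platform.")
    else:
        for name, value in tags.items():
            print(f"{name}: {value}")
```

Remember the caveat above: an empty result is only a clue, since social platforms also strip this data on upload.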
Lastly, and most importantly, there is a final element to consider: context. Examine who posted the image, when it was posted, and whether the relevant social account is new. When faced with controversial images of public figures, ask yourself whether the image is designed to provoke polarised, perhaps even political, reactions. Also useful is a so-called reverse search: checking via Google Lens whether a seemingly exclusive image has already been published by dozens of different sites on the same day, or whether it first appeared on an anonymous site. In either case, it could be a fake.
The golden rule is that you can usually get the right answer by combining these different checks while investigating the origin of an image. Tedious? Yes, and that is exactly what fraudsters count on.
This story first appeared on WIRED Italia and has been translated from the original.

