Guest post by Judith Townend
The fascinating field of digital forensics couldn’t be more relevant for a journalist, researcher or editor. Specialised analysts can test the authenticity of a photograph: how many times has it been saved? Have additions been made to the original, and if so, in which order? Are parts of the image generated by a computer?
Professor Hany Farid, based at the Department of Computer Science at Dartmouth College in the US, explains that photo forensics refers to mathematical and computational techniques “that seek to determine if a photo has been altered from its time of recording, and how/where the photo was altered.
“This field of study is highly technical, and while some simple analysis can be done without much technical training, most of the forensic techniques require a highly skilled practitioner.”
What sort of things should we be looking for? “Lighting and shadows can be a powerful clue to reveal tampering,” says Professor Farid. “However, great care needs to be taken, as the visual system can often be quite unreliable in judging lighting and shadows in an image.” (See, for example, his co-authored paper on ‘Image Forensic Analyses that Elude the Human Visual System’, Farid and Bravo 2010).
“JPEG compression artifacts are often confounded with tampering, so it is important to understand what these compression artifacts look like, so as not to confuse them with tampering artifacts.”
Professor Anthony TS Ho, from the Department of Computing at the University of Surrey, agrees that this makes analysis difficult. His research on image authentication examines watermarking and how it can survive manipulations such as JPEG compression.
Information, including copyright, source and time stamp, can be embedded in digital material. Different parts of the image can be watermarked in order to detect changes, such as altered digits on car number plates.
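The idea of a fragile watermark for tamper localisation can be sketched in a few lines. This is an illustrative toy, not Professor Ho's actual scheme: it treats an image as a flat list of grayscale bytes, stores a 16-bit additive checksum of each block's upper seven bits in the block's least-significant bits, and reports which blocks no longer verify.

```python
BLOCK = 16  # pixels (bytes) per watermarked block


def _checksum_bits(block: bytes) -> list:
    # 16-bit additive checksum of the upper 7 bits of each pixel
    s = sum(block) & 0xFFFF
    return [(s >> i) & 1 for i in range(BLOCK)]


def embed(pixels) -> bytearray:
    """Embed a fragile watermark: each block's pixel LSBs store a
    checksum of that block's upper 7 bits."""
    out = bytearray(pixels)
    for start in range(0, len(out) - BLOCK + 1, BLOCK):
        masked = bytes(b & 0xFE for b in out[start:start + BLOCK])
        for i, bit in enumerate(_checksum_bits(masked)):
            out[start + i] = (out[start + i] & 0xFE) | bit
    return out


def verify(pixels) -> list:
    """Return the indices of blocks whose checksum no longer matches."""
    bad = []
    for start in range(0, len(pixels) - BLOCK + 1, BLOCK):
        block = pixels[start:start + BLOCK]
        bits = _checksum_bits(bytes(b & 0xFE for b in block))
        if any((block[i] & 1) != bit for i, bit in enumerate(bits)):
            bad.append(start // BLOCK)
    return bad
```

Editing any pixel's upper bits changes that block's checksum, so `verify` pinpoints the tampered block — but, as with real fragile watermarks, even innocent re-saving (e.g. JPEG compression) will also break it, which is why robust watermarking that survives compression is a research problem.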
(Image: The Plate Market on Flickr)
Researchers in Professor Ho’s multimedia security and forensics group are also experimenting with applying mathematical algorithms based on natural laws, such as Zipf’s Law and Benford’s Law, in image forensics.
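Benford's Law says that in many naturally occurring datasets the leading digit 1 appears about 30% of the time, digit 2 about 17.6%, and so on, following log10(1 + 1/d); manipulated data often departs from this curve. A minimal sketch of such a test (not the group's actual method, which applies the law to image transform coefficients):

```python
import math

# Benford's expected frequency for each leading digit 1-9
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}


def leading_digit(x):
    """First significant digit of a non-zero number."""
    x = abs(float(x))
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)


def benford_deviation(values):
    """Total absolute deviation of the leading-digit distribution
    from Benford's Law (0 means a perfect fit)."""
    digits = [leading_digit(v) for v in values if v]
    counts = {d: 0 for d in range(1, 10)}
    for d in digits:
        counts[d] += 1
    n = len(digits)
    return sum(abs(counts[d] / n - BENFORD[d]) for d in range(1, 10))
```

Data that grows multiplicatively (powers of 2, for instance) fits the law closely, while uniformly distributed leading digits deviate strongly — the same contrast forensic tests look for in image coefficients.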
Professor Ho warns that some tools may only detect changes in high-quality images – those saved at around Quality Factor (QF) 90 to 95. Images saved in Photoshop default to QF 75. Ongoing research will try to find ways to increase accuracy when analysing lower-quality images.
So what tools can be used? Standard EXIF readers can be useful for examining an image’s metadata, says Professor Farid. “This can be useful in determining if a digital image was modified from its recording since most photo-editing software alter the metadata.”
But it is also the case, he adds, “that the metadata can be re-edited to conceal these changes, so care should be taken in making very strong conclusions based on an image’s metadata.”
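You don't need a dedicated EXIF reader just to see whether an image still carries its camera metadata at all. A rough, standard-library-only sketch (a full EXIF parser would also decode the TIFF tags inside the segment): it walks a JPEG's marker segments and pulls out the APP1/Exif payload if one exists.

```python
import struct


def find_exif_segment(data: bytes):
    """Walk a JPEG's marker segments and return the raw APP1/Exif
    payload if present, else None. Absence of Exif data (or metadata
    naming editing software) is a clue, never proof, of modification."""
    if data[:2] != b'\xff\xd8':            # must start with SOI marker
        raise ValueError('not a JPEG')
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                # lost sync with markers
            break
        marker = data[i + 1]
        if marker in (0xD8, 0xD9):         # SOI/EOI carry no length
            i += 2
            continue
        if marker == 0xDA:                 # start of scan: stop looking
            break
        (length,) = struct.unpack('>H', data[i + 2:i + 4])
        payload = data[i + 4:i + 2 + length]
        if marker == 0xE1 and payload.startswith(b'Exif\x00\x00'):
            return payload
        i += 2 + length
    return None
```

As the professors note, metadata can be rewritten at will, so a present-and-plausible Exif block proves nothing on its own.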
Online tools and non-academic information are scarce. One site, ‘Error Level Analysis’, claims that it can help determine whether a photograph has been manipulated from the original. Its premise is simple: paste the URL for an image file in the box and ELA will indicate how the photograph has been treated.
The ELA site’s creator explains on the site:
“It works by resaving an image at a known quality, and comparing that to the original image. As a jpeg image is resaved over and over again, its image quality decreases. When we resave an image and compare it to the original, we can guess just how many times the image has been resaved. If an image has not been manipulated, all parts of the image should have been saved an equal amount of times. If parts of the image are from different source files, they may have been saved a number of different times, and thus they will stand out as a different colour in the ELA test.”
The key is to look for different levels of brightness in the photo. We did a quick test for ourselves with a simple photograph. This is how the original looked when put through ELA. The top picture is the original, the bottom shows the image under ELA.
Then we drew an arrow and a circle on the image. See how they stand out brightly when put through ELA.
The tool is based on Dr Neal Krawetz’s image forensic work. Dr Krawetz is quoted on the site:
“Error level analysis (ELA) works by intentionally resaving the image at a known error rate, such as 95%, and then computing the difference between the images. If there is virtually no change, then the cell has reached its local minima for error at that quality level. However, if there is a large amount of change, then the pixels are not at their local minima and are effectively original.”
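The technique Dr Krawetz describes fits in a few lines of code. A minimal sketch, assuming the Pillow imaging library is installed (this is our illustration, not the ELA site's own implementation): resave the image as a JPEG at a known quality and take the per-pixel difference from the original.

```python
from io import BytesIO

from PIL import Image, ImageChops


def ela(image, quality=95):
    """Error level analysis: resave the image as a JPEG at a known
    quality, then return the per-pixel difference from the original.
    Regions noticeably brighter than their surroundings are candidates
    for closer inspection - with all the caveats discussed above."""
    original = image.convert('RGB')
    buf = BytesIO()
    original.save(buf, 'JPEG', quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    return ImageChops.difference(original, resaved)
```

In practice the difference image is very dark, so analysts usually amplify it (for example by multiplying pixel values) before inspecting it by eye.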
As image analysis tools are developed, it would be sensible for editors and reporters, as well as researchers, to keep track of them. It might help avoid disastrous mistakes, such as the time Reuters ran a doctored image of an Israeli air strike in Lebanon. But they should take care when using a site like ELA. Firstly, there is limited peer review available. Secondly, the final image on ELA comes with a fairly important disclaimer (our emphasis):
“It is worth noting that edges and areas red in colour are often depicted as brighter in the ELA tests. This due to the way the photos are saved by various programs. It is not proof that image was manipulated. If you are unsure how to interpret the results, please do not claim the results of this tool as proof of anything.”
Even when the image shows different levels of brightness, the final results are tricky to analyse. As users in this Flickr forum discussed, a photograph saved to Flickr may show different results from the original file, even if the picture itself has not been tampered with. ‘Muzzlehatch’ speculates this is because of the Flickr compression process.
Compare and contrast this photo we tested. This shows the same unaltered image we used earlier, but this time as a jpg file uploaded to Flickr.
ELA may not be a solution for detecting manipulated photographs, but it provides an important reminder of the need for a little detective work before accepting that an image is what it appears to be.
While we still need experts for thorough analysis, don’t forget that basic observation can be used to detect suspicious things too.
1. Look out for duplications
In this famous example of a digitally altered scene of a British soldier and Iraqi civilians, featured on a Los Angeles Times front page, a duplication was spotted in the final image, which turned out to be composed from two separate images.
2. Look at the background
3. Ask basic questions about the circumstances of the photograph
Use your common sense to detect fishy details in photographs. Doctored photos were around a long time before Photoshop: what other features might have been engineered for the image?
4. Remember that some alterations might be considered acceptable in certain circumstances
See the Reuters Handbook of Journalism for the Dos and Don’ts of photo editing for journalistic purposes. Different organisations might draw lines in different places, too. In this blog post, James Estrin, of the New York Times Lens blog, describes his orthodox view of Photoshop use for journalistic purposes. “Less is more,” he says.
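The duplication check in point 1 can even be roughly automated. A naive sketch of copy-move detection (real detectors use overlapping blocks and robust features, and genuinely flat regions such as skies will hash identically, so treat matches only as leads): hash every fixed-size tile of a grayscale image and report tiles whose contents repeat.

```python
import hashlib
from collections import defaultdict


def duplicated_blocks(pixels, width, block=8):
    """Naive copy-move check on a flat grayscale pixel list (one int
    0-255 per pixel, row-major): hash every non-overlapping
    block x block tile and report groups of tiles with identical
    contents, as (x, y) pixel coordinates of each tile's corner."""
    height = len(pixels) // width
    seen = defaultdict(list)
    for by in range(0, height - block + 1, block):
        for bx in range(0, width - block + 1, block):
            tile = bytes(pixels[(by + r) * width + bx + c]
                         for r in range(block) for c in range(block))
            seen[hashlib.md5(tile).hexdigest()].append((bx, by))
    return [locs for locs in seen.values() if len(locs) > 1]
```

The famous composited front-page photo above would, in principle, show exactly this signature: one patch of pixels appearing at two different locations.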
- Reuters’ Handbook of Journalism, Photoshop guidelines
- How Little Green Footballs discovered doctored photographs from Beirut
- eHow.com: How to Detect Faked Photos
- Photopreneur: World’s Most Infamous Staged Photos
- Photopreneur: World’s Most Famous Photoshop Fakes
- PC Pro: Spotting the Photoshop Fakes
- Washington Post ‘Photoshop Gone Wrong’ gallery
- Digital Forensics: 5 Ways to Spot a Fake Photo
- Digital Forensics: Altered Lance Armstrong Photo Explained
Judith Townend is a freelance journalist/researcher and a PhD research student at City University London’s new centre for law, justice and journalism. She was formerly a reporter for Journalism.co.uk where she wrote about digital tools and techniques for journalists. She is interested in seeking out new online research methods for both journalism and academia and is @jtownend on Twitter.
Tags: Adobe Photoshop, error level analysis, image analysis, image forensics, Photo manipulation