Humans are just slightly better than a coin toss at spotting AI pics, study finds


As AI-generated images continue to improve every year, a key question is how well human beings can distinguish real images from generated ones. And while many of us may think it is fairly easy to spot images produced by AI tools like ChatGPT and Gemini, researchers think otherwise.

According to researchers from the Microsoft AI for Good Lab, the odds of correctly identifying AI-generated images are "just slightly better than flipping a coin." The researchers say they collected data from the online game "Real or Not Quiz", in which participants were asked to tell AI-generated images apart from real ones.

The study, which involved the analysis of approximately 287,000 images judged by over 12,500 people from around the world, found that participants had an overall success rate of just 62 per cent, only slightly better than a coin flip at detecting artificially generated photos. The researchers say they used some of the best AI image generators available to build the quiz, and that the game was not designed to compare the photorealism of images generated by these models.


As it turns out, people who played this online quiz were fairly accurate at differentiating between real and AI-generated human portraits, but struggled with natural and urban landscapes: they identified images of people correctly around 65 per cent of the time, but nature photos only 59 per cent of the time.

Researchers noted that people mostly had trouble with "images without obvious artifacts or stylistic cues", while accuracy on human portraits was much higher, likely because of our brain's ability to recognise faces. These findings are in line with a recent study from the University of Surrey, which discussed how our brains are "drawn to spot faces everywhere."

The study also found that AI detection tools are considerably more reliable than humans at identifying AI-generated images, though they too are prone to mistakes. The team behind the study emphasised the need for transparency measures such as watermarks and robust AI detection tools to prevent the spread of misinformation, and said they were working on a new AI image detection tool that they claim is accurate more than 95 per cent of the time on both real and generated images.

© IE Online Media Services Pvt Ltd




