What is a photo, really, and how can UX help?

Tiger Zhao
5 min read · Sep 5, 2024


Like many of you, I browse social media. Quite a lot, honestly. Most social media apps are visual, which makes sense: by one commonly cited estimate, around 80% of the information humans take in comes through sight, so visuals are an easy way for us to learn. But things are changing as AI comes into play.

We all know a photo’s authenticity can be challenged; Photoshop exists. However, by mid-to-late 2024, as a designer, I feel a certain tension realizing that my Photoshop skills can be “magically” replaced with a simple command like “imagine,” and that this capability is now widely accessible through AI-powered, photo-enhancing devices. As this “reality-altering” power spreads, its impact needs to be addressed on a broader scale. In Diffusion of Innovations, a classic book on how innovations spread, Everett Rogers highlights the relationship between innovation and social change:

Diffusion is a kind of social change, defined as the process by which alteration occurs in the structure and function of a social system.

So, the widespread diffusion of AI photo-editing capabilities could lead to significant social change. I started to realize just how pressing this trend is after listening to this episode on The Vergecast, one of my favorite tech podcasts. The hosts discussed the unknown consequences of AI-enabled photo editing and its social implications. Fortunately, organizations like C2PA are working to provide guidance on potentially misleading visual content. Jess Weatherbed from The Verge created an excellent video explaining C2PA’s standard for keeping records of a photo’s edit history. In an ideal world, where blockchain could authenticate anything, this system would be the best way to verify whether a photo is authentic and untouched. Right?
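To make the idea of an edit-history record concrete, here is a minimal sketch of a hash-chained provenance log. To be clear, this is not the actual C2PA manifest format (C2PA embeds signed manifests in the file itself); the class and field names below are invented for illustration, and the sketch only shows the principle of recording each edit and linking it to the previous state so that tampering becomes detectable.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict

def sha256_hex(data: bytes) -> str:
    """Fingerprint of the image bytes at a given point in its history."""
    return hashlib.sha256(data).hexdigest()

@dataclass
class EditRecord:
    action: str            # e.g. "capture", "crop", "ai.generative_fill"
    tool: str              # software or device that made the change
    image_hash: str        # hash of the image bytes *after* this edit
    prev_record_hash: str  # hash of the previous record, chaining the history

@dataclass
class ProvenanceLog:
    records: list = field(default_factory=list)

    def append(self, action: str, tool: str, image_bytes: bytes) -> None:
        prev = ""
        if self.records:
            prev = sha256_hex(json.dumps(asdict(self.records[-1])).encode())
        self.records.append(EditRecord(action, tool, sha256_hex(image_bytes), prev))

# Hypothetical history: an original capture, then an AI edit adding a bird.
log = ProvenanceLog()
log.append("capture", "Pixel Camera", b"<original image bytes>")
log.append("ai.generative_fill", "Magic Editor", b"<edited image bytes>")
for r in log.records:
    print(r.action, r.tool, r.image_hash[:12], "prev:", r.prev_record_hash[:12] or "-")
```

Under this kind of scheme, anyone holding the log can re-hash the current image bytes and walk the chain backwards; if a record was altered or dropped, the hashes stop matching.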

Here comes my seemingly fallacious theory: no photo is ever “real.” Let me explain why.

It begins with the “bald man paradox,” a version of the classic sorites paradox. Imagine a man with a full head of hair; no one would call him bald. If he loses just one hair, he is still not considered bald. Conversely, a man with no hair is bald, and if he gains just one hair, he remains bald. You see where this is going: at what point do we start calling him bald?

Bald Man Paradox
Not the kind of “half bald” I picture in my head, but you get the point

The same logic applies to photos. If we define a “photo” as an accurate visual representation of reality, its authenticity can easily be challenged. As with the bald man, how many edits, even minor ones, can an image absorb before we stop calling it a photo? If we can’t define baldness by hair count, how can we define a photo’s authenticity by counting slight alterations?

The same photo, repeated three times (?)

Look closely: image 2 shows an extra bird, and image 3 is missing a barely noticeable piece of flying grass near the elephant’s ear. That fourth bird never existed when I snapped the photo during my safari trip in Masai Mara; I recreated it, and I didn’t even use Photoshop. Most people wouldn’t think twice about it. I could just as easily have added an Asian tiger next to the African elephant. 🐅

So, if reality is visually captured with minimal alterations — even by a single pixel — can we still call it a photo? What about image 3, where only a small piece of grass is missing? If we call image 3 a photo, how many pixels can be removed or edited before it stops being considered a photo?

Addition or subtraction: neither is what actually happened, but how much does that matter?

Similar to the bald man example, it’s becoming increasingly difficult to define a photo’s authenticity based on quantifiable changes, especially with how easily photo editing can be done nowadays — just like how effortlessly hair can leave our heads.
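If we did try to quantify the change, the measurement itself is the easy part. Here is a minimal sketch using Pillow and NumPy that counts how many pixels differ between two same-sized images; the file names and the 1% cutoff are placeholders I made up for illustration:

```python
import numpy as np
from PIL import Image

def changed_pixel_ratio(path_a: str, path_b: str) -> float:
    """Fraction of pixels that differ between two same-sized images."""
    a = np.asarray(Image.open(path_a).convert("RGB"))
    b = np.asarray(Image.open(path_b).convert("RGB"))
    if a.shape != b.shape:
        raise ValueError("Images must have the same dimensions")
    changed = np.any(a != b, axis=-1)  # a pixel counts as changed if any channel differs
    return float(changed.mean())

# Placeholder file names; the 1% threshold is entirely arbitrary,
# which is exactly the point of the bald man comparison.
ratio = changed_pixel_ratio("elephant_original.jpg", "elephant_edited.jpg")
print(f"{ratio:.4%} of pixels changed")
print("Still a 'photo'?", ratio < 0.01)
```

In practice, even re-saving a JPEG shifts pixel values without changing what the image depicts, so any fixed cutoff for “still a photo” ends up as arbitrary as a hair count for baldness.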

Sounds like a social challenge, doesn’t it? If no photo on the internet is truly “real,” what can we still trust our eyes with? Or, at what point do we start to care? Some UX experiments have already begun addressing this dilemma. Instagram, for example, has added an “AI info” tag to content that is generated by AI or detected as AI-altered, giving users a brief hint about the image’s history.

“AI info” tag on Instagram (Image source: Meta)

But as the fourth-bird example shows, a photo can be edited without AI and still distort reality. If that photo is uploaded to Instagram, would it get an “AI info” tag, even though no AI was involved? I know AI is the buzzword these days and grabs attention when attached to an image, but does the tag accurately reflect the image’s edit history? Should an image cropped with AI tools be labeled “AI-generated”? Maybe we should use a more inclusive term like “image info”?
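Purely as a hypothetical illustration, an “image info” label could separate how an image was edited from whether AI was involved at all. The structure and field names below are invented for this sketch and don’t correspond to Instagram’s or C2PA’s actual schemas:

```python
from dataclasses import dataclass, field

@dataclass
class ImageInfo:
    """Hypothetical 'image info' label: edits and AI involvement are separate facts."""
    captured_with: str
    edits: list = field(default_factory=list)               # e.g. ["crop", "object_add"]
    ai_assisted_edits: list = field(default_factory=list)   # the subset made with AI tools
    fully_ai_generated: bool = False

    def label(self) -> str:
        if self.fully_ai_generated:
            return "AI-generated"
        if self.ai_assisted_edits:
            return "Edited with AI"
        if self.edits:
            return "Edited"
        return "Unedited photo"

# The fourth-bird scenario: an object was added, but without any AI tool.
info = ImageInfo(captured_with="smartphone camera", edits=["object_add"])
print(info.label())  # -> "Edited", not an AI-specific tag
```

In this framing, the fourth-bird edit would surface as “Edited” rather than being missed by an AI-specific tag.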

This train of thought doesn’t stop here. In a future where this technology is ubiquitous, will an “untouched photo” become rare on the internet, or even on our devices? Based on a recent comment from Isaac Reynolds, head of Pixel Camera at Google, that future may be closer than we think:

Your memories are your reality. What’s more real than your memory of it? If I showed you a photo that didn’t match your memory, you’d say it wasn’t real.

If my memories are my reality, do I live in a world of idealism, where materialism is just a version of virtual reality? What is essentially real and true to history? If you believe that “what happened, happened,” a new tag like this might help you navigate the chaos of misinformation and fake news:

A “Real photo” tag on your social media. You’re welcome, internet.

PS: I don’t believe there is a single best solution yet to the rapid growth of AI, but I do believe there is always a better one, built on the latest technological advancements and policies. In this potentially dystopian future, where authentic photos are so rare that they need their own tags, I look forward to seeing the big players craft a creative, sustainable digital space with healthy boundaries, and hopefully, I’ll be one of the people building it someday :)

Written by Tiger Zhao

Designer. Thinks about AI and humans all (most of) the time.
