Look What AI Did to My Photos

Before

Continuing to rant on the topic of AI, I’ve actually been using artificially intelligent software regularly for more than a year. Sorta.

I’ve been playing with two photo editing packages that claim to be “powered”, in part, by artificial intelligence. For one of them, you know it must be true because they mention AI every third paragraph on the website.

Anyway, let me give you an example of what that looks like.

The photo at the top was taken during our trip to Ireland last month, at an overlook on the Ring of Kerry. Beautiful valley, terrible light. The sun was high and the sky was very overcast.

Despite my making multiple adjustments to the camera settings, this was the best the camera could do.1 The colors are still muted, the sky has very little texture, and it doesn’t represent what I saw that day.

Ok, let’s let AI have a whack at the picture.

After

Looks a whole lot better, doesn’t it? Those colors in the valley are much closer to what I remember, and the landscape has more texture and depth. But that sky?

Sky replacement, removing what was actually in that space and inserting another sky image, seems to be a big deal in AI photo editing software these days. You can choose from thousands, maybe tens of thousands, of sky pictures for the software to use. Or take your own. Mix and match.

I have to admit the final result looks pretty seamless. But it’s also phony.

For one thing, the GPS data on the photo says I was looking almost due south at the time so there’s no way the sun would be setting (rising?) in that location. For another, the clouds, which changed almost hourly over the day we were in that area, don’t look like any we saw.
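That reasoning from the GPS data can be sketched in a few lines of arithmetic. A minimal sketch, assuming a camera heading of roughly 180° (due south), a typical horizontal field of view of about 65°, and the setting sun sitting near an azimuth of 270° (due west); all of these numbers are hypothetical illustrations, not values read from my actual photo:

```python
# Hypothetical check: could the setting sun (azimuth ~270°, due west)
# appear in a photo taken while facing almost due south (~180°)?
# Heading and field-of-view values are illustrative assumptions.

def sun_in_frame(heading_deg, fov_deg, sun_azimuth_deg=270.0):
    """Return True if the sun's azimuth falls within the horizontal
    field of view centered on the camera's compass heading."""
    # Smallest angular difference between heading and sun azimuth
    diff = abs((sun_azimuth_deg - heading_deg + 180) % 360 - 180)
    return diff <= fov_deg / 2

# Facing due south with a ~65° field of view, the sun sits about 90°
# outside the center of the frame, far beyond the 32.5° half-angle:
print(sun_in_frame(180.0, 65.0))  # False: the setting sun is out of frame
```

In other words, no edit to exposure or color could put a sunset in that frame; only replacing the sky outright can.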

I realize that almost anyone I showed this image to probably couldn’t tell that it wasn’t the actual scene. Maybe I shouldn’t be so picky about my photography being an accurate representation of what I saw and just accept the compliments.

However, looking at this issue more globally, I wonder if that image can even still be called a “photograph”. Maybe we call it an “artistic expression” instead. I don’t know but it’s something the language may have to deal with as more people use AI to alter the reality of their images.

There could also be a question of whether I am able to claim a copyright to that new image. After all, about one fourth of the work is not mine. If Nat Geo wanted to put it on their cover,2 do I have to share the fee and credit with someone else?

And those issues of ethics and copyright have been getting even more discussion in the creative world over the past few months, since Adobe released a beta version of Photoshop containing something called “generative fill”. The technology lets users perform all kinds of very realistic-looking manipulations on their photos, removing elements and adding others that were never there, drawing on a library provided by Adobe if you like.

More examples of how AI is generating far more questions than answers.

Anyway, I never posted any of the photos I took from that vantage point (here is one I do like, looking in a different direction). Just for comparison, the version below is the best edit I was able to make with the “standard” adjustment tools, working without any help from AI.

DSCF1959

Better, but still not one I’m happy with. Just too picky, I guess.


1. I might have gotten better results if I had tried the same shot with my iPhone. Apple doesn’t call it “AI”, but the algorithms in their “computational photography” often do a good job of working with bad lighting.

2. Dreams are good, right?
