It’s nearly impossible these days to read any technology-related site – or tech-oriented forum, comment section and so on – without getting swamped in stories on ChatGPT. As users explore the abilities of this technology, more and more use cases emerge.
So today, on a Dutch technology site, somebody wondered whether it signaled the end of the ‘graphics industry’ – meant mostly as graphic designers.
And as these discussions go: examples of people using generated artwork to build games, and examples from others of how the generated images were nearly useless.
The usual internet bla bla bla
I don’t doubt those examples are all true. But they are also just that: examples. The overall trend is definitely that AI is getting more usable, and more useful. It can do more, and with each iteration of doing more, it gets better at doing it.
ChatGPT is definitely the shining example. Ever since its release to the public, the amount of speculation on how far it can go has been through the roof. It’s at the peak of the hype. Usually, after the hype calms down, you never hear from a technology again, or it becomes part of that slow, steady evolution. The revolution promised during the hype usually never materializes. Hardly a problem, I’d say. An evolution is usually a lot nicer to deal with than a revolution.
Not all bla bla is created the same
The current buzz around ChatGPT may be mostly hype. Beneath it runs a much wider, deeper and stronger current, though. Artificial Intelligence, AI, has been a buzzword for quite some time already. ChatGPT shows the current state of the art. And no doubt it is impressive.
Yet, all that hype creates overreactions, such as the idea that the end of the graphic designer is near. One poster in those comments went as far as to predict that the industry will collapse by 2024-2025. Some call for overhauling the education system, since kids will have ChatGPT do their homework anyway. We can all enjoy more free time because this technology will do so much for us. And so on.
While I find the technology absolutely fascinating, I think some of these expectations and outsized claims are distracting.
It is not a technology that will go away. In a multitude of ways – visible and invisible – it will become part of things we do daily or often. I doubt it will change ‘life as we know it’, but it will have impact. Enough to think about what it is, and what it isn’t.
It has the wrong name
Artificial intelligence is, in my view, the wrong label. Yes, it is artificial. But is it intelligent? I think that is extremely debatable. To me, intelligence is very much about discovering unexpected or seemingly illogical correlations, and about extrapolating or interpolating ideas and thoughts beyond the stuff you already know. The ability to take existing knowledge and use it to discover more knowledge. To connect things that logically may seem unrelated. To understand, autonomously and without external help, that you’re wrong. To take lessons learnt from mistakes that are not directly related or all that logical, and improve in ways that are not related or logical either.
I very much prefer the terms ‘Machine Learning’ or ‘Deep Learning’. They are much closer to what is really happening. Once a model is built out, the mechanism is great at learning to apply its rules and logic – the algorithm – to more and more data and in more situations. But fundamentally, it repeats what has been learned. More refined, but not fundamentally different. And what has been learned is taught by a teacher: human input. Pointing out mistakes in the model is also a human task. As intelligent as the AI may appear to be, it is actually clueless as to whether it’s right or wrong.
What does this have to do with photography?
Nothing, and a lot. Existing deep learning solutions have reached the point that they can generate images that look totally credible. Portraits that look completely natural and normal, except for the fact that the person in the image doesn’t exist.
So, who needs a human photographer anymore? We can easily have our holiday photos created. There are more than enough images of the Eiffel tower for a model to generate a picture of the landmark from exactly the angle you’d prefer. And, again, this will only get better and better as it’s being used more.
But there is a flaw lurking there. The data-set from which the model can learn will need fresh data. Otherwise, that deep learning will remain stuck at a given point in time. So, somewhere, somehow, somebody will need to keep adding new, fresh photos of the Eiffel tower. Otherwise we stay stuck, and photos will forever look the way they did around 2022.
Not intelligent, but creative?
To me, the most interesting question is whether these deep learning models become creative as they’re able to create new, previously non-existent images or texts. It’s easy to say “no, of course not – it generates, but doesn’t create”. Certainly that was my gut reaction.
But how creative are most of us in our images? How many photos do we make that look a lot like photos we have already seen before? Isn’t that equally just generating, rather than creating? Where is the boundary between those anyway?
World-famous sites with markers indicating where to stand for your photo – how much creativity is involved there? If we claim that is creative, then so is deep learning generating images.
The one key difference between the human photographer and the image generator: the photographer can spot that the light is special, that something unforeseen is happening, that a detail draws attention – and react to it, coming home with a rather different image. That good old-fashioned meat-based image creator still has added value indeed.
This is just a brain-fart. There is nothing to conclude. Technology marches on, and there is plenty to embrace. But let’s cut through the hyperbole. It’s still human-created technology that will need humans to help it evolve further. There are things that processors do better than our brains, but also plenty of things our brains do better.
There is much more to it. Practical discussions about intellectual property and copyright. Philosophical discussions on whether any item produced by a computer could ever be art. Down-to-earth discussions on how to make use of it to reduce mind-numbing repetitive tasks at work. And maybe some self-reflection on how to value your own creative efforts and what they mean to you and to others. Do you make photos just for the result, or is the process of making them also relevant to you? If so, why care about a supercomputer doing the same?
Honestly… if you’re afraid that ChatGPT is going to replace you as a photographer, just turn around and see that unexpected image. Frame, click, and feel good again.