AI will replace our jobs and that’s okay

Jeffrey Zaayman on 24 March 2023

In 2022, Jason Allen submitted AI-generated art to the Colorado State Fair and won first place. The win caused an uproar, and many art communities have since banned AI-generated art outright. Is this the right approach? I do not believe so. It is the equivalent of taping over the service warning light in your car so you do not have to look at it. Ignoring something has never made it go away.

However, this is not the first time that technology has threatened the art world. In the mid-19th century, it was photography that was going to make artists extinct, and we all know how that turned out. So why should we treat AI-generated art any differently? Will some artists lose work to AI? Of course. But, like photography, AI will not eradicate artists.

The software industry is experiencing a similar existential crisis. Many ask if the days of the software developer are numbered. The advent of IDEs, code completion and debuggers enabled developers to write faster, better, safer code. But now tools like GitHub Copilot and ChatGPT reveal just how much of a developer’s work is mindless, repetitive keyboard smashing. The skill lies not in knowing how to write code, but in knowing what code to write—or generate.

Until we invent better ways, humans are always the first option for getting new work done. In 1935, the NACA employed human ‘computers’ to perform the calculations needed for aircraft testing and the development of supersonic flight. But after only ten years, the ENIAC replaced those very people. The ENIAC was not smarter than a human being, but it was fast, consistent and did not make mistakes (except the ones we programmed into it). In contrast, the job of deciding what calculations were needed, and what to do with the results, remained very much in the ‘human domain’.

I wonder why there is always such a fearful reaction every time a disruptive invention comes along, and I think the answer is that it challenges our ideas of what it means to be human. If present-day AI is capable of writing a university-level essay, the problem is not with the AI, but with the way we teach and test students. Work that must be done by humans only because no alternative exists is not what makes us human. Language models are tools, capable of producing work previously thought of as the sole domain of the human mind, because we could not imagine anything else. Once we get used to them, we will have to shift how we teach and test in the future. This is where the real disruption lies.

One might argue that the point of writing a book report is to test someone’s understanding. But if an AI can write the same report, then what it means to ‘understand’ must shift. I have heard a few people ask if we are okay with our doctors passing their exams because of AI, but this is the wrong question. Doctors are human and make mistakes. In some domains, software-based diagnostic systems have proven better than human doctors because they do not get tired after the hundredth patient, are not distracted by personal problems and will never overlook an obscure combination of symptoms. Just because something has always needed to be done by a human does not mean that a human should keep doing it.

A key aspect of being human is making mistakes. In contrast, speed and consistency are vital requirements of any industry. These ideas are at odds with each other. Many developers will admit that half their day involves writing boilerplate code. We think that humans are so much better at many things, but we’re not. We forget, we’re not good at repetitive tasks (or multitasking) and we’re prone to bias. If we can replace human effort in these areas, we free ourselves up to focus on the areas where humans excel.

So, can machines think better than humans? No, not yet. But they can be more consistent and less prone to bias (assuming we train them on unbiased data). An AI is also less likely than a human to want to defend its biases. I, for one, cannot wait for a world where sentences are consistent and not contingent on a judge’s blood sugar levels.

Humanity has always been in a race against its own inventions. We get bored of repetitive tasks and invent substitutes out of necessity. Civilisation has advanced because we freed ourselves from having to remain focused on staying alive. It is inevitable that we will create models that can think as well as we do, but this will free us up to think in completely new ways, inventing areas of understanding never before imagined. These moments of disruption force us to redefine what it means to be human. The human brain is limited in how much information it can hold onto at any given time. Since a machine has no such limit, it is capable of contrasting and merging far more complex ideas than any human brain. Who knows, future philosophers may all be AI.

Soon we will be able to brief an AI using natural language and it will produce a complete, working software program. When that happens, it will become clear that such work was never ‘human’, but merely needed humans to do it until we found a better way.

AI is coming, and the role of the traditional developer will disappear. It will not be the end of something, but the beginning of a new journey until we invent the next disruptive technology.

The future is unknown, but being fearful has never stopped it from arriving, nor helped us engage with the changes it brings.