Author’s Note: This chapter is now freely available to the public. All paywalls have been removed.
Ever since I became a software engineer, there have always been attempts to get rid of us. I keep coming across articles in the media proclaiming that this or that technology is going to make software engineers obsolete, and that our days are numbered.
Perhaps this is because some people think that we are a bunch of “overpaid smartasses”. They think getting rid of us would save companies a lot of money and headaches. These are probably the same people who think software engineering is all about writing a bunch of if-then statements and for-loops. So what could be so difficult about it? Why are all these software engineers getting paid so much?
Well, if you’ve read my book this far, I don’t think I need to explain this to you. As it turns out, we software engineers are not such easily replaceable cogs.
When I was going to university in the late 1990s (i.e. last century), I started hearing about graphical tools that were supposed to make coding obsolete. Anybody could use these tools to implement a program. “Everybody could be a programmer now”, thanks to those tools. As it turned out, the graphical tools of that era were not that versatile or expressive. You still needed to write actual computer code to implement programs. You still needed actual software engineers.
In recent years, these graphical development tools seem to be making a resurgence. Now they are called “low-code” or “no-code” tools. Like their predecessors, you can drag and drop boxes, connect them together, write some simple configurations, and voila, you have a working program. Unlike their predecessors, they seem to be more expressive and versatile. In online forums, I frequently come across people who claim they are planning to use these no-code/low-code tools to implement a web app or a program with various features, without the need for any programmers.
These people are in for a rude awakening, for the reasons I’m going to talk about in this chapter.
Then recently, just as I had begun to write this book, the innovative new technology of generative AI exploded onto the scene. Now, AI can write letters, articles, and maybe even short stories. It can answer many of your questions. And according to some, it can even write great code.
Once again, particularly in recent months, I have come across numerous media articles claiming that software engineers will soon be jobless, completely replaced by generative AI. Some claim there will be no software engineers left within 5 years.
And once again, I simply try to ignore these articles.
That’s because for the last few months, I have been paying for a generative-AI-driven coding assistant service to use in my own software development projects, for experimental purposes. I can clearly say that it does not write great code. Sometimes, on occasion, it has been useful and given me valuable answers. Other times, it has given me useless code.
Generative AI might get better over time, with more technical innovations. Regardless, even at its best, I only see it as a tool that makes my job easier and more efficient. I do not see it as something that will replace me anytime soon. Probably not within 5 years.
I will cover the subject of generative AI and its impact on the future of software engineering in more detail in this chapter.
Nevertheless, I might end up completely eating my own words. No one can know the future with 100% certainty. There are always surprises. My predictions might be inaccurate or just simply wrong.
That being said, it is currently the early-to-mid 2020s as I’m writing this book. It has been almost two decades since I graduated from university and started my career. And we software engineers are still here, despite all the efforts to get rid of us so far.
It seems that the world still needs us.
Low-Code/No-Code Graphical Programming Tools
The modern low-code/no-code graphical programming tools are very much like their older counterparts that I came across earlier in my career. They all work based on similar principles. They all have a graphical user interface (GUI) that enables the developer to specify the programming logic by dragging, dropping, and connecting graphical program elements. For instance, there is a graphical box that represents a Sum function. The developer drags and drops it onto the GUI canvas, which means that the Sum function is going to be used in the program. Then the developer connects outputs from some other graphical elements to the inputs of the Sum box. Now, all those values are going to be added together. The result shows up at the output of the Sum box, which the developer can connect to another graphical element where that sum value can be used.
There could be boxes that represent branching or decision functionality, where only one input out of many is selected to be the output depending on a separate condition input. There could be boxes that create an array or list from various inputs, and there could be boxes that run a certain function on all the elements of that list. This would be equivalent to a loop operation in a more traditional language. There could be boxes where the developer can open a configuration GUI, and enter (or select from an existing selection) a function to apply to the input.
Basically, these graphical programming tools work on the principle of connecting basic processing elements together on a GUI, and optionally configuring them to run specific tasks and functionality. One can create some basic algorithms and even some impressive-looking apps using these tools. There are some existing examples of web apps and mobile apps created using such tools.
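To make this concrete, here is a rough sketch in plain Python of what such a box-and-wire graph boils down to: a Sum box, a branch box, and a map-over-a-list box wired together. The function names are illustrative, not taken from any specific tool; a real tool would generate its own internal representation.

```python
# A textual equivalent of a low-code graph: each "box" is a function,
# and each "wire" is a value passed from one box to the next.
# These names are hypothetical, for illustration only.

def sum_box(*inputs):
    """The Sum box: adds all incoming wire values together."""
    return sum(inputs)

def branch_box(condition, if_true, if_false):
    """A branch box: selects one of two inputs based on a condition wire."""
    return if_true if condition else if_false

def map_box(items, func):
    """A map box: applies a configured function to every list element,
    i.e. the graphical equivalent of a loop."""
    return [func(item) for item in items]

# Wiring the boxes together, as one would by dragging connections on a canvas:
prices = [10.0, 25.0, 40.0]
discounted = map_box(prices, lambda p: p * 0.9)   # apply a 10% discount
total = sum_box(*discounted)
label = branch_box(total > 50, "free shipping", "standard shipping")
print(total, label)
```

A few boxes like these are easy to follow. The trouble described below begins when the graph grows to hundreds of boxes and wires.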
Unfortunately, the existence of these low-code/no-code tools might give an executive of a company some wrong ideas. They might fall under the false impression that they can replace all of the “traditionally developed” software components with the software that is developed by using these tools.
The point of undertaking such a project is to cut costs for the company. Some executives think they can replace the multitudes of software engineers who maintain the traditionally written software components with cheaper-to-hire contractors or vendors who could supposedly quickly develop the same software functionality using these graphical programming tools. Executives think that these contractors or vendors wouldn’t need any actual software engineering education or training, and would therefore be cheaper to hire.
However, life does not work that way.
Sure, the management might be able to replace some of the software, but not all. Sooner or later, such a large project would run into huge roadblocks, oftentimes after too much money and time have already been sunk in.
Here is the bitter reality: It is pretty easy to develop basic software functionality using these low code/no code tools. However, when more features are needed, when the software requires more complexity, these tools start falling very short. They are unable to handle that kind of complexity due to their very nature.
The more complex a piece of software is, such as the entire business layer of a large system, the larger the visual graph generated by such a tool is going to be. It takes lots of boxes and lots of connections between them to implement complex functionality.
In such cases, these tools end up generating visual graphs of immense size and complexity that a human mind cannot make sense of. And as I argued in the very first chapter of this book, the whole point of software engineering is to develop software that can make sense to a human being.
The more “traditional” programming languages such as Java, C#, Python, JavaScript, C++, etc. are much more capable of handling software complexity than the graphical programming tools. These languages contain constructs that enable easier management of software complexity: constructs such as functions, classes, modules, packages, etc.
There are best practices among software engineers that have been developed through decades of collective experience. These best practices enable better management of software complexity.
From what I’ve observed, the programs generated by these graphical programming tools resemble huge webs of tangled hair, incomprehensible to any human being, including the creator of that mess.
In my career, I have never witnessed a project using a graphical programming tool succeed. They never went beyond developing simple web apps.
The Way Forward for Graphical Programming Tools
Does this mean there is absolutely no future in software development for graphical programming tools?
I wouldn’t say that, because there are already some examples of a certain breed of graphical development tools being used in software development very successfully.
These are the graphical game engines used in video game development.
Game engines like Godot, Unreal, and Unity have become very widespread in game development. Since video games are highly visual platforms, it makes sense to use a graphical development tool. The game engine enables the user to set up the elements in the game scene and define the interactions between them. For example, the user can drag and drop images on a digital canvas to set up the game characters, players, NPCs, monsters, and the platforms that they can walk/jump on. The user can also use the game engine GUI to define configurations for the game, such as setting up gravity in a platformer game and determining which game elements will be affected by it.
So far this sounds suspiciously like the low-code/no-code development tools I mentioned earlier. So what makes game engines different? What makes them relatively more successful?
In my opinion, there is one important characteristic that the game engines possess, which also shows us the way forward for all graphical programming tools in general: The game engines integrate seamlessly with a traditional programming language. They make use of a traditional (i.e. text-based) programming language with full language features, instead of trying to supplant or replace it.
Sure, you can do a lot by setting some configurations on a game engine, and dragging & dropping some images. You can even create very simple games like this. However, when you have to implement more complex features in your game, or define more complex behaviors for your game elements, then you have to use a programming language. The game engines make this very easy to do.
To give you an idea, you can select your player game character image on the GUI, and attach some scripts to it. In these scripts, which are written in a bona-fide programming language like C# for instance, you can define how your character is going to interact with the other characters or objects in the game. You can develop a function to determine what happens to your character’s health points if they get hit by a sword or a bullet (depending on the type of game). In the script, you can call the various APIs of the game engine and execute the various tasks that the engine is capable of, such as creating a new game element on the fly, or changing the gravity of the physics engine, etc. Anything you can configure from the game engine GUI, you are also able to configure from the script itself.
Unity uses C#, Unreal uses C++, and Godot uses its own Python-like language called GDScript (as well as C#) to define these scripts. Since these are actual programming languages, they enable the application of software engineering best practices for clean code and clean architecture. The game developer can organize their code in a sensible way.
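To give a flavor of what such an attached script looks like, here is a sketch in plain Python (GDScript is Python-like, so the shape carries over). The class and method names here are hypothetical; a real Godot or Unity script would call the engine’s own API instead.

```python
# A plain-Python sketch of the kind of script one attaches to a game
# character in an engine. All names are illustrative, not real engine APIs.

class Player:
    def __init__(self, health=100):
        self.health = health

    def take_hit(self, weapon):
        """An engine would call a handler like this when it detects a collision."""
        damage = {"sword": 15, "bullet": 25}.get(weapon, 5)
        self.health = max(0, self.health - damage)
        if self.health == 0:
            self.on_death()

    def on_death(self):
        # In a real engine, this might play an animation and respawn the player.
        print("game over")

player = Player()
player.take_hit("sword")
print(player.health)  # 85 after one sword hit
```

Because this logic lives in ordinary code rather than in a tangle of boxes, it can be organized into classes and functions, reviewed, and tested like any other software.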
Game developers not only need to know about the features and capabilities of their game engine’s GUI, they also need to know the respective programming language used by the game engine very well. Game developers, like all other software engineers, have the responsibility to be knowledgeable about the software engineering best practices.
As I said, this could be the way forward for the graphical development tools. In the future, I can envision such tools used in the other software engineering fields besides video game development. Such graphical programming tools are going to have to work along with the more “traditional” bona-fide programming languages, and integrate with them seamlessly in their usage workflow. Not try to supplant or replace them.
And doing development using such graphical programming tools is going to require actual software engineers with proper training and education. There will be no getting rid of us so easily.
Generative AI
I was a couple of months into writing this book when generative AI made a big splash in the news. Articles started to get published claiming that software engineers would soon be replaced by AI and no longer be needed within 5 years.
While I didn’t quite believe a word of these articles, they still piqued my curiosity. I thought I should try generative AI in my own software development projects, and get a glimpse of the future of software engineering. I decided to see for myself this harbinger of things to come.
I was working on a difficult problem that I wasn’t familiar with at the time. When I asked the generative AI how to implement this particular piece of software, the answer it gave me genuinely amazed me. It was an elegant-looking solution to a complex problem. I started to think that maybe these articles might be correct after all. For a brief moment, I even doubted whether I should keep writing a book on software development at all. Who is going to read such a book when there are not going to be any software developers in the future?
Then I started to actually implement the software solution that the AI provided me, and the fantasy quickly fell apart.
I realized that the AI was completely making up some of the answers. On this occasion, it was giving me code containing Linux commands that did not exist at all. AI researchers call these “hallucinations”. Basically, the AI was hallucinating some of the answers.
Nevertheless, this hiccup didn’t dissuade me from continuing my experimentation with generative AI. I still kept on using it in my own software development projects.
So far, I have come to discover that when it comes to software development, generative AI is a mixed bag. There were occasions when I was quite impressed with it. It was as if the AI was reading my mind about what code I was about to write, writing it before I could. All I had to do was press the Tab key to accept the suggestion. Yet, there were other occasions when I couldn’t get any useful answers from the AI. They were slight variations of a previous answer it gave me, useless in that particular scenario, or they were just outright wrong.
From the (more honest) research articles I have read on the internet, it seems that generative AI is correct around 60% to 80% of the time when it comes to providing answers to software development questions, depending on the benchmarks run.
From all this experimentation, I believe I now have a general sense of what software development might look like in a couple of years.
Until these recent AI innovations showed up, we software engineers had been using a combination of internet search and our actual brains to come up with our solutions. As I mentioned in previous chapters, let’s admit it, no software engineer memorizes every solution. We still look up information online, quite frequently. A lot of times, we even look at our own previously written code, to see how we implemented a particular thing a while ago.
Now, generative AI is going to be thrown into this mix. I predict that we are going to use all three tools at our disposal to find solutions to our software design and development problems: generative AI, internet search, and our own natural intelligence.
Regarding that particularly difficult problem I mentioned before: Even though the generative AI completely hallucinated a crucial portion of the solution, it still guided me in the correct direction. With the combination of the AI answers, some googling, and by using my own natural intelligence, I ended up arriving at the correct answer.
And this, I believe, might be the direction of software engineering for the next foreseeable future.
But even this future has some particular issues.
The Crutch of AI
I must admit, it gets a bit tedious looking through the generative AI’s suggested code and trying to figure out whether it is implemented correctly. There have been quite a few times when I failed to catch some of the AI’s errors, which were luckily caught later on during testing. (Yes, it always pays to do thorough testing, especially when AI is involved in the development process these days.) Sometimes I cannot help but get the feeling that instead of doing software development, I am constantly doing code reviews for a somewhat error-prone software developer. And there are times when I get tired of all this, turn off the AI, and get on with some old-fashioned software development.
This brings me to some important points: Junior engineers should be really careful with generative AI. It could be easy for them to blindly trust the AI’s suggestions and then get burned by the results. Also, I believe that an engineer can learn new software concepts in a much better way when that engineer is the one developing the code themselves. The learning process improves tremendously when one experiments with their own code, tweaking things here and there, and observing the results of the software run. A junior engineer who is constantly relying on the crutch of generative AI by copying-and-pasting its solutions will be missing out on some important learning experiences, in my opinion.
Can the AI Get Better? Would That Help?
Dealing with generative AI in software development brings a few important questions to mind.
The first important question is, how much can AI improve? Can it be correct a higher percentage of the time than 60% to 80%? The corollary question is, would this help us at all?
It is currently a far-fetched thought, but even if AI generates the correct solution 99% of the time, there is going to be that 1% of the time when it generates the wrong solution. This means we still need an experienced human engineer in the loop to check and correct the wrong solutions generated by the AI.
Now, this begs the question: Aren’t we applying a double standard to AI? After all, we humans aren’t correct 100% of the time either. Not even close. We make mistakes all the time.
However, we humans work in teams within our organizations. When we make a mistake, there are other humans who can inspect the code, (hopefully) understand it, and help us fix it. We can also do online searches to see if anyone else has run into the same problem, and what kind of solutions ended up working for them. This can still be considered asking for help from other human beings in a roundabout way.
When AI makes a mistake, if there is no human in the loop, how is that mistake going to be identified and fixed? Are we going to deploy another AI or two, and hope that they can catch the issues with the solutions from the first AI? What happens when they all make the exact same mistakes?
We humans use tests to catch and identify the issues in our programs. AI can do the same, of course. On the other hand, would the AI be able to come up with all the important test cases? Could AI understand the client requirements correctly, and ask follow-up questions to determine the edge cases? Even we humans fail at this occasionally, and some bugs are only caught in actual production. When a customer files a bug report, would the AI be able to replicate the issue, identify its root cause, and fix it?
All of these little questions lead us to the most important question of them all: Can the AI do everything that an experienced human software engineer does?
Coming up with algorithms is not the entirety of the software engineering profession. There is more to our profession. Can an AI get really good at understanding the business requirements, architecting a good, scalable, and maintainable solution, and coming up with a good strategy to design and implement the software systems? As I mentioned here before, can an AI get really good at identifying the root causes of issues and figure out the solutions for them? Can an AI write code that is easy to understand by us humans, and not just by the other AIs? Can an AI see the whole picture of an already existing large software system and make additions/modifications to it without breaking the other parts of it?
The answers to some of these questions might lie in the very nature of AI.
The Nature of AI
To a regular person, AI looks like magic. Its capabilities seem astounding and limitless. This is probably why there has been so much hype about generative AI since its debut.
However, in the end, AI is just complex software running on a machine. While it is a very complex and quite marvelous system, it still has its limitations.
Before I go into the nature of AI, I need to make a disclaimer: AI is not my specialty. (My specialties are backend service development and software tools development.) Nevertheless, I have had some experience with AI in my work, and have taken some classes on it. I would argue that I know more about AI than a random person on the street. Unless that random person happens to have a PhD in machine learning.
The most important thing I want to say about the nature of AI is that AI is not exactly built like a human mind.
How do I know this? Because we don’t actually know much about how exactly a human mind works. As humanity, we are just beginning to understand how the human brain is built. We have just begun to map the regions of the human brain to their functionalities, using advanced imaging techniques like fMRI [1].
When it comes to the human brain, we have learned about its operation at its largest scale and its smallest scale, but not in between. We are mapping the regions of brain functionality. We also know more or less how an individual brain cell (neuron) operates and how it forms connections to other neurons through cellular structures called axons, dendrites, and synapses. On the other hand, we currently have very limited knowledge of the intermediate, mid-level scale of brain architecture. How do neurons connect to each other to form the various regions of our brain? Are there subregions within those regions with different sub-functionalities? How exactly do neurons connect and work together to express an emotion, recall a face from memory, solve a math problem, or become a conscious human mind? We are just at the beginning of our journey to learn about the architectural structures of the neuron connections in the human brain. The research is still ongoing [2][3].
The brain and its neurons were the inspiration for building the AI that astounds us today. However, AI is not an exact replica of a brain. It is just a complex software system that imitates nature at its surface. AI is to the human brain as the wing of an airplane is to the wing of a bird.
The individual neurons used in AI’s neural networks are essentially mathematical constructs. When the first artificial neural networks were designed decades ago, their neurons were designed to approximate the neurons (brain cells) that we find in nature, with multiple inputs and a single output. Each neuron in an artificial neural network receives some numeric inputs either from other neurons, or some input sensors. Each input of a neuron is multiplied by some respective “weight” (again, some other numeric value). The neuron then sums up the values of its weighted inputs. If the total sum exceeds a certain amount, then the neuron is “activated”. This means the neuron outputs a certain value. If the neuron is not activated, its output remains zero. The neuron’s output is connected to other neurons or to the final output of the AI neural network.
The inputs of a neural network could be pixels of an image, where each pixel brightness is converted to a value and fed into the neural network. The output of a neural network could be bits representing an integer number. Such a neural network could be able to tell us the integer value of a hand-written number existing on an image, if one is written there.
I mentioned the weights multiplied by each input on each neuron in the neural network. These weights have to be specific values for the neural network to function correctly. These weights are adjusted by using various techniques (such as back-propagation). This is called “training the neural network”. The neural network is trained by using “training data”. In this particular example, the network is fed a series of images, where each image could be a different hand-written number. The neural network is also told what its output value should be for each image. The weights in the neural network are adjusted throughout this process. This is how the neural network “learns” from each training data: by the gradual adjustment of its weights. After the training is over, when a new image is presented to this neural network, it should be able to tell what number is written on the image.
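The weighted-sum-and-threshold neuron described above can be written down in a few lines. Below is a minimal sketch of a single artificial neuron (the classic perceptron), trained with the simple perceptron learning rule rather than full back-propagation, learning the logical AND of two inputs. It is an illustration of the principle, not how modern deep networks are implemented.

```python
# A single artificial neuron (perceptron): a weighted sum of inputs,
# then a threshold "activation". Trained with the classic perceptron
# learning rule, a simpler relative of back-propagation.

def activate(weights, bias, inputs):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0  # the neuron either "fires" or stays silent

# Training data: the logical AND function and its expected outputs.
training_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights, bias, rate = [0, 0], 0, 1  # a rate of 1 keeps the arithmetic in whole numbers
for _ in range(20):  # sweep over the training data several times
    for inputs, target in training_data:
        error = target - activate(weights, bias, inputs)
        # Nudge each weight in the direction that reduces the error.
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print([activate(weights, bias, x) for x, _ in training_data])  # → [0, 0, 0, 1]
```

After training, the neuron’s weights have settled into values that reproduce AND: this gradual adjustment of weights is exactly the “learning” described above, just on a toy scale.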
This is an example of a very simple neural network. Such neural networks have been in existence for decades. The generative AIs that debuted recently are much more complex in structure, built using deep learning models such as transformer architectures. I will not go into any more detail about these subjects, because they go beyond the scope of this book. (And also because I would need a PhD in these incredibly intricate subjects to do them any proper justice.) In essence, generative AIs are built using neural networks like the one I just described, operating on very similar principles: neurons and weighted connections. In the case of generative AIs, the neural networks have a much more complex architecture, and an immense amount of data goes into training them: they have been trained on vast portions of the data available on the internet.
Some might say that the neurons in an artificial neural network that I described here might resemble the neurons in our own brains, in terms of their operation. Some might also even say that the deep networks used in generative AIs might resemble our neocortex with their multiple layers of neurons. However, I would make the claim that generative AIs do not work in the same way as a human brain does. These artificial neural networks do not work in the exact same way as their natural counterparts. They do not have the same architectural structure because we do not even know the exact architectural structure of a human brain.
In the future, we might be able to develop AI that has the capabilities matching or even exceeding those of the human brain. But I believe that we still have a long way to get there.
For the time being, we software engineers will have to keep using our very human brains to build large software systems. The software that we build is going to have to be clean and understandable to our mere human minds.
The Future of AI and Software Engineering
When the generative AI buzz was at its highest, a friend of mine, who is a senior finance manager at a well respected company, texted me and said: “Your profession is in danger. Looks like AI will replace all of you software engineers.” He was saying this after listening to too many “tech” podcasts about generative AI. My reply was “Yes, it’s a possibility that my profession will go away, but not before yours is automated too :-)”
Let’s say AI gets really advanced, becoming equal to or even superior to the level of a human mind. Let’s say AI now has excellent intellectual capabilities exceeding the brightest human minds. And now, it is ready to replace us software engineers for real.
Why would AI just stop at replacing only the software engineers? Why wouldn’t it replace the business people, the management, and the executives as well? If we expect AI to design and develop large software systems, why don’t we expect it to come up with product ideas, sales strategies, and ways to operate a business?
After all, executives make much more money than engineers. An average CEO gets paid around 300 to 400 times as much as an average software engineer these days, in terms of total compensation including stock grants. And that’s just the CEO. A company might have many executives, VPs, and directors with extremely high salaries. (Whether they deserve all that money is up for discussion.) Replacing an executive would save the company a lot more money than replacing an engineer.
At that point, I should be able to ask the super-advanced AI: “Hey, could you please come up with a profitable business idea, find all the necessary funding and resources that you need, then implement the software, and start operating the business? And pay me a dividend of the profits too?”
Such an AI should be able to do all of these things autonomously, without the need of any additional human input.
This raises the following questions: If AI has replaced almost all our jobs, who is it going to sell its products & services to? If most humans have lost their jobs to AI, they won’t be able to afford anything. Would the AI’s customers be the other humans who own an AI-driven business? Would everyone have to be an AI-driven entrepreneur in this kind of future? How could this all work?
At this point, we are leaving the realm of reality and getting into the realm of sci-fi. Lucky for us, sci-fi literature has covered the various scenarios of an AI future pretty extensively. Such a future could range from malevolent AI systems rebelling against humanity and driving us into near-extinction as in the Matrix and the Terminator franchises, to benevolent AI superminds administering our economy and government and ushering in an era of post-scarcity utopia as in the Culture series by Iain M. Banks. Or it could be a future where humanity is the one doing the uprising, banning the construction of any sort of “thinking machines”, as in the background setting of the Dune series by Frank Herbert.
Personally, I have a feeling the reality will be a lot less dramatic.
With further advancements, it is possible that AI can end up replacing a bunch of jobs. But it can also end up creating brand new jobs, just like many other technological advancements did in the past.
Humanity has already been through many cataclysmic paradigm shifts. As I mentioned at the beginning of this book, we spent most of our existence as hunter-gatherers. Only in the last 10,000 years or so, we became farmers. And only in the last couple of hundred years, we built industrialized civilizations.
The changes are happening at an accelerating rate, within shorter timespans. However, each of these changes came with a transition period. For example, when automobiles were first invented, the change did not happen overnight. There were many decades when cars and horse-drawn carts co-existed on the same roads. If self-driving cars become a widespread reality today, then there are going to be AI-driven cars co-existing with human-driven cars on the same roads for many years to come, in all likelihood.
For every paradigm shifting change, there is a transition period, which I believe is necessary and beneficial. And I hope that when it comes to AI, this transition period is not going to be a painful one to go through.
AI and Software Quality
In the end, the AI we use today is a very complex software system built by very human software engineers. AI systems contain complex data pipelines that can handle enormous amounts of data in order to train their machine learning models. That data sometimes needs to be cleaned up of invalid values, depending on the specific machine learning problem. All of this requires the development of intricate software systems.
This means that every pitfall I mentioned in my previous chapters and every point I made so far apply to AI as well. When building AI, one must pay close attention to software quality.
Engineers, managers, executives, everyone working on building an intricate software system like AI needs to know the importance of software quality. They need to be aware of the software quality best practices that I cover at the beginning of this book. They need to pay close attention to the tech debt that will inevitably arise when building a software system with a complex architecture, and dedicate a continuous, ongoing portion of their development cycles to dealing with tech debt issues. They need to apply the processes that aid in the development of quality software: iterative software development, design document reviews, peer code reviews, continuous integration/continuous delivery, and fully automated integration & unit testing.

The managers and executives especially need to be conscious not to put ridiculous deadline pressures on the engineers, and not to needlessly assign too many engineers to the same tasks in a misguided attempt to decrease the overall development time. They need to set proper incentives for their employees, and not use performance metrics that lead to the gaming of those metrics and ultimately to a toxic work environment. They need to emphasize strong code ownership that will resist the tragedy of the commons and prevent the decay of code quality. They need to emphasize good hiring practices for engineers, and not use algorithmic puzzle interview questions that an engineer has to memorize and will never end up using in their day-to-day work.

And most importantly, the engineers themselves need to have solid software development knowledge, integrity, humility, and great communication skills. They need to always stand up for software quality.
Just like everyone else developing an intricate software system.
Defending software quality is more important than ever, particularly if we are going to rely on AI systems for many aspects of our lives. Disregarding software quality is likely to have catastrophic consequences, both today and in the future.
[1] Glover, Gary H. “Overview of Functional Magnetic Resonance Imaging.” Neurosurgery Clinics of North America, vol. 22, no. 2, 2011, pp. 133-139. doi:10.1016/j.nec.2010.11.001.
[2] Reimann, Michael W., et al. “Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function.” Frontiers in Computational Neuroscience, vol. 11, 2017. doi:10.3389/fncom.2017.00048.
[3] Lee, Byeongwook, et al. “The Hidden Community Architecture of Human Brain Networks.” Scientific Reports, vol. 12, 2022. doi:10.1038/s41598-022-07570-0.


