Agile has become somewhat of a religion in today’s software development world. There is a priest class: Agile coaches, Scrum masters, product owners. There are prayers: Daily standups, bi-weekly demo sessions. There are confessions: Reflection time at the end of every 2-week sprint. There are Sunday schools: Agile education and certification organizations. And of course, there are the Commandments: The 12 Agile Principles.1
Just like any other religion, a lot of it has to be taken on faith. There does not seem to be much scientific research verifying the claims of Agile; there are just anecdotal reports of Agile helping this team or that, making them more productive. There do not seem to be many peer-reviewed, reproducible papers in respectable scientific journals that thoroughly investigate the claims of Agile: that prove, for instance, that pair programming really improves the productivity of senior engineers, or that 2-week sprints are any better than 4-week sprints.
As I mentioned before, Software Engineering is a relatively young practice, spanning only several decades. As such, ideas occasionally come up claiming to disrupt the industry and shake it from its foundations, without much scientific backing. Agile certainly seems to fit this description.
At this point, you might be thinking that I am here to criticize Agile as a complete fad and reject it wholeheartedly. You are correct that I am going to criticize some major aspects of it, quite mercilessly indeed. However, I am not going to say it’s a complete fad or reject it entirely. Agile is a response to the pain points of the previous methodologies of software development, such as Waterfall. The waterfall methodology did have some real major pain points, and Agile indeed does seem to address those. There are parts of Agile that I like, respect, and follow in my own development practices.
On the other hand, there are parts of Agile that I believe to be unnecessary and even harmful. There are parts of Agile practice that were introduced with good intentions, but that seem to have pernicious effects in the long term. From my observation, in a lot of cases, Agile seems to devolve into a micromanagement tool, especially under clueless management without any proper technical background.
I am also going to make the claim that Agile’s championing of the constant intermingling between business and engineering people usually results in the business people dominating the engineering folks. In a healthy organization, the engineering teams need to have a certain level of autonomy in order to do their jobs right. They need to be able to push back on the business interests to defend software quality as necessary. From what I have observed, Agile seems to frequently hinder engineering autonomy and therefore software quality.
So I will try to make this a balanced criticism of Agile. And I will make this claim: We do not have to accept the entire Agile methodology into our hearts as our Lord and Savior. We do not have to swallow the entire thing hook, line, and sinker. We can make a rational and critical analysis of which parts of Agile are actually worthy, and use those parts in our development practices. And try to avoid the adverse parts of Agile if we can.
The Pain Points of Waterfall and the Rise of Agile
Before Agile, there was Waterfall.2 According to the tenets of the Waterfall methodology, software development happens in distinct and separate phases:
Requirements gathering & analysis
Software design
Software implementation & development
Testing
Deployment & maintenance
The Waterfall model states that one phase should be finished completely before moving on to the following phase. This means, for starters, that all of the requirements must be gathered and analyzed thoroughly before any sort of software design can begin. Then, the entire software must be architected & designed in excruciating detail before it can be implemented in code. The entire implementation must be completed, and all the parts and modules must be integrated, before the software is passed to a separate team of testers. Only once the testers approve the quality of the software can it be deployed to production servers.
At some point, it became apparent that there were some very serious problems with this approach. Software engineers realized that they were not fortune tellers with the powers of Nostradamus, able to see into the future, determine the entire set of requirements up front, and figure out exactly how the entire software was supposed to be designed and implemented from start to finish. It turns out software engineering is a different beast than most other engineering disciplines. In a lot of cases, you find out about some of the requirements and some of the design kinks only after you start coding. Halfway into developing your code, you might realize that the product needs additional requirements to make the customers happy, or that your architecture needs some additional tweaking to make the software more scalable.
The Waterfall methodology could not answer these concerns. It made the software development process sluggish and painful. For example, if any changes needed to be made to the software design, the process had to restart in its entirety from the software design phase. The design document had to be rewritten, then the entire software had to be re-coded. Good luck with trying to re-integrate those updated modules. And best of luck in your testing phase. Making changes in one software module would usually end up breaking some code in another module. When a lot of changes are made simultaneously, a lot of tests fail, and a lot of time has to be spent debugging and fixing the multitude of issues. It really sucks to be an engineer facing such issues. These are the times when an engineer starts wondering how their life has brought them to this point, and whether they made the correct choices in life after all.
There needed to be a better way of developing software. A less painful, less sluggish, and more “agile” way of developing software.
And there came the Agile methodology into the world, shining like a bright beacon of light.
Actually, it wasn’t just Agile. A plethora of development methodologies started to spring up in the 1990s and early 2000s along with Agile: Extreme Programming, Scrum, Kanban, etc. They are all related in their philosophies and approach to software development, and they share a lot of common points. That’s why I will consider them all to be under the Agile umbrella and use the term Agile interchangeably for the remainder of this chapter, for simplicity’s sake. I believe my comments and criticisms apply to pretty much any of these methodologies equally, whatever you may wish to call them.
Agile’s foremost and most important claim is this: Software must be developed incrementally. For any given development effort, the software engineers need to work on a limited set of features, which should be small and limited in scope. The engineers then try to deliver a working software at the end of a brief development cycle (ideally a couple of weeks long) with only this limited set of features implemented. The entire development cycle of requirements analysis, software design, implementation, testing, and deployment needs to be done while developing this small set of features. Once the features are delivered and deployed, the engineers start with the development cycle again, for a new small set of features.
I personally believe that this core claim of Agile also happens to be the most correct one. Software does indeed need to be developed incrementally. You cannot follow the Waterfall practice of planning, designing, and implementing the entire set of features in a single development cycle, and expect good results. But if you implement a small set of features that are limited in scope and complexity, then software development becomes a lot more tenable. It becomes much easier to develop the software and bring it into a state where it’s functioning correctly without any bugs or defects. (On the other hand, I don’t like the idea of imposing a strict 2-week-long development cycle length. I will come to this later.)
The iterative mode of development also happens to be way more fun. We software engineers get a huge amount of enjoyment from seeing our software run correctly after a round of development, without having to deal with an overwhelming number of bugs and issues. Never underestimate the power of enjoying your job. It keeps you engaged and going for years and years. And after years of doing your job, you’ll often find that you are much better at it.
After applying the iterative development cycles multitudes of times, the resulting software can end up containing a rich amount of features and sophistication, capable of handling many different use cases. The iterative development process also makes it much easier to refactor the software with small incremental changes, and thus better manage its tech debt. Between the development cycles of adding features, you could also have development cycles of refactoring the code and reducing its tech debt. Therefore, the kind of software developed with this methodology can be very clean and maintainable, even if it is running a very complex task.
To emphasize the positives of this development methodology once more: Nature also seems to favor the iterative way of development. Evolution happens incrementally.3 The DNA code that happens to be the code of life gets developed in an incremental fashion by nature, with each seemingly small mutation undergoing the test of natural selection, one mutation at a time. In the end, you get the amazing diversity of the tree of life, with highly complex and sophisticated multicellular organisms. But there are no 2-week sprints in nature. The development sprints of nature might take up to tens of thousands of years, a luxury of time that modern corporations probably cannot afford.
Agile’s way of iterative development can also handle changing requirements from the business side much better than Waterfall can. With iterative development, the software system should be able to accommodate changing design and code with relative ease, to a certain degree. Bridging the gap between the business requirements and the software design and implementation can be challenging, and can require much back-and-forth communication between the business folks and the engineers. Business requirements and software design can undergo multiple changes over the course of product development. Agile rightly emphasizes working closely with business people, and coordinating with them on an ongoing, periodic basis as the software is developed.
However, as I will go into detail later in this chapter, there should be some limits to how much the business people can change their requirements, and how accommodating the engineering team should be to such massive changes that radically alter the software design and architecture.
Iterative Development
In the early-to-mid years of my career, there was no official Agile or Scrum or Kanban development methodology present. However, I didn’t do any Waterfall development either. Waterfall methodology was already on the decline when I started my career. My development style was actually pretty iterative. This was the development style that I learned in my formative years as an engineer, and have stuck to since then. I owe a lot to all the companies that I worked for, where I learned this way of iterative software development. I know I might be giving you the vibe of someone who is stuck in their old ways and talking about the “good old days”; however, I must say I have always enjoyed this development process.
So, here is how engineers did development back then, between that brief period after the decline of Waterfall and before the official introduction of Agile with all its tenets.
At the beginning of each quarter, engineers would determine their overall objectives and the key results that they wanted to achieve that quarter. This would be determined through discussions with the management and the other relevant parties (PMs, business folks, etc.) Once they had their overall objectives set, then they would get to work.
At the beginning of the development process, the engineers had to come up with a design document, identifying and documenting the different tradeoffs they could have in their software architecture, and justifying the design choices they made. The design document would be an overview of the general software architecture. It could describe the interfaces between the software components and the rough overview of the data being sent between those components, but it wouldn’t go too much into implementation detail. And if during the course of the development there were changes to the general software design that needed to be made, the design document could be altered. Obviously, engineers aren’t perfect. Sometimes the design document and the actual software design would drift off from each other.
The design document did not only contain the software’s functional design; it also contained sections about scalability, latency, security, privacy, and, most importantly, testability. The software engineers had to be mindful of these concerns from the very beginning, and bake them into the overall software architecture early on. They had to think about how the software could scale to multiple servers, how those servers could be load-balanced, and what could be done to keep the overall latency in the system low. The engineers had to make sure that the software did not store any personally identifying information belonging to users, which could turn out to be a privacy violation. They had to make sure that the service endpoints were properly authenticated, and that the data was stored with proper levels of encryption. They had to think about how the individual software components and the overall system would be tested. While their design documents did not include the list of test scenarios in fine detail (those would be developed along with the actual production code in an iterative fashion), they did mention the kinds of testing frameworks that would be needed for unit, integration, and end-to-end system tests, along with a brief overview of how these tests would be implemented.
From the functional perspective, the design document included development milestones, and a rough overview of the iterative development to be undertaken. In an Agile-like fashion, the engineers were meant to first try to deliver an MVP (minimum viable product) and then build more functionality on top of it during each milestone. There could be multiple months or even multiple quarters between each milestone, depending on how complex and involved the product was. The development plan in the design doc and its deadlines had to be flexible, because during the actual development, there would always be new things coming up and slight deviations from the original plan. The engineers would take time to resolve every major issue before they delivered each milestone. They would never rush the implementation, work under crunch times, or do any unsavory things like that, just to stick with the original development plan in the design doc.
The design document would be reviewed by all the engineers in the team. If anyone raised any issues, the engineers would discuss and try to address them. The document would be altered as necessary. Once it reached a good state, it would be approved by the team. Then it was off to the actual implementation stage.
Before the engineers started implementing a particular milestone, they would create a list of tasks for each individual feature they needed to implement, as fine-grained as possible. These were called “feature requests”. The feature requests were tracked using a web app software tool. In the same software tool, bug reports related to issues that came up during implementation could also be created and tracked. If this tool reminds you of a Kanban board or Jira, then yes, you would be correct. All the feature requests and bug reports would be tracked on this software tool, as well as their states: whether they were still open issues, or were fixed, or could not be fixed due to various reasons, or whether they turned out to be duplicates of other pre-existing issues. During the implementation, each change request submitted to the code repository would be associated with an open feature request or an issue. Moreover, the software tool could also track the dependencies between the bugs and feature requests: i.e. This particular bug has to be fixed first, before this other feature request can be implemented, etc.
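To illustrate the dependency tracking, here is a minimal sketch of such a tool's core logic (the class and method names are my own invention for this example, not any real tracker's API):

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    """A feature request or bug report, with a state and its blockers."""
    id: int
    title: str
    state: str = "open"  # open, fixed, wont_fix, or duplicate
    blocked_by: list = field(default_factory=list)  # ids that must be fixed first

class Tracker:
    """Toy issue tracker that understands dependencies between issues."""

    def __init__(self):
        self.issues = {}

    def file(self, issue_id, title, blocked_by=()):
        self.issues[issue_id] = Issue(issue_id, title, blocked_by=list(blocked_by))

    def resolve(self, issue_id, state="fixed"):
        self.issues[issue_id].state = state

    def can_start(self, issue_id):
        """An issue is workable only once all of its blockers are fixed."""
        return all(self.issues[b].state == "fixed"
                   for b in self.issues[issue_id].blocked_by)
```

So a feature request blocked by an open bug reports itself as not yet workable, and becomes workable the moment that bug is resolved.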
The actual development would consist of implementing “change requests”, which were meant to make changes in the organization’s source code repository, such as adding or deleting source files, or changing their content. Change requests would not only contain production code changes, but also test code implementations. A proper change request would usually contain more lines of unit test and/or integration test implementations than the actual production code implementation. The engineers would make sure that each production code change was covered by proper unit and integration testing.
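As a toy illustration of that ratio (the helper function and its behavior are invented for this example), a change request adding one small production function might ship with a test class several times its size:

```python
import unittest

# The production change: one small helper added to the codebase.
def normalize_username(raw: str) -> str:
    """Trim surrounding whitespace and lowercase a username."""
    name = raw.strip().lower()
    if not name:
        raise ValueError("username must not be blank")
    return name

# The accompanying unit tests, deliberately outweighing the change itself.
class NormalizeUsernameTest(unittest.TestCase):
    def test_lowercases_mixed_case_input(self):
        self.assertEqual(normalize_username("Alice"), "alice")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(normalize_username("  bob \n"), "bob")

    def test_preserves_inner_characters(self):
        self.assertEqual(normalize_username("ada.lovelace"), "ada.lovelace")

    def test_rejects_blank_usernames(self):
        with self.assertRaises(ValueError):
            normalize_username("   ")
```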
The change requests had to be reviewed and approved by one or a couple of other engineers in the team before being submitted to (or merged with) the source code repository. During this peer review process, there could be a lot of back and forth between the author of the change request and its reviewers. There could be questions on the implementation or even on the design choices. The change request author had to either implement the suggestions of the reviewers by making further changes to the code, or had to somehow convince them that the code was good enough as it was already written. The author was expected to write a good description for the change request submission itself, and provide good documentation in the description regarding why and how the changes were being made. The change request would also have to contain the bug id (or the feature request id) from the software tool that tracked the bugs & feature requests.
Everybody’s code had to be reviewed, without any exceptions. When the tech lead wrote some code, that code had to be reviewed by someone else in the team. Even if the CTO of the entire organization wrote some code, it had to be reviewed by someone else. No code would ever be checked into the repository without the approval of each and every reviewer for that change request.
Once the change request was approved by the reviewers, the author would submit it to the code repository with the purpose of merging the changes with the official company code. The change request submission was an involved but automated process that went through a CI/CD (continuous integration / continuous delivery) tool. For each change request submission, the CI/CD tool would first compile & build the software binaries for all the affected software packages, and then run all the relevant unit and integration test suites on them. If any of the tests failed, the change request would not be submitted. The author would have to make the necessary fixes, get any further reviews done, and then try submitting the code again. If all the tests passed, the code changes would be submitted and become part of the company’s code repository.
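The heart of that submission gate is simple: build, run every relevant suite, and merge only on green. A minimal sketch of the logic (the function names and the callable-based interface are my own simplification, not any particular CI/CD product):

```python
def try_submit(change_id, build, test_suites, merge):
    """CI gate for one change request: reject on any build or test
    failure; merge into the repository only when everything passes."""
    if not build():
        return f"{change_id}: build failed, submission rejected"
    for run_suite in test_suites:
        if not run_suite():
            return f"{change_id}: tests failed, submission rejected"
    merge(change_id)
    return f"{change_id}: all tests passed, merged"
```

Here `build`, each entry of `test_suites`, and `merge` stand in for the real compile, test-runner, and repository-merge steps, each reporting success or failure.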
As you may notice, automated testing was a huge part of this development methodology. In addition to the unit tests for the developed functions, for every implemented feature there would be accompanying integration and end-to-end system tests. The software engineers tried to design the system and its code to be as testable as possible with the automated test suites that could be run by the CI/CD tool for each code change submission.
As the software code gradually built up in the code repository, there would come a time to deploy the software binaries to the actual production servers, where they could actually be used. Since this was an iterative development process, the deployment would also happen on an ongoing, periodic basis after the initial deployment. The binaries containing the most recent code changes would be deployed to the production servers once every week or two. The deployed binaries would be built from a code branch in the CI/CD tool where every unit and integration test had passed properly. In addition to the production servers, there were also QA servers. The binaries would first be deployed to the QA servers, where further testing that wasn’t covered by the aforementioned automated tests, such as manual testing, could be done. If the binaries passed muster, they would finally be deployed to the production servers, where they could serve the real production traffic.
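The QA-then-production flow amounts to a two-stage promotion gate; a minimal sketch of that flow (the stage names and callable interface are illustrative assumptions, not a real deployment system's API):

```python
def promote(build_id, qa_checks, deploy):
    """Staged rollout: a green build goes to the QA servers first, and
    reaches production only if every manual/exploratory check passes."""
    deploy(build_id, "qa")
    if not all(check(build_id) for check in qa_checks):
        return f"{build_id}: held in QA for fixes"
    deploy(build_id, "production")
    return f"{build_id}: serving production traffic"
```

Each entry in `qa_checks` stands in for one of the manual or exploratory test passes done on the QA servers before the production push.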
None of this development process was done in a vacuum. There would be periodic team meetings, usually once a week or so, where all the engineers on the team would meet up. During these team meetings, everyone would state what they were currently working on, and whether they had run into any issues or roadblocks. Any such issues would be discussed by the entire team, and the engineers would try to help each other find a resolution.
Now I must note here that during these team meetings, the engineers would never ask each other annoying questions like “When are you going to be done?” or “Why haven’t you finished this task yet?” They would never try to micromanage each other. They actually trusted each other’s integrity, capabilities, motivation, and work ethic.
The engineers would also periodically conduct meetings with the external stakeholders such as the clients, business folks, or the other teams that use the servers, etc. By "external", I mean anybody external to the core engineering team building this particular service. These meetings would take place once a month or in some cases a couple of times a quarter. Any important points that the engineers wanted to share with the external stakeholders would be discussed in their own team meetings first, where they would first try to come to a consensus as the engineering team. Any important tradeoffs in requirements and design would be shared with the stakeholders. The engineers would try to be as transparent as possible. They would try to handle the stakeholders’ expectations as realistically as possible. Understandably, they also expected the stakeholders to have the good sense to not change the requirements on the software too frequently or too radically. Any nonsensical requests would be rejected by a unified team of engineers.
As might be expected, things didn’t always go smoothly. The engineering team could occasionally have disagreements about how certain things should be designed or implemented. But after some rounds of discussion, they would usually come to a consensus.
Each engineer on the team had a strong sense of ownership. Each was responsible for the design and implementation of a certain part of the whole system. Each cared deeply for their particular design and implementation. They tried to make sure that the code they delivered was clean and maintainable.
The deadlines were ultimately decided by the engineers. Deadlines could not be imposed on the engineers by the managers, PMs, business people, or sales people. These folks deferred to the engineers on what the deadlines would roughly be, and roughly when each project milestone would be completed. The deadlines had a large range, on the order of quarters. For example, the engineers would say that they would do their best to finish a particular project in 2 quarters, but that it could sometimes take 3 quarters instead. The engineers did not feel pressured to do crunch time or cut corners on anything. They would make sure to deliver a quality product that they would be proud of in the end. They still tried their best to deliver everything in a timely manner, because each of them had a deep sense of responsibility, integrity, and knowledge, and a high degree of motivation. The engineers trusted each other and their managers, and their stakeholders trusted them in return. The engineers tried their best not to betray that trust.
The engineers and the business folks saw each other as equals, cooperating to deliver the best quality products to their customers. Neither tried to dominate the other. Neither saw the other party as subservient.
These were the good times. These also happened to be my formative years. I genuinely enjoyed doing software development under these conditions, following this process.
Now you might say this process looks very much like Agile, that it is pretty much Agile in all but name. If that is the case, then I would be very happy to embrace this “Agile process,” or whatever you want to call it, as it was. However, this process did not contain some of the finer points or tenets of Agile as they were officially introduced later on. And some of these official Agile tenets turned out to be detrimental to the software development process, as I will now explain.
Agilefall
In the later years of my career, Agile gradually came onto the scene and started to spread through various software organizations. There would be Agile coaches and Agile evangelists going from team to team, trying to extol the virtues of Agile and make the teams adopt its official version. Then the story I have told you so far began to change. I started to hear a different story from my friends and from engineers I knew at various companies.
Let’s imagine this hypothetical team. At this point, once again, I should reemphasize the disclaimer that I stated in the Introduction chapter of this book:
Any characters, events, and depictions described in this book are a work of fiction and the products of the author's imagination. Any resemblance to actual persons, living or dead, or actual events is purely coincidental.
In this hypothetical team, the entire development process is driven by the business people, not engineers. The engineers are seen as subservient to the business people and are under a lot of pressure to deliver the software as soon as possible. Therefore the development is done in a hurry, which produces a lot of technical debt that accumulates over time. To say their code isn’t so clean would be an understatement. Their automated test coverage is severely lacking, which implies they have to do a lot of manual testing before each software release. Their server software is released once every couple of months, not weeks. Naturally there are lots of issues found in each release, which have to be addressed and re-tested manually, which means the software release process is delayed even further.
It’s funny how pressure from clueless managers who don’t know anything about software development to deliver software faster causes even slower software delivery. Let’s presume that the team’s management is very much aware of these issues. Nonetheless, they don’t really know how to address them, due to their cluelessness. When an engineer tries to point out some of those issues, they aren’t really listened to, since they are just a simple engineer who is supposed to follow orders.
Instead, in order to address all these issues with their software development process, the managers turn to Agile.
Agile coaches come and start to make presentations to the team about how awesome Agile is, and how it is going to save them. The team’s management swallows all that propaganda hook, line, and sinker, and then the official Agile process starts being implemented in the team.
In reality, none of the real causes of these issues end up being addressed. The tech debt stays mostly the same. The automated test coverage is never increased. The release/deployment cycles are still months long. There are still plenty of problems with each release. But on top of all these, the team’s engineers also now have to deal with Agile.
As part of Agile, the team now has 2-week sprints. At the beginning of each sprint, specific tasks are allocated to each engineer, with the implication that the engineer has to finish all their allocated tasks within those two weeks. Each task is meant to be assigned “estimation points”. Supposedly, these estimation points are only to be used to compare the difficulties and complexities of the tasks with one another; they are never meant to be equivalent to the number of days needed to complete the tasks, as the engineers are told by the Agile coach. In practice, the team members just assign the estimated number of days to complete a task as its estimation points, and get on with their lives. Hours of meeting time are spent every 2 weeks trying to assign estimation points to all the various tasks.
At the end of the 2-week sprint period, the engineers have to give a demo of whatever tasks they have finished. Any newly added feature, any new text box or button on the product’s web page, is to be displayed to their fellow teammates in a glorious demo session. Let’s say that an engineer is given the task of migrating some of the backend database tables into other tables. This is an intricate and complex task. When they ask their manager how they could possibly complete this database migration within 2 weeks, let alone give a demo session for it, they never get a straight answer. The manager might say to split the task into multiple tasks that could each be completed within 2 weeks. How one could evenly split a task like that into consistent 2-week chunks, or how one could give a demo session for each such mini-task, is never clearly explained.
For obvious reasons, very few of the engineers in the team are able to give a demo session every 2 weeks. In most cases, the engineers are never even able to complete all of their allocated tasks within the 2-week sprint period. When that happens, the engineers are asked why they couldn’t deliver. Most of them give excuses like “oh this task depends on something else, which is what I was waiting for”, or “this other thing came up which had higher priority”, or “it turned out to be more complicated than I imagined, I am still working on it”. Some of these excuses are actually very valid. A lot of times, the engineers are told to either split up the tasks into smaller tasks, or take on fewer tasks, which is all good advice. However, when one engineer takes on fewer tasks than another engineer, that makes them look bad, relatively speaking. So, no one really wants to take on fewer tasks, and everyone continues their time-honored tradition of not completing all the tasks within their 2-week sprints.
In the end, all of this looks like micromanagement. Which it totally is, of course. This is Agile being used as a micromanagement tool, in all its glory, by a clueless management.
This amount of micromanagement usually achieves the opposite of its intended effect on engineers. It does not increase their efficiency. Instead, it instills a profound sense of distrust. Engineers feel untrusted by their managers and stakeholders. They feel that their smallest efforts are scrutinized. They feel taken for granted for the work they deliver, and harshly criticized on the occasions when things don’t go well. Their job turns into a thankless one. As a result, engineers start losing all their motivation, and all their care for the software they produce.
When the software engineers lose all their care and motivation for their work, terrible things start to happen.
In this case, the engineers are quite lucky that the software they are working on will not actually kill anyone when it fails. It might just cause annoyance to their customers. And plenty of annoyance it does cause. Their servers fail even more frequently than they did before. Even more bugs show up in their system. The user reviews of their product take even more of a nosedive.
To the managers, all of this means that “the team is not implementing Agile properly”. Something more has to be done. New and more convoluted development processes and rules have to be implemented.
No one realizes that no amount of additional process or methodology can replace the care that engineers have for their products. Once the care is gone, it’s gone, and nothing can ever replace it.
Nevertheless, the managers come up with new processes. Even if they end up having nothing to do with the Agile methodology itself.
As part of these newly added processes, the engineers are now told to write highly detailed design documents, to be inspected very carefully by reviewers so that nothing is left to chance. The design documents now need to cover every module and every algorithm at unnecessary length. The engineers have to document every data structure transmitted in the system in excruciating detail, leaving no field out. Some of the design documents end up being 50 pages long. The design review meetings now involve highly detailed discussions of every aspect of the design. Some of these meetings take 3 hours to go over the design documents completely, discuss them, and approve them.
The Product Managers have to write long and very detailed requirements documents. The UX designers have to create highly detailed UI designs for the upcoming web pages. The QA team has to come up with extremely detailed test plans listing every test scenario. The development team has to hold lots of meetings with these folks to discuss the fine details of the code they plan to implement, and to refine these plans as necessary. All of these plans are reviewed with intense scrutiny by the management, including the director, and signed off. And yet, after all this scrutiny and deliberation, when it comes to the actual implementation, large parts of these plans end up undergoing considerable changes. A lot of the design docs end up going to waste.
Even after the extra (and completely futile) scrutiny in the design phase, things still don’t go according to plan. The number of bugs keeps increasing. The servers crash every other day. The on-call engineers receive pages and notifications for the server crashes in the middle of the night, every other night.
So, even more needs to be done. The managers turn their focus to the implementation phase of their team’s software development.
During the implementation phase, the engineers are now required to implement flags for every single new feature or code change. Any new software feature or sometimes even a simple update to the existing production code needs to be guarded by flags.
Flags are basically configuration variables that gate portions of a program: an if-statement executes the guarded code only if the flag is set to true. The flag values are stored in a configuration file, and they are usually passed to the server binary as command-line arguments when it first starts running. More sophisticated configuration systems can also push flag changes to an already running server binary, to be picked up immediately.
Flags are sometimes necessary, and have their use in software development. For instance, sometimes when you are rolling out a brand new feature, you may want to roll it out to a subset of customers in order to ensure it will be well liked and have the intended effect. Such gradual rollouts can be done using flags. If the customers hate the new feature, it is easy to turn off the relevant flags and roll back the feature entirely.
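The mechanics described above can be sketched in a few lines of Python. This is a minimal illustration, not a production flag system; the flag names, the in-memory config dict, and the hash-based bucketing scheme are all hypothetical choices for the sketch.

```python
import hashlib

# Hypothetical flag values; in practice these would be loaded from a
# configuration file or passed as command-line arguments.
FLAGS = {
    "new_checkout_flow": True,       # fully enabled
    "new_search_ranking": False,     # disabled: the old code path runs
    "redesigned_homepage_pct": 10,   # gradual rollout to 10% of users
}

def flag_enabled(name: str) -> bool:
    """Plain on/off flag check; unknown flags default to off."""
    return bool(FLAGS.get(name, False))

def rollout_enabled(name: str, user_id: str) -> bool:
    """Deterministic percentage rollout: hash the user id into a 0-99
    bucket, and enable the feature only for buckets below the threshold.
    The same user always lands in the same bucket, so their experience
    is stable across requests."""
    pct = FLAGS.get(name, 0)
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < pct

def render_checkout(user_id: str) -> str:
    # The new code path is guarded by the flag; flipping the flag to
    # False rolls the feature back without a code change.
    if flag_enabled("new_checkout_flow"):
        return "new checkout page"
    return "old checkout page"
```

Rolling back a misbehaving feature is then just a config change: set the flag to false and restart (or hot-reload) the server.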
However, in this case, the management wants the flags to be used indiscriminately. Every single new piece of code or feature has to sit within an if-statement, guarded by a flag. As the management has recently learned, Agile is all about incremental feature development. Management thinks: We may as well wrap each developed feature in a flag in the codebase. When a new feature is released into production, its flag gets turned on in its config file. If any errors are observed in production, the flag is quickly turned off, and the errors hopefully go away. This is the management’s quick band-aid solution to the multitudes of production issues that haunt their team’s systems and bring down their servers. Instead of fixing the bad development practices that are the root causes of these issues, the management decides everyone should use flags left, right, and center.
Flags are not meant to be abused. There is no need to implement a flag for every single code change. If you have a robust testing system in place, any glaring issues with code should be caught anyway. If you start abusing flags, they start cluttering the code. The code starts filling with countless if-else statements, which makes it harder to read and understand. This is actually considered a type of technical debt. Too many flags make the code deteriorate. And even after a feature is completely developed and released without any issues, the associated flags are usually not cleaned from the code base, and can potentially linger on for years.
There is an even more serious problem with using too many flags: They can cause issues in the automated testing process. When implementing an integration test, it is easy to forget to turn on all the relevant flags, which means the code protected by the flag is never actually tested. Too many flags mean there are too many combinations of them that need to be properly tested. Improperly tested code is unhealthy code that is prone to failure. Too many flags make test development more difficult, and contribute to the failing health of the codebase.
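The combinatorial cost is easy to see concretely. Here is a hypothetical sketch in Python (the flag names are made up): with n independent boolean flags, a test suite that wanted to exercise every guarded path would face 2^n configurations.

```python
from itertools import product

def flag_combinations(flag_names):
    """Yield every possible on/off assignment for the given flags.
    With n boolean flags there are 2**n such assignments."""
    for values in product([False, True], repeat=len(flag_names)):
        yield dict(zip(flag_names, values))

# Just 4 flags already yield 16 configurations to test.
flags = ["new_ui", "fast_path", "cache_v2", "retry_logic"]
combos = list(flag_combinations(flags))
print(len(combos))  # prints 16

# The growth is exponential: 10 flags -> 1,024 configurations,
# 20 flags -> 1,048,576. No test suite covers that exhaustively.
```

In practice, teams end up testing only a handful of these configurations, which means most flag combinations ship without ever having been exercised.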
Naturally, none of these new, unnecessary decrees by the management works. The servers keep failing, even more than before. Overworked engineers start looking for other, less dysfunctional teams or companies, and eventually leave this team in great numbers, taking their domain-specific knowledge and experience with them. The team and its projects keep suffering. The endless cycle continues, until the team and its projects are essentially no more.
Some of you readers probably realize that none of these pernicious processes I have described here has anything to do with Agile whatsoever. As a matter of fact, these processes are totally against the core Agile philosophy. They heavily hinder the iterative development process. Agile does not call for overly detailed design documents or flags for every new developed feature.
This entire hypothetical development methodology is a bad hybrid of Waterfall and Agile. It gets the worst characteristics from each. It is not Agile. It is Agilefall.
So some of you are thinking that we can do better. We can implement Agile in a better way. The adoption of Agile by this hypothetical team failed miserably, because the team was very dysfunctional to begin with. The managers were technically inept and implemented completely wrong processes in conflict with the Agile philosophy.
On the other hand, this makes me wonder: if a team is quite functional and productive to begin with, why does it even need Agile in the first place?
This particular question aside, let me address the initial question: Can we do Agile in a better way? Can it actually work?
Agile Can Fail Even When Done Correctly
As I already mentioned, I agree with some aspects of Agile, and I can certainly see why it had such an inspirational effect on a lot of companies and teams. On the other hand, I very much disagree with some other aspects of Agile. In time, I have come to the conclusion that the Agile process can fail spectacularly even when implemented in a “correct way”. To be more blunt, it is very easy for Agile to go down the slippery slope and turn into a micromanagement tool.
Estimation Points and 2-Week Sprints
The entire estimation-point and 2-week sprint system is the gateway to micromanagement. Needless to say, the Agile coaches claim that “estimation points are not equivalent to the number of days it takes to finish this task”. They say “if you cannot finish all your tasks within the 2-week sprint, just readjust your expectations when you pick up tasks from the backlog for the next sprint, and don’t take on so many estimation points”, and so on. In the end, however, estimation points are still a unit of measurement used to track each engineer’s progress during each 2-week sprint. An engineer who cannot complete as many estimation points looks bad compared to another engineer who can. This very much gives the impression that estimation points are a tool used by management to pit engineers against each other in a competition, in order to extract the most possible work out of them.
Many managers claim that estimation is a very important part of software engineering. It is true that in a large organization, teams need to coordinate with each other, and to coordinate effectively, they need a rough idea of how long each team will take to finish its part of the overall project. In such cases, estimation is indeed a necessary part of project planning. However, in all my career, I have never come across an instance where estimation was absolutely necessary on a scale of days or even weeks. In the overwhelming majority of cases, estimation on the scale of quarters was perfectly adequate. Even if one team missed its estimate by a quarter, it wouldn’t derail the project in any serious way. And to me, it seemed like the teams that micromanaged missed more quarterly estimates than the teams that trusted their engineers to do a decent job.
Fine-grained estimation is unnecessary, because wise and experienced teams use certain software design and development techniques to minimize the impact of their interdependence.
Here is one example: Let’s say there is a team A whose software depends on another software that is being developed by team B. Early in their projects, both teams can agree on a rough interface between their respective modules. Then they can develop mock objects or test fakes for team B’s module interfaces. This way, team A’s modules could call those test fakes in an integration test, instead of calling team B’s modules directly. This allows team A to develop their software without having to wait for team B’s implementation to be complete. Even if team B misses their estimation, team A can still complete their implementation and can even run some tests on their own software. And even if team B’s interface needs some changes during development, it would be fairly straightforward to make the necessary updates to the test fakes and to team A’s actual code in most cases.
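The pattern described above can be sketched in Python. The names here (PaymentGateway, FakePaymentGateway, OrderService) are hypothetical stand-ins for team B’s agreed interface and team A’s module; the point is only the shape of the technique.

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """The interface both teams agree on early in the project.
    Team B will eventually provide the real implementation."""
    @abstractmethod
    def charge(self, account: str, cents: int) -> bool: ...

class FakePaymentGateway(PaymentGateway):
    """Team A's test fake: records charges in memory instead of
    calling team B's real (and possibly unfinished) module."""
    def __init__(self):
        self.charges = []
    def charge(self, account: str, cents: int) -> bool:
        self.charges.append((account, cents))
        return True

class OrderService:
    """Team A's module, written against the interface, not against
    team B's concrete implementation."""
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway
    def place_order(self, account: str, cents: int) -> str:
        return "confirmed" if self.gateway.charge(account, cents) else "failed"

# An integration test that runs today, even though team B's real
# gateway is not finished yet.
fake = FakePaymentGateway()
service = OrderService(fake)
assert service.place_order("acct-42", 1999) == "confirmed"
assert fake.charges == [("acct-42", 1999)]
```

If team B later changes the interface, updating the fake and the call sites in team A’s code is usually a contained, mechanical change.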
Unfortunately, some managers seem to love using blatant tactics to create competition between engineers, whether it is 2-week sprints or stack ranking of employees during performance evaluation periods. Those managers believe they can extract more work out of their engineers with these tactics, and be viewed as successful managers who get work done. However, such tactics usually end up backfiring, creating a horrendous work environment full of politics, bickering, backstabbing, cliquishness, and worst of all, cronyism. In the long term, such teams become extremely dysfunctional and inefficient. It’s just not worth it. It is much better to create a team of individuals who are willing to mentor, help, and cooperate with each other. Organizations flourish and succeed with such healthy teams.
It would serve teams better for managers to simply trust their engineers. The good engineers always try to do their best and deliver everything on time with good quality. Micromanagement just pisses them off, demotivates them, and makes them care less about their work. The not-so-good engineers, on the other hand, will always find ways to skirt around the micromanagement anyway, giving a multitude of excuses at each sprint review. The best you can do with them is give actionable feedback, and give them some chances to improve.
Trust is built over a long period of time, and can be destroyed in an instant. Trust is also a two-way street. If the engineers see that their management trusts and believes in them, they will put in extra effort to earn that trust. Conversely, if the engineers realize that their management doesn’t trust them, they will stop making any effort whatsoever.
Periodic 2-week sprints may also not be so appropriate for all types of software development. Developing some set of features might take a week, while some others might take a month. It is true that one should always strive to split up a task into smaller tasks, but accurate estimation itself is ultimately an impossible task. There are always unexpected things that come up in development. You can always get the occasional curveball thrown at you in the middle of a development cycle, realizing that in order to complete your task, you first need to complete a bunch of other unexpected new tasks. Such things happen very frequently in software engineering.
Sometimes you might also be faced with a task which requires you to come up to speed on a new technology which you may not yet be familiar with. This could certainly throw off any kind of estimation. The Agile way is to file a task in the sprint that says you will come up to speed on this new technology. Good luck trying to assign estimation points to it. How could you accurately estimate how long you’re going to take to learn something that you know nothing about?
My advice is this: Keep the feature set in your to-do list small like Agile intends, but don’t limit your development cycle to a set-in-stone amount of time. Some features will take more time to develop than others, and that’s ok. There will be curveballs, and there will be occasions when you need to spend some time learning new technologies and coming up with a solution from scratch. That’s ok. That’s life. Just do your best to get to the next milestone in your incremental development cycle, where you can deliver the next working version of your software. If you are really stuck, ask your teammates for help immediately and discuss what’s blocking you. If yours is a well-functioning team, your teammates will be more than happy to help you out.
There is another important point I need to make: Creativity happens when a person is allowed to breathe. A lot of great products were invented because their developers were given time to think and freedom to pursue the things they wanted to work on. Their companies ended up flourishing way beyond the wildest dreams of their managers and executives. On the other hand, no such great creativity can be expected from an engineer who is constantly worried about how they are going to complete all their estimated tasks before the next 2-week sprint cycle.
Engineers don’t do so well when they are treated as some machine in a factory who needs to deliver a certain amount of work at every given cycle. They do a lot better when they are treated as autonomous and creative people, in whom their managers put their belief and trust.
Interactions Between Business and Engineering People
One of the 12 principles of Agile calls for: “Close, daily cooperation between business people and developers.”4 This is baked into the Agile and Scrum philosophies. There is a Product Owner who represents the business interests and attends daily standups with the engineering team as someone with business management and business analysis skills. Agile gives Product Owners the responsibility of making the final decisions on what work the engineering team needs to do in each and every sprint.
This sounds like a good idea at first. After all, who wouldn’t like close and daily cooperation between business and engineering people?
It is actually a terrible idea.
Engineering teams need to have a certain amount of autonomy. When a Product Owner is interacting with the engineering team on a daily basis, they start to have too much undue influence. They might even start to dominate the discussions and override the opinions of the engineers. In the end, this is another bad aspect of Agile philosophy that enables it to devolve into a micromanagement tool.
This might even create a situation in which a patronizing Product Owner/Manager openly scolds a team of engineers when they fall a bit behind schedule.
Business people and engineers should work as equals. Engineers should not feel subservient to the business people, or vice versa.
Engineers should be the ones driving the actual development process. It is literally their domain and what they have been trained to do. While business people and engineers should definitely coordinate with each other during the software development process, this coordination should not happen on a day-to-day basis. Again that just devolves into micromanagement. The engineering team members need space to discuss the software design with each other in privacy, without the presence and dominance of a Product Owner, so they can go through all the technical tradeoffs in a more objective fashion.
Meetings with the product people should happen at most every 2-3 weeks or so. The engineering team should thoroughly discuss any technical tradeoffs in the design internally, before sharing them with the product people. Likewise, the product team should be able to clearly explain the business tradeoffs in their product strategies to the engineering people. The communication between the parties should be clear and honest.
Another item in the 12 Agile Principles is: “Welcome changing requirements, even in late development.”
It is true that software design needs to be flexible. As I already mentioned, there are always unforeseen circumstances that require the software design to be altered throughout the development cycle. This might happen due to the engineers not being able to foresee all the possibilities where the implementation might take them, which is a very common occurrence. This might also happen due to the business people not seeing all the possibilities when drafting the requirements. Some requirements might change and evolve during the development cycle, which is also acceptable. It happens. Such is life.
However, business people should not arbitrarily change requirements too late into the development cycle. This is a terrible idea, and should not be welcome.
Major requirement changes late in the development cycle put a strain on the software architecture and design. The existing architecture may not be able to accommodate the new requirements and might need to change radically. This can create a lot of pain for the software engineers and architects, who may have to redesign and reimplement major parts of the software from scratch. The pain multiplies if major requirement changes become too frequent.
This could also heavily demoralize the team. Team members usually get demoralized when they learn that all the effort they put into their product was in vain, and all their work has to be redone. After going through 2 or 3 cycles of this, there would not be a single engineer left who is happy to be working in that particular team. Attrition would grow, and the engineers would find ways to transfer to other teams or companies.
Another bad idea is to keep adding more arbitrary requirements to the project in an excessive way. There is a term for this phenomenon: Feature Creep. (Also called Scope Creep or Featuritis.) These additional unnecessary features could cause bloat in the software, making it overly complicated and more prone to failure. Feature creep is found to be one of the most common reasons for cost and schedule overruns in projects.5
When deciding on the requirements for the software, the engineers and the business people must make the best effort to communicate clearly with each other. All parties should be as transparent as possible. Everyone should handle each other’s expectations as realistically as possible. Engineers should not move forward with the software design & development until they are completely on the same page with the business people.
While some of the responsibility is on the business people, some of it is on the engineers too. There might be cases where the engineers do not manage the expectations of the business folks properly. The engineers can end up overpromising features that they will not be able to deliver in the time they estimated. They might think they look good to their clients and save the day for the time being. However, this strategy inevitably explodes in their faces in the long term. Their clients will be very unhappy when the casually promised features fail to materialize months or quarters down the line.
Engineers should always be honest and realistic in their dealings with the product people. In the long term honesty is always better than trying to look good to your clients and failing miserably.
And if the business people or the management come to the engineers with unreasonable demands, it is the duty of the engineers to push back.
Unfortunately, I haven’t come across anything in the Agile philosophy that addresses these concerns. To this day, I haven’t observed any considerations in the core philosophies of Agile, Scrum, or Kanban regarding overbearing product owners, or major and frequent requirement changes late in development. The Agile philosophy at its core seems to be clearly lacking.
Design Documents and Other Documentation
Some of the Agile practitioners seem to think that a design document is not really necessary. After all, the original Agile manifesto says “Working software over comprehensive documentation.” According to this philosophy, the code itself should be self-explanatory.
This is wrong. While your design document certainly should not be unnecessarily long or excessively detailed, it should still be written before the actual software implementation. A good design document is a valuable part of the software development process.
When designing any reasonably complex software, there are always decisions to be made. And for each decision, there are tradeoffs. The design document is where those various decisions and tradeoffs are documented. The design document should clearly explain the tradeoffs and the reasoning behind the choices made, as well as their potential consequences. It would be impossible to convey this kind of information in the software code itself, no matter how clean it might be.
On the other hand, it would be wrong for the design document to go into excruciating detail. Due to the nature of iterative development, the design is most likely going to evolve moving forward. Explaining the major tradeoffs and the major decisions in the design document should be enough. What counts as a major decision, and what can be left out of the design document, is a matter of judgment for the writer of the document.
It goes without saying that a design document should only be prepared after the requirements are precisely understood and confirmed. Engineers should first talk with the business people and get on the same page. A clear requirements document could help here.
These same principles apply to the requirements document and the UX/UI design mocks as well. While these documents might be necessary, there is no need to write them in too much detail. The requirements document and the UI design document should allow for some change and evolution during the iterative software development process.
Other Miscellaneous Issues With Agile
One of the ideas popularized with the Agile methodology is pair programming. This is where two engineers sit together and work on the same project for hours: one types on the keyboard while the other offers ideas. Luckily for me, I never got to experience this new brand of torture. I would like to work at my own pace, gathering my own thoughts, focusing on the job that I’m doing. Sitting with someone else for hours trying to get work done would be highly distracting. Some might think pair programming with a senior engineer could help a junior software engineer learn faster. I doubt that this happens too often. I think it is a better idea for a junior engineer to ask a senior engineer their questions, and then do their work at their own pace with the guidance they receive.
Last but not least: Agile seems to have an emphasis on shared code ownership. This implies that the entire team is responsible for the entire collective codebase, and nobody can claim ownership of any particular part of the code. I personally believe this is another terrible idea from the Agile philosophy. I have seen examples of shared ownership resulting in horribly maintained code. The class with 6,000 lines of code that I mentioned at the beginning of the book is one of those examples. No one bothered to refactor that class in all that time, because no one was its designated owner. Everyone thought someone else would take care of it, which never happened. And the class kept growing and growing into that monstrosity. This phenomenon is called the “Tragedy of the Commons”.6 I have another section of the book dedicated to the proper incentivization of engineers, where I make the argument that strong ownership of code (with certain caveats) is an important part of incentivization, and an important part of defending software quality.
Apply Only The Essential Development Processes
Certain processes are an important part of software development: Iterative development is a necessity. Good automated test coverage that includes unit, integration, and end-to-end system testing is essential. Peer code reviews are a must. CI/CD (continuous integration/continuous delivery) processes really help.
However, after a certain point, development processes stop being effective and start turning into a hindrance. Some managers love creating new processes to help justify their existence, notably in bloated companies that have hired way too many of them. (Please see the previous chapter about the phenomenon of Management Bloat.) These extra unnecessary processes have a way of turning into tools of micromanagement. At the very least, they might make the engineers feel like they are micromanaged, which is a very demoralizing thing in itself.
The best software organizations are those that implement just the essential development processes, and nothing more.
It pays to reiterate: In the end, no amount of process can replace the care an engineer has for the quality of their product. If you want quality software products, hire competent engineers who have integrity and who care, provide them with good training and resources, and then get the hell out of their way when they are doing their jobs.
“Principles behind the Agile Manifesto.” Manifesto for Agile Software Development, 2001, https://agilemanifesto.org/principles.html. Accessed 11 October 2023.
Petersen, K., et al. “The Waterfall Model in Large-Scale Development.” Product-Focused Software Process Improvement, vol. 32, 2009, pp. 386-400. 10.1007/978-3-642-02152-7_29.
Darwin, Charles. On the origin of species. J. Murray, 1859.
See footnote 1.
Davis, Fred D., and Viswanath Venkatesh. “Toward preprototype user acceptance testing of new information systems: implications for software project management.” IEEE Transactions on Engineering Management, vol. 51, no. 1, 2004, pp. 31-46. 10.1109/TEM.2003.822468.
“Tragedy of the commons.” Wikipedia, https://en.wikipedia.org/wiki/Tragedy_of_the_commons. Accessed 21 November 2023.