TLDR: if you do sprints, you might not actually be doing Agile. At the same time, separating your process into development and delivery phases doesn’t necessarily mean you’re doing it wrong. However, hiding the methodology you actually practice under labels may do more harm than good. And if your organization’s development process performs best linearly, don’t bury it under agile trappings.
In modern software-related discourse, “waterfall,” “monolith” and “yearly releases” are synonyms for failure. It seems that, unless you’re doing some variant of agile development (even in name only 😉), you’re doing it wrong.
So why is that? And is “waterfall” a straight road to disaster as it is so often portrayed?
To find out, let’s first look at the early years of software development.
The first software programs were written not by Computer Science graduates but by electrical engineers. These engineers didn’t use IDEs or copy from Stack Overflow – the earliest programs were carefully designed by hand, typically with the circuitry and configuration of a particular computer system in mind. After writing and validating the algorithm and its operations, it was time to run the program. Execution also took a long time (though still faster than a human could manage), and computing time was shared among colleagues and peers. Thus, if your program contained mistakes or crashed, you had to go back to the drawing board and retry when your turn in the queue came around again.
Fun fact: the first patch to a software program was a literal patch over a punch-card hole.
And the first bug was a literal insect found between the circuits.
Back then all these elements (planning, design, implementation, testing, and execution) took a lot of time. Since the cost of mistakes was quite high, careful execution was key.
Naturally, when writing programs became more involved and required an organized project effort delivered by groups of people, the chosen development methodology reflected that. It copied the careful, thorough approach normally found in real-world engineering disciplines, where the cost of a mistake is high: wasted electricity, lost time, and ruined physical materials.
A “Waterfall[1]” methodology follows a linear flow, normally comprising:
[1] An important note – Dr. Winston Royce, whose 1970 paper described the model now known as Waterfall, did include opportunities for feedback (Royce, W. W. “Managing the Development of Large Software Systems.” In Proceedings of IEEE WESCON, pages 328–338, Los Angeles, 1970).
This approach started to show its flaws as software development grew more and more complex – engaging with this new thing called “the Internet,” being built on top of different technologies, and sometimes even requiring integration with third-party software packages.
And so the long planning and implementation cycles collided with the speed of technological advancement, which often resulted in:
At the same time, computers and skilled people became widely available and execution cycles got cheaper. Alongside that, software development slowly began to embrace concepts such as automated testing, automated provisioning, and version control, culminating in a realization:
Making mistakes during software development no longer has to cost you a lot of money – provided you catch, validate, and fix them quickly.
Agile practices officially entered mainstream software culture at the turn of the century with the publication of the “Agile Manifesto” in 2001. While the underlying ideas were not new, it was the first successful movement to bring a clear focus to earlier practices like Kanban, Scrum, and Extreme Programming. It prioritized collaboration, delivered functionality, and flexibility in planning over rigid plans, extensive documentation, and reliance on tooling and process.
Most agile methodologies stay true to these principles. Work is delivered in iterations, allowing more frequent user feedback and course correction throughout the development cycle. Certain methodologies, like Scrum, focus on releasing a functioning product increment with every iteration, however small the development team can make it – the important part is that it is released.
Essential elements of Scrum methodology
Since then, Agile practices have rightfully become the “default” choice for software development projects, thanks to their apparent benefits. Stakeholders see a working product delivered gradually with every iteration, while developers can course-correct, improving their internal and external collaboration and minimizing post-release work.
So far so good, right?
As you probably know from experience, in the real world many projects are still delayed or struggle with feature launches, users don’t always get what they asked for, and bugs still end up in production. So where is the problem?
There is, of course, no single issue that causes mishaps in software development, yet when it comes to methodologies a common one can be summarized as follows:
A methodology is too often applied as-is, without a holistic view of the needs of the company, the team, the product being built, or its requirements. The methodology (especially when it’s hyped and shiny) becomes a “silver bullet” that “will make us the next FAANG[2],” is applied broadly to every aspect of the company, and is often never reconsidered.
Seven years ago, every second post on LinkedIn tried to work “blockchain” into its product description. Then it was “microservices,” then “machine learning,” and in 2019 it was “AI.” Some of this can be explained by a marketing department keeping the company in the discourse by following the latest trends – but if “AI” brings no benefit to your business functionality, leave it out of your next product-goals meeting.
The same applies to a methodology – when it is simply pushed onto a team without prior research (or because it has a “cool name,” like the Spotify model, which was never actually used at Spotify), it becomes a burden on the team. Ask yourself: “Have I ever encountered any of the following in my day-to-day work?”
[2] Facebook, Apple, Amazon, Netflix, Google
This happens when a company operates in one methodology but “pretends” to follow another. The most basic example is turning every step of the waterfall process into a collection of sprints for the sake of being “agile.”
In the end, you get the benefits of neither and the drawbacks of both.
So, when no methodology is a silver bullet, how do you choose and apply one for your team? Let’s lay some facts on the table.
Looking broadly at Agile and Waterfall, the differences can be summarized as follows:
To identify the most effective approach, ask yourself the following questions before starting a new project:
If you answered yes to all of them, “Waterfall” can actually be a good choice: you can execute your project continuously, with a known context, destination, and set of tools and resources.
But if you answered these questions with something like:
You might lean more towards “Agile”: implement, release, and show the product to your customer, regroup, rethink, and try again.
In the end, each situation poses its own challenges and opportunities, and dogmatic thinking about software development approaches doesn’t help. Perhaps it is time for modern companies to step away from sticking to one rulebook and take a more “gearbox” approach to delivering software products, shifting into the appropriate gear depending on how fast (or how carefully) they want to go.