The story of how Agile working took over the world:
In 2001, at a small resort in Snowbird, Utah, nestled in the mountains just east of Salt Lake City, a group of senior software engineers and architects from a number of development companies gathered, driven by a shared set of problems they had found endemic to their operations. Their belief at the time was that software development processes had stagnated because building software was treated the same way as the construction of an aircraft carrier or a massive convention center.
Specifically, the process (or methodology) in use was collectively known as the Software Development Life Cycle (SDLC), though informally it took on the name Waterfall Methodology, because in theory each stage of the process should cascade naturally into the next.
These architects described such a process as prescriptive - in effect, the design is worked out completely before any coding begins, and by the end of the project everything should work out perfectly. Their experience, however, was that nothing ever worked out perfectly, and that by the time this realization was made, millions or even billions of dollars had been wasted in the process.
The group at Snowbird put together a manifesto describing the twelve fundamental principles of what they dubbed agile software development, and like Luther's Ninety-five Theses that laid the foundation for Protestantism, this summary would become known as the Agile Manifesto.
The principles of Agile working are:
- Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
- Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
- Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
- Business people and developers must work together daily throughout the project.
- Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
- The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
- Working software is the primary measure of progress.
- Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
- Continuous attention to technical excellence and good design enhances agility.
- Simplicity--the art of maximizing the amount of work not done--is essential.
- The best architectures, requirements, and designs emerge from self-organizing teams.
- At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
Fast forward twenty years. Agile working has become so mainstream that Waterfall Methodology has become a pejorative. There are agile teams, agile workshops, agile tchotchkes, and agile books, and polo-shirt-wearing agile coaches and scrum masters have replaced besuited business consultants.
Agile working differs from Waterfall in a few key ways. One of the first is the shift away from prescriptive design to adaptive design — you build towards a short term goal, compare it with where you want to be, then adjust what you're working on to better reflect a new understanding of the goals at hand. Your business cycles go from six-month intervals to two-week intervals.
The assumptions behind Agile working:
- that business people must remain in the loop throughout the process,
- that teams work better than individuals,
- and that software developers, being closest to the project, should ultimately be given more power and control over how the software is created.
I was not at Snowbird in 2001. In the late 1990s I had written a set of articles about the limitations of Waterfall methodologies called The Architect and the Gardener (now lost to the pre-blogging world of the early Internet), and I actively cheered in print when the Agile Manifesto was published, because it addressed many of the same issues I'd been seeing.
Yet over the years, as I worked on a number of very large projects as a data architect and then CTO, I became disenchanted with the Agile Manifesto and where it was going. While Agile was never directly implicated in the failure of a number of high-profile projects, it was almost invariably a part of those projects.
What's more, even as its champions vehemently denied that there were problems, they often made an argument commonly seen in religious communities: the failure was due to not following the Manifesto closely enough. This attack on the purity of the participant, rather than on the methodology itself, was ultimately what made me rethink my previous stance on the Agile framework.
Over time, I began to put together some observations about the nature of software development that I've tentatively identified as the Tao of Project Management.
It does not by itself invalidate Agile; it recognizes that Agile works best in certain domains, but that trying to force-fit everything into an agile model can be just as bad as trying to make everything a waterfall process.
I treat these not as dictates, but as koans.
Below I will outline some of the downfalls high performing teams often encounter when Agile working, and my recommendations for overcoming them.
Lack of consistency in vision
A sculptor may free the sculpture from the stone, but no two artists will free the same sculpture.
Consistency of vision matters. There's a very egalitarian viewpoint in the Agile Manifesto, and while I normally believe that egalitarianism is good in social structures, when it comes to creation, I've very seldom found that creation by committee actually works.
Many of the most successful software projects started out as the brain-child of one person:
- Tim Berners-Lee laid the foundations for the World Wide Web,
- Ted Codd put in place the principles that would establish the relational database,
- and Linus Torvalds was instrumental in creating Linux and guiding it through an arduous path to acceptance.
This is not to say that groups or teams are unimportant. What makes them important, however, is that they are capable of working with the creator to refine the original idea, to test, enrich, flesh out, and describe the original concept.
All too many projects fail because there is no one consistent vision, no arbiter who decides whether something falls within or outside the scope of the project at hand. Without that arbiter, the original vision is never established - a case of too many cooks spoiling the broth.
Recommendation: On projects, designate an 'author'.
This is the person who is passionate about the vision of what the project should be. In the studio world, this person is frequently called the writer, showrunner, game designer, or director, and in that particular universe, they ultimately determine what will be produced.
For every task, there is a season.
When the Manifesto was first written, software development covered a very broad domain of activities because, in order to perform those activities, you generally had to develop the software tools as part of the process. That is far less true in 2020 than it was in 2000.
To illustrate this, consider again a game or movie. Are 3D modelers software developers? In the very broadest sense they are because they are creating virtualized models, but the process typically involves:
- first establishing a broad look and feel,
- then utilizing a software modeling package like Maya,
- and finally mimicking the actions of a sculptor through the software.
The game mechanics are certainly a programming activity, but even there, it is likely that these mechanics involve using tools (e.g., the Unreal Engine) to parameterize certain actions in a pipeline.
People who design shaders are generally very skilled and may very well be perfectly capable of building algorithms, but even there, what they are doing is building tools that then allow the rendering pipeline to take the animated meshes and paint them into a scene.
The same process can be seen in data-centric applications. A typical data science shop has different stages in the data lifecycle:
- from acquisition to cleansing to validation,
- and from there to entity extraction,
- semantification (putting the information in context),
- data analytics,
- and ultimately decision-making.
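The staged lifecycle above can be sketched as a pipeline of independent processing functions, each of which a separate team (or tool) might own. The stage names, functions, and data here are hypothetical - a minimal sketch of the factory-oriented flow being described, not a real data-science stack:

```python
# Hypothetical sketch of a staged data lifecycle. Each stage is an
# independent function; a record passes through the whole pipeline the
# way a partially finished product passes between specialized teams.

def acquire() -> list[dict]:
    # Acquisition: raw records, warts and all.
    return [{"name": " Ada Lovelace ", "born": "1815"}, {"name": "", "born": "?"}]

def cleanse(records):
    # Cleansing: strip whitespace, drop records with empty names.
    return [{**r, "name": r["name"].strip()} for r in records if r["name"].strip()]

def validate(records):
    # Validation: keep only records with a numeric birth year.
    return [r for r in records if r["born"].isdigit()]

def semantify(records):
    # Semantification: put the information in context (here, typed fields).
    return [{"person": r["name"], "birth_year": int(r["born"])} for r in records]

def pipeline():
    records = acquire()
    for stage in (cleanse, validate, semantify):
        records = stage(records)
    return records

print(pipeline())  # one clean, contextualized record survives
```

Note that no stage iterates on the others' output: each refines the product in its own way and passes it along, which is exactly the factory pattern that sits uneasily with Agile's assumptions.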
Agile simply does not apply as much when you are in the realm of tool-users or subject-matter-experts.
Agile is an iterative process - it assumes that the same team will keep refining a single product, when in point of fact most processes today are more factory-oriented: each "team" (quite often a single individual, and in more and more cases no person at all) takes a partially finished product, refines it in its own unique way, then passes it along.
As that iterative core disappears, the case for Agile working weakens considerably.
Recommendation: Examine the way that your organization works.
See whether teams are in fact being utilized for discovery activities, where Agile working can work, or simply for processing activities, where Agile can be an impediment (the wrong goals get stressed and resources get misallocated).
It could be that a better deployment would be to reorganize around specific tasks that can be varied somewhat but generally follow clear parameters and requirements.
A golden spike does you no good when the tracks don't meet.
In 1869, not far from the site of that historic meeting at Snowbird, project foremen for the Union Pacific and Central Pacific lines made a disturbing realization. The two lines (which together would become the first transcontinental rail line in the United States) were to come together in an elaborate ceremony attended by the various bigwigs. However, the engineers realized that the math was a bit off: when the tracks ultimately reached the same location, they would miss one another by about fifteen feet.
Word passed back up the chain, and the decision was made at the highest levels to delay the ceremony by two days while both track-laying teams worked feverishly through the nights to adjust the last mile of track on one side and fix the problem.
When the golden spike was finally struck, the problem had been solved, but every project manager knows full well the nightmarish feeling you get when the boss is coming to see your work and you've goofed.
This was an integration problem, and it is a problem that all too often becomes an issue when you have multiple Agile teams working in tandem.
Agile teams become very adept at working within their own domain, but when they need to coordinate with other teams, things fall apart.
This is a systemic problem, and a hard one to catch unless you have some kind of architect whose primary purpose is to make sure that such communication takes place at the time it needs to.
Agile does not say much about such architects, because its focus is squarely on team-driven software development - there is very little, in fact, about integration, because integration by its very nature falls outside of team activities.
Integration, however, is increasingly the norm for software projects as the scale shifts from application-centric to enterprise-centric development. This means that:
- software is increasingly dependent upon classes of entities and relationships between those entities that people have little to no control over,
- setting up a services architecture is not just a local issue but ultimately requires integrating with existing standards,
- programmers are increasingly having to think holistically, and that in turn means having specifications that they do not control.
Role inflation has also played a part. It is not at all uncommon for larger projects to end up with teams of "architects" who see the role as a pathway into management. Sometimes they view it as a "super-programmer" or software-designer role (which, technically speaking, is closer to the author role described above).
In many cases, they add very little value to the project, and I take it as a warning sign when I see a broad team of architects on the manifest.
Recommendation: Put your architects into the role of 'integration specialists'.
These are the people that should be coordinating between teams to ensure that there is consistency in data standards, application interfaces, and user experience. They shouldn't necessarily be doing the work themselves (though often they may end up doing just that) but rather should act to make sure that integration concerns are always at the forefront of development (instead of an afterthought).
This, by the way, is the role that editors play. Most people view editors as people who proof content, but that's usually a secondary function. Instead, editors are typically there to ensure:
- that the work the various teams are performing is moving in a consistent direction,
- that communication takes place,
- that problems arising at a systemic level are addressed,
- and that the product ultimately achieves its objectives.
It's a management role (perhaps THE management role) but it also requires the technical expertise to see problems before they become unmanageable.
The outdated usefulness of the 'MVP'
It may be possible to live in a half-built house, but would you want to?
One of the key concepts involved with Agile working is the notion of Minimum Viable Product, often abbreviated to MVP. Minimum viable product is a conceptual standard, implying that at any given point in the process, it should be possible to take the product from the development team and have it work well enough to be functional.
The problem with this notion is that the product generally doesn't work well until fairly late in the development cycle. And even more to the point — it is becoming less and less relevant to the modern world.
When you are developing a module or component, it is standard practice to develop unit tests that test not only the functionality of the component but the degree to which it handles inputs and outputs from the rest of the system.
Even then, it is very typical for the developer to create test or dummy data to feed the components, because the data they will actually be dealing with is not yet available. The problem with dummy data is that it is, almost by definition, not reflective of real-world conditions, and as such the components being tested are likely to fail when real data actually arrives.
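As a hypothetical illustration (the component, its data, and its test are all invented here), consider a parser exercised only against tidy dummy data: the unit test passes, yet the component fails the moment messier real-world input shows up:

```python
# Hypothetical component: parse a price string like "19.99" into cents.
def parse_price(text: str) -> int:
    dollars, cents = text.split(".")
    return int(dollars) * 100 + int(cents)

# Unit test built on dummy data the developer invented -- it passes.
assert parse_price("19.99") == 1999

# Real-world data rarely looks like the dummy data. Each of these
# plausible inputs breaks the component in a different way: a currency
# symbol, a European decimal comma, a whole-dollar amount.
for real_input in ["$19.99", "19,99", "19"]:
    try:
        parse_price(real_input)
        outcome = "parsed"
    except ValueError:
        outcome = "failed"
    print(real_input, "->", outcome)  # every one of these fails
```

The green test suite is telling the truth about the dummy data and nothing at all about production.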
This hearkens back both to doing development appropriate to the material at hand and to the issue of integration: in attempting to build a working prototype, the changes that real-world conditions force guarantee that this minimum viable product, well, isn't.
The biggest downfall of the MVP:
The MVP can give a false sense of security to a client, who believes that the application is ready to go in its current state. This typically leads to a lot of friction: the client, having seen something that appears to work, becomes more and more agitated as the apparent progress stalls.
In the worst-case scenario, the team can end up believing this too, meaning that critical work doesn't get done because there is so much emphasis on demos. This is a lot like the testing conundrum in education: if you don't test enough, there's no real way of measuring progress, but if you test too often, you're not actually teaching a subject, you're teaching to the test.
An honest reflection — no method is perfect.
Data-centric applications are generally notorious for this problem, by the way. These require a great deal of design, of testing, of integration, and typically do not provide a lot of feedback early on. At some point, however, there's enough structure and information that the switch can be flipped and the client can actually see the working product at a stage where it really is minimally viable.
Note that this holds true in the Studio Model approach as well. In a movie, for instance, scenes are not necessarily shot in order, and while there may be some iteration, in general the pressure is there to do as few takes as possible while still getting a good set of potential scenes, then only shooting new takes if what was shot didn't work.
The film as a completed work generally will only exist in the director's head until about 80% of the way through production, at which point, "things come together magically" — or they don't.
Modeling makes it easier to push some of this production into a rendering post-production pipeline, but even there, iteration is only used when absolutely necessary. And even then, the iteration only happens fairly late in the process.
Recommendation: Resist sharing progress updates merely for the sake of sharing an update.
When writing a book, it is typical to write several drafts, sometimes with dramatic differences, but this is usually not a team effort. The process of producing a book is usually one where the minimum viable product is only feasible late in the process.
This holds true for almost all intellectual works, including software; the primary beneficiaries of MVPs are stakeholders who are more concerned about their investments than about the products being produced.
As such, the metrics involved in determining progress are only really meaningful once a product is well underway. Resist the urge to show intermediate progress unless there is tangible benefit for the product, not just the investor.
Change is a constant, but the cost of change is not.
One purpose of any software methodology is to manage change effectively. The charge the original creators of the Agile Manifesto made was that Waterfall did not handle change at all. However, this was never quite as true as that particular document made it out to be.
First, it's important to understand that change is not really a software problem - it's a decision problem. With software, the reasoning goes, working with the concept of an MVP lets you realize that there are issues with the current approach and make changes at relatively little cost early on, whereas in situations with a much longer reporting and evaluation cycle, change can be costly.
One aspect of design is that you can effectively model a piece of software early on and explore its ramifications before it ever gets committed to code. Once committed to code, you build up dependencies in that code, and costs begin to rise because changes impact other systems. This is, in fact, part of the purpose of the proof of concept (PoC).
The hidden cost of not embracing 'proofs of concept':
PoCs are not full-featured entities. They are, instead, something that makes it possible to preview what a piece of software could do. A painter would call a PoC a sketch, a sculptor would use the term maquette, while a game designer would call it a demo or storyboard.
The production of such PoCs is perhaps the closest thing to being "Agile" in the Studio Model. They allow the author to play "what-if" games, to consider various scenarios, and to decide, based upon those scenarios, what would be the best approach to take before committing significant resources to the final product. They are part of the design process.
There's a certain amount of redundancy and "waste" that goes into the Studio Model, though it's not really wasteful in any meaningful sense. Sometimes, what works on paper will seem awkward for a given set of actors, but this can only be discovered by experimentation. Sometimes, an actor will spontaneously say or do something that just fits, even if it wasn't in the original work. It is here where experimentation pays off.
In 2000, the cost involved in making such PoCs or exploring alternatives was high enough that companies were resistant to building them, despite their obvious benefits. Today, the speed of computer systems and the sophistication of tools make it far easier for a small team (or even an individual) to make PoCs at marginal cost.
That means it’s now considerably cheaper to experiment with multiple approaches, and then pull the ones that work best together. This is not agile in the traditional sense — you're not creating iterations, rather, you're conducting experiments and expanding the range of options available to you in building that software.
A final point on the 'move fast, break things' adage often embraced by Agile teams:
The cost of change in a broader project increases dramatically the closer you get to completion because of systemic interdependency costs. Changing a story while it is still in manuscript form is relatively inexpensive. Changing it after it has been laid out, proofed and scheduled for printing becomes a much more difficult process.
The old adage about measure twice, cut once is just as true in contemporary, sophisticated projects, and this means that spending more time in the playful, experimental proof of concept stage is almost always a better proposition.
Recommendation: Embrace the importance of the design stage.
Design is the playful part of production, but it is no less important because of that. Rather than going vertical with an Agile approach, task each team with creating different PoCs to explore possibilities, then build "genetically" by taking the best of each solution. Repeat as necessary, then commit your resources moving forward.
Most non-tech managers would rather sit through a root canal than sit through a scrum.
Here's a secret: the average business manager has absolutely no clue what they want in a piece of software (or almost anything else for that matter). They often want:
- What they already have, but with the new buzzwords and features that their competitors have.
- A magic box that will give them the answers that they want (with accompanying charts, graphs, and tables) when they speak a question.
- Something that doesn't require them to master a whole new product with too many complex knobs, dropdowns, menus, and so forth.
That's also what customers want. The role of the software developer is to hide the complexity of getting there. This is one of the reasons why Agile projects fail as often as they succeed.
Managers are often not interested in seeing the day-to-day, or even sprint-to-sprint, evolution of their products. Typically, rather than management being involved (principles #1 and #4), teams end up working with Subject Matter Experts (SMEs) who understand the technical requirements but usually lack the authority of a management champion.
The most common reason projects fail:
In my experience, more projects die due to champion failure than for any other reason. The champion for a project gets promoted or resigns. Champions can also become absentees: there on the org chart, but viewing the project as simply one of several in their portfolio.
This particular problem is not limited to Agile projects of course, but in many respects such projects are more vulnerable to management shifts, because they require more involvement by management in determining direction.
The Studio approach differs dramatically from Agile in this respect. For starters, the emphasis shifts from undifferentiated teams that each work on one component to differentiated teams that each work on one aspect of a given product as it passes through all the teams.
A movie studio will typically have several films in production at once, staggered in such a way as to ensure that a team is available to do filming on movie B even as movie A moves into post-production. The producer, in this case, is the champion, working to see that the director (the author) has the funds necessary to see the film to completion.
The producer here is not going to generally spend time working with the various teams. That's not his mandate. Instead, the producer is directly invested in the director achieving her visualization in a successful manner (i.e., in a manner that pays back the production costs and makes a tidy profit).
Recommendation: Organize your teams along a continuous production model.
In this model, each team works on one aspect of a project while other teams work on different aspects of other projects.
The champion and author then both focus upon one project at a time, working with different teams as they move through the process. As technical expertise shifts from general programmers to SMEs with deep technical knowledge, this process only becomes an easier and more natural fit and works better when dealing with distributed workforces.
The success of the Agile working "brand" has led to its being applied increasingly to situations where it is not all that applicable. This has become more apparent as IT workflows shift away from software production and towards intellectual property and knowledge processing, in which specialized teams work in a staggered, continuous development pattern.
This model, which I’ve referred to as the Studio Model, emphasizes:
- the power of prototypes and proofs of concept,
- the importance of a single author who serves to establish the narrative that defines the product,
- and the role of continuous integration through a specialized team of architects.
Agile working may not be a high-performing team's best option if it wants not only to survive, but to actually be competitive in the future of work.
My best advice is to make sure you invest the necessary time to continually refine your team's work methods and roles — and base your decisions on what actually works for your team, rather than simply what's trending.
Kurt Cagle is a writer, data scientist and futurist focused on the intersection of computer technologies and society. He is the CTO of Semantic Data Group, a smart data company. He is currently developing a cloud-based knowledge base, to be publicly released in 2020.