Agile and continuous delivery

Continuous delivery is one of a number of software development techniques intended to reduce the time between working on a change and getting it into production. Martin Fowler explains the name as follows:

We call this Continuous Delivery because we are continuously running a deployment pipeline that tests if this software is in a state to be delivered.

These days, with the rise of production automation (for instance, as part of a DevOps way of working), that pipeline will not only test the software but will also build the artefacts later used to deploy it to production.

Most of your processes still work

If you think about what it means for a piece of work — say adding a ‘logout’ feature to a website — to be completed, you’ll likely come up with something like the following aspects:

  • feature has interaction, visual &c design completed

  • code is written, including automated tests for the new feature

  • user documentation is written (including any release notes required)

  • test suite passes

  • sign-off acquired (both for the code, via any code review process, and for the feature itself, signed off by the product manager or similar)

Scrum calls this a ‘definition of done’, defined as:

Definition of done: a shared understanding of expectations that software must live up to in order to be releasable into production.

This isn’t any different under continuous delivery, but because an automated delivery pipeline now builds our deployment artefacts, a successful pipeline run needs to be part of that definition. For instance, if you use Docker to deploy and run your software, your delivery pipeline will generate Docker images and upload them to an image repository, ready to be used in production.
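
As a concrete sketch of such a pipeline step (the image name, registry and tooling here are assumptions, not prescriptions), building and pushing an image might look like this in Python:

    # Build a Docker image tagged with the commit SHA and push it to an
    # image repository. "registry.example.com/myapp" is a hypothetical name.
    import subprocess

    def build_and_push(commit_sha: str,
                       image: str = "registry.example.com/myapp") -> str:
        tag = f"{image}:{commit_sha}"
        # Build the deployment artefact from the repository's Dockerfile.
        subprocess.run(["docker", "build", "-t", tag, "."], check=True)
        # Upload it to the repository, ready to be pulled in production.
        subprocess.run(["docker", "push", tag], check=True)
        return tag

In practice this would be a step in whatever CI system you use, run automatically on every merge.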

However, there can be some challenges in blending continuous delivery with an agile process.

Everything downstream of your merge should be automated

Say a software engineer on your team, Ashwini, has picked up some work, written the code and tests, and wants to move forward. If the work comes back some days later with issues, it will pull them away from whatever they’ve moved on to in the meantime. We want to avoid that.

A common process is for a software engineer to do some work, be actively involved in code review, and then for the code to be merged. At this point the delivery pipeline can build deployment artefacts, run automated tests and finally mark this work as ready to deploy. Unless the tests fail, there shouldn’t be any way that work moving through the delivery pipeline can revert to Ashwini.
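
To make the shape of that flow concrete, here is a toy model of it; the stage functions are stand-ins for real build and test tooling, and the point is the structure, not the detail: once the code is merged, every stage is automated, and a failing test is the only route back to the author.

    # A toy model of the post-merge flow. The stage functions are
    # illustrative stand-ins, not a real pipeline API.
    def build_artefacts(sha: str) -> str:
        return f"registry.example.com/myapp:{sha}"  # pretend we built an image

    def run_automated_tests(artefact: str) -> bool:
        return True  # stand-in for the real automated test suite

    def run_pipeline(sha: str) -> None:
        artefact = build_artefacts(sha)
        if not run_automated_tests(artefact):
            print(f"{sha}: tests failed; back to the author")  # the only reversion
            return
        print(f"{artefact}: ready to deploy")  # no human gate after merge

    run_pipeline("4f2a9c1")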

However, as well as code review, there are usually some human sign-offs needed. For instance, there may be some manual testing that either cannot be, or has not yet been, automated. The product manager is likely to want to sign off on work before it is allowed to go live.

Ideally you want those sign-offs to come before merge. Since code review should generally not throw back huge changes (assuming everyone knows what they’re doing and the team is working well together), you can often get product owner sign-off first, then have any manual testing processes run in parallel to (and perhaps in collaboration with) code review.

I’ve seen teams get product owner sign-off by having engineers do ad hoc demos at their desk. Often QA engineers will drop in as well to give immediate feedback from their point of view. For a large piece of work you may want to do this multiple times as the engineer gradually works through everything they have to do.

This can fall apart if the manual testing takes too long, which is another good reason to automate as much testing as possible. If a QA engineer can spend their time on a particular piece of work writing automated acceptance tests rather than testing manually, that work can often happen alongside the software engineer’s, probably in direct collaboration.
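
For instance, an acceptance test for the ‘logout’ feature from earlier might look something like this sketch, using pytest and requests; the URLs, credentials and redirect behaviour are assumptions about the application, not givens:

    # Sketch of an automated acceptance test for the logout feature.
    # BASE_URL, the endpoints and the credentials are all hypothetical.
    import requests

    BASE_URL = "https://staging.example.com"

    def test_logout_ends_the_session():
        session = requests.Session()
        # Log in first (assuming a form-based login endpoint).
        resp = session.post(f"{BASE_URL}/login",
                            data={"user": "qa", "password": "secret"})
        assert resp.ok
        # Logging out should succeed...
        assert session.post(f"{BASE_URL}/logout").ok
        # ...and a page that requires authentication should now turn us away.
        resp = session.get(f"{BASE_URL}/account", allow_redirects=False)
        assert resp.status_code in (302, 401)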

App stores

You can’t avoid downstream manual work with app stores, because it’s beyond your control. Your deployment artefact is an app which you submit to the store, after which there is often some sort of approval process. To make matters worse, the time taken to grant (or refuse) that approval is often unpredictable, and can be long compared to your own iteration cycle. The combination of a long pipeline and late issues causing reversions means that you’ll have to build more defence into your process for out-of-band work.

Your increment cycle is now shorter than your iteration cycle

A lot of agile teams set their increment (how often they release) the same as their iteration (how often they plan). Some have increments longer than their iteration, with the product manager signing off work during an iteration but the work not going live immediately.

With continuous delivery, you have the ability to release pieces of work as soon as they’re done. Although you may choose to wrap them up into larger increments, it’s also common to release one or more increments per day. That’s very different from a once-per-iteration release, and the processes you have around releases (for instance, notifying users) may need to be rethought.

There are some events that typically happen once per iteration which may no longer make sense. For instance, Scrum teams often have a showcase of the work they’ve done in an iteration. This may not make sense if most of the work has already been live for several days, although some teams like to celebrate the work they’ve done across the iteration, and may not want to lose that.

It’s also important not to lose sight of the important events that should continue to happen at the pace of the iteration. A team retrospective, where the team gets to work on and improve its own processes and systems, still needs to happen on a regular basis.

Similarly, most teams do some aspects of future planning on an iteration cycle, through planning meetings, backlog grooming and so forth.

Going further

An important part of making agile processes work is to have a self-sufficient team: not just developers and maybe a product manager, but also the designers, QA engineers and so on that work with them. This should also include the operations engineers responsible for production, which is one aspect of DevOps.

However, for operations engineers a feature is never “done” until it’s shut down. In a DevOps and agile way of thinking, that means that engineers, designers and so forth should also consider a feature to be “in play” while it’s live. This can pose some interesting challenges to more rigid adoptions of agile.

If a feature is never done, then it should have regular care and feeding scheduled. This work should be tracked, just like any other, which opens all sorts of questions about how features map to pieces of work (tickets, stories or whatever) in your work tracker.

Further, just because you’re looking at a feature regularly doesn’t mean that it will require the same amount of effort each iteration. In planning, you will need to decide how much time to spend on each live feature. Having a longer-term view (based around the features) can help here, so you know in advance when a particular iteration is going to have a larger amount of effort devoted to “maintenance” work.