When people ask where to get started with DevOps, the common advice is to start with the fast-moving digital systems that face customers - ‘systems of engagement’ over ‘systems of record’. As the main driver behind DevOps is delivery speed, there is a lot of sense in this. However, I am increasingly finding that DevOps thinking has a very strong case when applied to older legacy systems - so much so that perhaps we should be looking there first.

These are the systems that are like oil tankers: old, heavily integrated, and very hard to change, test, and deploy. The technologies will be last generation or worse, making them inherently harder to work with. When they do need to change, they operate on slow release cycles with lots of regression testing and big, weighty release processes. These are also the systems with the costs to match: think big teams with niche skills, many testers involved in getting releases out of the door, and those big teams of people working over weekends to get code released into production. Perhaps there are licence costs for old middleware or database platforms which the team would dearly love to move away from if only they could. Maybe they are running on last-generation infrastructure that is not well virtualised. I'm sure many of us have worked on platforms like this. The business case for getting these systems operating more efficiently is huge. We might never need to get them to a state where we can release multiple times per day, but we can reduce cycle times, dramatically reduce the effort it takes to keep the lights on, add a little agility to the legacy platforms, and save significant cost. We view legacy as an iceberg of opportunity for DevOps!
DevOps & Continuous Delivery Do Not Imply Risk!
At the same time, these systems of record are obviously the ones that are critical to the business. Outages carry significant operational and reputational risk - think accounting, payment, or fulfilment systems. So why does it make sense to go there first with an IT transformation like DevOps, which is most often driven by 'the need for speed'? We view DevOps as very applicable to legacy technology because we wholeheartedly reject the idea that teams practising DevOps & Continuous Delivery are working in a way that implies more risk. Closing the gap between developers and operations around these big, complex systems reduces risk by removing key-person dependencies. Automating releases and management processes reduces risk and adds rigour. Codifying infrastructure and configuration reduces risk and improves consistency. Moving a system towards automated testing frameworks reduces risk and drives up quality too, as does enabling some form of canary releasing, better rollback, or any number of other initiatives associated with a DevOps approach. If we accept that DevOps practices can reduce risk, drive up quality, drive efficiencies, and take cost out of legacy platforms, then we need to get serious about how broadly DevOps can be applied to legacy applications.
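To make the canary-and-rollback idea concrete, here is a minimal sketch of an automated release gate: after deploying to a small canary slice, run smoke checks and promote only if everything passes. The check names, return values, and overall shape are illustrative assumptions, not any particular toolchain's API - real pipelines would wire this into their deployment tooling and monitoring.

```python
"""Minimal sketch of an automated canary gate (illustrative, not a real tool).

After deploying a release to a small 'canary' subset of servers, run smoke
checks against it; promote the release to the full estate only if every
check passes, otherwise roll back. Check names here are hypothetical.
"""


def run_smoke_checks(results):
    """Given {check_name: passed_bool}, return (all_ok, list_of_failures)."""
    failures = [name for name, ok in results.items() if not ok]
    return len(failures) == 0, failures


def canary_gate(check_results):
    """Decide whether to promote or roll back a canary release."""
    ok, failures = run_smoke_checks(check_results)
    if ok:
        return "promote"  # release the change to the rest of the estate
    return "rollback (" + ", ".join(failures) + ")"


# One failing check blocks promotion and names the failure:
print(canary_gate({"login": True, "payments": True, "reports": False}))
# → rollback (reports)

# All checks green, the release is promoted automatically:
print(canary_gate({"login": True, "payments": True, "reports": True}))
# → promote
```

The point is not the few lines of Python but the shift in posture: the promote/rollback decision becomes an automated, repeatable check rather than a judgement call made by tired people on a release weekend.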
Take A Business Case Driven Approach
With these systems you need to take an extremely business-case-driven approach. Getting them under automation, or changing long-established processes around legacy systems, can require a fairly substantial investment, but the rewards can be huge, with payback in a short space of time - particularly if we find enough low-hanging fruit during the discovery phase. If we carefully quantify the investment required versus the likely payback, we may find that it is worth starting DevOps initiatives in the most unlikely of places! A question to take away: what percentage of IT spend goes on legacy or currently operational systems rather than on more modern strategic initiatives, and why do we not look to apply modern best practices there? If we were to apply some of these practices within the legacy estate, how could we redirect the resources that currently go into keeping the lights on back into more strategic IT initiatives?