
Customer demands aren’t the only thing pushing development and operations teams toward more frequent software releases. There is also the need for quicker feedback on product quality, the desire to reduce bottlenecks in operations teams, and the goal of carrying less overhead on projects.

The concepts and spirit of continuous delivery are well known. However, organizations with existing applications that are just starting to implement continuous delivery still have a lot to consider.

Making sure that continuous delivery helps your organization, rather than hurting it, is key to long-term success.

Faster features, quicker feedback, fewer snags, and more efficient teams are all good drivers for adopting continuous delivery (CD) and continuous integration (CI).

Unless your organization was born with continuous delivery, making the move can be tricky. You have a few choices:

  1. Slipstream Continuous Integration or Continuous Delivery into the current delivery pipeline. Introducing CI into your development processes is often not much of a challenge; the hardest part is building a flexible integration lab. Because you can keep your existing waterfall release process, CI looks and feels a lot like a more flexible UAT environment. Continuous delivery is different, however: inserting it into existing processes can be complicated, and it can create timing issues with roadmaps and previous releases.
  2. Start a parallel development and operations group that is built from the ground up for continuous delivery. For this to make sense, it usually requires a compelling event, such as a new major version of an application or an on-premises client application moving to the web. The cost of another group is high, but it is an ideal opportunity to leverage all DevOps has to offer.
  3. Rebuild the existing development process from scratch. While this is the best way to keep team structure intact (compared to option two), it requires you to allocate significant time when you’re not releasing, and it has a high failure rate.

What’s the difference between Continuous Integration and Continuous Delivery?

Although continuous integration and continuous delivery are sometimes used synonymously, they are actually separate activities with nearly identical tooling and processes. Continuous integration can be leveraged by almost any organization, even those stuck in waterfall-based projects, simply because the risk is much lower. Continuous delivery, however, impacts all users and production infrastructure.

Now, let’s explore continuous delivery further.

In the spirit of DevOps, the first and second options above make the most sense.

Option one allows you to start learning and leveraging the results-driven culture without having a large impact on current production releases. This lets the team learn the processes and the speed at which things happen while maintaining the comfort level they’re used to. The downside is that it might encourage teams to stick with old habits and, if not pushed forward, eventually revert to old ways.

Option two is ideal if there is a compelling event, plus the staff and budget to support it. This option allows you to leverage all DevOps tools right away, without much concern about integrating with existing components. The goal is that everyone will eventually move to the new processes as support for the older waterfall-driven applications dies off.

Several very large software companies have successfully leveraged this approach. Now, large (previously non-technical) organizations with a combination of line-of-business, mobile, and web applications benefit tremendously from it as well.

Both of these options have their pros and (more importantly) their cons. Start-ups designed from scratch around continuous delivery may never have had to address those cons, but you can.

Continuous Delivery

There are four elements of continuous delivery that can pose serious challenges down the road.

  1. Ability to revert
  2. Tying Infrastructure and Application Layers
  3. Bugs, bugs, and more bugs
  4. Blind spots

1. Ability to revert

“Delivery” is the most important part of the modern release pipeline. Its emphasis is on getting code to market faster and being results-driven.

Inevitably things will break (if they don’t, your team is not moving at the appropriate pace). Because things will break, reverting releases to previous versions is a very important component of the process. But reverts are not always just code; they can involve configurations and even entire machines.

Building in great revert mechanisms, fully tested against a series of previous releases, helps teams know that their revert engine is there when they need it. It is wishful thinking to believe that release automation tools will do this for you; they won’t. You need to perform regression testing on your revert process, at least early on, because teams often forget dependencies at first run.
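
To make that concrete, here is a minimal sketch of one revert mechanism, with all paths and names hypothetical: each release is recorded as an artifact plus the exact configuration it shipped with, so a revert restores both together and can be exercised repeatedly in regression tests.

```python
import json
import shutil
from pathlib import Path

RELEASES = Path("/var/deploy/releases")  # hypothetical artifact store
CURRENT = Path("/var/deploy/current")    # symlink to the live release

def record_release(version: str, artifact: Path, config: dict) -> None:
    """Store the artifact and the exact configuration it shipped with,
    side by side, so a later revert can restore both together."""
    dest = RELEASES / version
    dest.mkdir(parents=True, exist_ok=True)
    shutil.copy2(artifact, dest / "app.tar.gz")
    (dest / "config.json").write_text(json.dumps(config, indent=2))

def revert_to(version: str) -> dict:
    """Point the live symlink back at a recorded release and return its
    configuration for the deploy tooling to re-apply."""
    dest = RELEASES / version
    if not dest.exists():
        raise RuntimeError(f"no recorded release {version}; cannot revert")
    if CURRENT.is_symlink():
        CURRENT.unlink()
    CURRENT.symlink_to(dest)
    return json.loads((dest / "config.json").read_text())
```

A regression test that actually walks revert_to() back through the last few recorded releases is what earns the team’s trust that the revert engine works.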

Having the analytics to show teams what configurations look like before and after a release is critical for staying aware of all the changes happening between versions, both in the application and in the infrastructure.
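
One lightweight way to get that before-and-after picture, sketched here under the assumption that configurations can be flattened to key-value pairs, is to snapshot configuration at release time and diff consecutive snapshots:

```python
def diff_configs(before: dict, after: dict) -> dict:
    """Report the keys added, removed, or changed between two
    configuration snapshots taken around a release."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {
        k: (before[k], after[k])
        for k in before.keys() & after.keys()
        if before[k] != after[k]
    }
    return {"added": added, "removed": removed, "changed": changed}

# Example: what drifted between release N and release N+1?
print(diff_configs(
    {"db_pool": 10, "cache_ttl": 60},
    {"db_pool": 20, "cache_ttl": 60, "feature_x": True},
))
# {'added': {'feature_x': True}, 'removed': {}, 'changed': {'db_pool': (10, 20)}}
```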

2. Tying Infrastructure and Application Layers

With the pace of releases, and the pace at which the frameworks, web servers, and backends that applications rely on are updated, it is critical that software releases be tied to their associated infrastructure.

A revert will get you back to a previously known good state, but it won’t fix the problem. Every organization will have a different way of doing this, but without the correlation, development and operations will play ping-pong with issues and their potential resolutions.

It’s a classic problem to have things, such as new frameworks and patches, run in integration environments but not in production. This is a catalyst for widespread issues. Without knowing the relationship of a release to its infrastructure, a huge amount of time can be wasted trying to spot these issues, and the process quickly starts looking like waterfall again.
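
One simple way to keep that correlation, sketched below with hypothetical field names, is to emit a manifest at deploy time that records the application version together with the infrastructure it runs on; diffing the integration manifest against the production one exposes exactly these gaps:

```python
import json
import platform
from datetime import datetime, timezone

def build_release_manifest(app_version: str, infra: dict) -> str:
    """Record the app version alongside the infrastructure versions it
    runs on, so environments can be compared release over release."""
    manifest = {
        "app_version": app_version,
        "deployed_at": datetime.now(timezone.utc).isoformat(),
        "host_os": platform.platform(),
        "infrastructure": infra,  # e.g., framework, web server, runtime
    }
    return json.dumps(manifest, indent=2)

# Written out in every environment at deploy time; versions are illustrative.
print(build_release_manifest(
    "2.4.1",
    {"framework": "django==1.8.4", "web_server": "nginx/1.9.5", "runtime": "python-2.7.10"},
))
```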

3. Bugs, bugs, and more bugs

Let’s say a small bug makes it through one release, two releases, or maybe even three. This means code has been released on top of bad code. Unfortunately this happens a lot; however, with the power of frequent releases and active feedback, the bug will eventually get caught.

Catching the bug is not the problem.

The problem is understanding exactly where the bug is. Sometimes a new feature using buggy functionality might be operating exactly as intended, but what it was written on top of is not. This is why having a strong system for performance testing, QA, and QE is as critical to continuous delivery as tools like Jenkins and GoCD.
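
When a bug has ridden along through several releases, one way to pin down where it entered is a binary search over the recorded release history, assuming you can deploy any past version to a test environment and run the failing check against it (the helper names below are hypothetical):

```python
def first_bad_release(releases: list, is_bad) -> str:
    """Binary-search an oldest-to-newest list of release versions for the
    first one where is_bad(version) returns True (same idea as git bisect).
    Assumes the latest release is bad and is_bad is repeatable."""
    lo, hi = 0, len(releases) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(releases[mid]):
            hi = mid          # bug already present here or earlier
        else:
            lo = mid + 1      # bug entered after this release
    return releases[lo]

# Hypothetical usage: deploy each candidate version to a test environment
# and re-run the failing regression check against it.
# culprit = first_bad_release(["1.0", "1.1", "1.2", "1.3"], run_regression_check)
```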

4. Blind spots

These bugs often occur because of blind spots in the environment and poor analytics: areas of the infrastructure and code that teams can’t get a clear picture of. The issues end up surfacing as support tickets, complaints from users, or, potentially, outages.

Blind spots should be avoided at all costs.

You achieve this by building a culture of analytics first, and analytics for everything, very early on. Make sure that operations and development teams know to produce analytics for all systems and applications, and where to push them. Leverage integrations of log management tools like Logentries with APM tools like New Relic to gain the insights you need.
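
What “analytics everything” looks like in practice can be as simple as every service emitting one structured, machine-parseable event per interesting operation, which a log shipper then forwards to a service like Logentries. A stdlib-only sketch, with the event fields being our own convention rather than any tool’s required format:

```python
import json
import logging
import time

log = logging.getLogger("release-analytics")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def emit_event(name: str, **fields) -> None:
    """Log one JSON event per operation; a shipping agent forwards the
    log stream to the central log management and APM services."""
    event = {"event": name, "ts": time.time(), **fields}
    log.info(json.dumps(event))

emit_event("release.deployed", version="2.4.1", env="production")
emit_event("request.latency_ms", route="/checkout", value=182)
```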

Continuous integration and continuous delivery give teams more flexibility and the ability to create better products. For teams with existing applications, they are also a fantastic opportunity to move in a new direction and stay competitive. The shift may not always be easy, but the rewards are well worth it. Taking into account the aspects of continuous delivery above that could pose challenges down the road will help these teams adopt it without harm. Let us know what you think in the comments below.