
Eric Wright

How to Get Beyond Operational Challenges of DevOps - Part I

There are many variations on the definition of DevOps. The central theme of them all is that DevOps is used to increase the flow of value from idea to prototype, and from prototype to production. It’s a set of processes and behavioural patterns which change the way that development and operations (and hopefully security) teams communicate, interact, and move code through the application lifecycle.

Two common things often associated with the DevOps movement are Agile development and containerization as an infrastructure pattern.

Agile Helps (but is not) DevOps

Agile practices have been around for decades. They show up in the shift from waterfall project and software management to shorter, more responsive sprints and new ways of introducing features, prototyping, and testing. Adopting agile alone does not make a DevOps success story, but it is common in teams that achieve high velocity and better flow from ideation to implementation.

Measuring regularly and consistently in the agile environment allows for better course correction in case of issues and higher visibility throughout the entire lifecycle. This lets teams move on to tackling the technology and tools that help to bring these practices to life in your infrastructure.

Containerization is one of the most popular new packaging methods for deploying applications with portability and consistency. Docker opened the door to API-accessible containers, moving workloads from VM infrastructure to a much thinner, more portable package. The goal of containerization is consistency in deployment and operations. Containers can run on many underlying platforms and infrastructures, which frees you from the lock-in of proprietary hypervisors.

As the teams develop locally and deploy to containers through QA, User Test, Functional Test, and Production environments, the process can be consistent from start to finish. Sounds great, right? Operations team members now have to think about the next step of actually adopting and managing containers.
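
To make that concrete, here is a minimal sketch, assuming a Docker-based workflow; the registry, image name, tag, and port are hypothetical. The same image is built once and promoted unchanged from QA through to Production:

    # Build and tag the application image once (names and tags are illustrative)
    docker build -t registry.example.com/myapp:1.4.2 .

    # Push the artifact to a shared registry so every environment pulls the same bits
    docker push registry.example.com/myapp:1.4.2

    # On the QA host: run the image exactly as it was built
    docker run -d --name myapp-qa -p 8080:8080 registry.example.com/myapp:1.4.2

    # On the Production host: the identical artifact, no rebuild and no drift
    docker run -d --name myapp-prod -p 8080:8080 registry.example.com/myapp:1.4.2

The artifact never changes between stages; only the environment it lands in does.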

You’ve probably taken a look at how to bring containers into your environment as a deployment construct and realized that the container alone didn’t solve the problem. Using a container instead of a VM simply moves the bottleneck to the next point: deploying and managing the container infrastructure itself. The container scheduling wars have already begun as the industry tries to tackle this.

The Cloud Native Computing Foundation highlighted the top challenges with adopting and deploying containerized infrastructure in a recent survey [1]:

• Cultural Changes with Development Team (41%)
• Complexity (40% up from 35%)
• Lack of Training (40%)

These top three challenges align, not surprisingly, with what you are already hitting in your teams as you look to bring in more DevOps-ish infrastructure practices.

The Successful DevOps Adoption Kit

First, there is the reading list. There are dozens of books that try to tackle the transformative process of adopting DevOps practices. Top ones that you may see listed elsewhere include:

• The Phoenix Project - Gene Kim, Kevin Behr, George Spafford
• The DevOps Handbook - Gene Kim, Jez Humble, Patrick Debois, John Willis
• Effective DevOps - Jennifer Davis, Katherine Daniels
• Site Reliability Engineering - Various Authors

One thing you will find with most of the DevOps guides is that they are predominantly aimed at developers. The development side of DevOps gets a lot more care and feeding, so that makes sense. What about the operations teams?

As someone who has successfully implemented infrastructure operations processes that align with the DevOps methodology, I can tell you that two things stand out as the leading factors for successful adoption:

1. High Trust - this must span across culture (people), process (consistency and predictability), and technology (proven, trusted outcomes)
2. Broad use of automation - this also spans process (process and task automation) and technology (adaptive automation across the IT stack)

The interesting thing about these two items is that they feed each other. High trust comes from attaining consistency. Consistency comes from using automation in processes throughout the application lifecycle.

How can you have trust without consistency? You can’t. You may get lucky and find consistency almost accidentally, but manual processes are fragile and cannot scale. It won’t matter how good your build documents are when you have to stand up and tear down infrastructure and applications at the pace of agile application deployments. This is something you already know from practice as you grew your environment. Why does it take 3 days to get a new physical server racked and available with a hypervisor? It used to take weeks, so 3 days feels fast. Your development team wants it in 3 hours. How can you keep up, safely?
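
As a hedged sketch of what that automation looks like at the smallest scale, even a short script gives you a repeatable stand-up and tear-down that a build document never will. The image, network, and container names below are hypothetical:

    #!/bin/sh
    # Minimal sketch: stand up or tear down a disposable application environment.
    # Image, network, and container names are hypothetical.
    set -e

    case "$1" in
      up)
        # Create an isolated network and start the database and web tiers on it
        docker network create myapp-net
        docker run -d --name myapp-db --network myapp-net postgres:15
        docker run -d --name myapp-web --network myapp-net -p 8080:8080 registry.example.com/myapp:1.4.2
        ;;
      down)
        # Remove the containers and the network so nothing lingers between runs
        docker rm -f myapp-web myapp-db
        docker network rm myapp-net
        ;;
      *)
        echo "usage: $0 up|down" >&2
        exit 1
        ;;
    esac

The value is not the script itself; it is that the same commands run the same way every time, which is where consistency, and eventually trust, comes from.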

Infrastructure has its own lifecycle and its own set of challenges, ones that pre-DevOps development teams may not have known or understood. The operations team’s view of performance, scaling, and managing infrastructure is very different from the development team’s. Even in DevOps shops, someone has to stand up and support the underlying infrastructure for these new processes to succeed.

How Container Scheduling Wars Launched a New Industry

The next article in the series covers how Kubernetes became the most widely used name in container scheduling and what your teams need to do as you evaluate how to most effectively bring these new technologies into your environment. It will also highlight the first set of challenges created by these new deployment patterns and why PaaS (Platform-as-a-Service) and CaaS (Containers-as-a-Service) could be your best friends in a DevOps world.

 
