DevOps in a Kubernetes world


At All Day DevOps 2019, Martin Alfke delivered the talk “DevOps in a Containerized World.” The Puppet specialist and trainer discussed how DevOps culture has evolved with the rise of new infrastructural elements. “How do we do DevOps in a containerized world?”, “Is there still collaboration possible?”, and “Are containers the DevOps killer?” are some of the questions that the session answered.

The Puppet specialist started the presentation by going through the classical DevOps principles and explaining the need for containers. Alfke then bridged the two by diving into how DevOps has changed as a result of containers. Overall, he concluded that DevOps is not dead, but that containers present a steep learning curve for everyone.

The key takeaways from the presentation for teams adopting containers were:

  1. Find tools which integrate properly with your infrastructure 

  2. Don’t try a lift and shift approach

  3. Avoid Not Invented Here syndrome; in other words, stop building everything yourself and take a look at what’s already out there (like outsourcing to stack.io!)

At stack.io, here are a few things we’ve noticed about DevOps in a containerized Kubernetes world.

The underlying DevOps job is still the same 

When running an app on Kubernetes, the basic responsibilities of a DevOps engineer are the same, only the execution is different. 

For instance, you would still schedule periodic tasks using cron jobs. Whether you’re on a plain-old Linux VM or a Kubernetes cluster, the purpose and process of setting up cron jobs remains the same. At the end of the day, it’s the same task with the same name, only the exact procedure has changed (Kubernetes has a special CronJob resource definition, while Linux uses the crontab command).
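To make the comparison concrete, here is a minimal sketch of the same nightly cleanup task in both worlds (the job name, image, and command are placeholders for illustration):

```yaml
# On a plain Linux VM, the equivalent crontab entry would be something like:
#   0 2 * * * /usr/local/bin/cleanup.sh
#
# On Kubernetes, the same schedule is declared as a CronJob resource:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-cleanup
spec:
  schedule: "0 2 * * *"              # same five-field syntax as crontab
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cleanup
              image: busybox:1.36
              command: ["sh", "-c", "echo running nightly cleanup"]
          restartPolicy: OnFailure   # re-run the container if it fails
```

Either way, you are expressing the same intent: run this task on this schedule.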

It makes it easier to implement best practices

One of the reasons Kubernetes can be run by a single person is that there is usually one tried-and-tested way to do each task, and that way tends to encourage best practices.

For example, when it comes to security, DevOps engineers will most often use a third-party or open-source secrets management system when deploying applications. This offers significantly better security than bad practices like keeping secrets in plain text or committing them to version control repositories.

Some well-known examples of these third-party secrets management tools are AWS Secrets Manager, Google Cloud Key Management Service, Microsoft Azure Key Vault, Ansible Vault, and HashiCorp Vault. Although these tools offer more functionality than Kubernetes’ native secrets management, they take time to learn and implement; you may have to spend a significant amount of time learning how to set up, secure, configure, and back up each tool in order to reap its full benefits. Kubernetes Secrets, on the other hand, can be used straight out of the box, so you can be confident that with the native solution alone, you’re already implementing security best practices.
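As a minimal sketch of the out-of-the-box approach (all names and values below are placeholders), a native Kubernetes Secret and a container consuming it as an environment variable look roughly like this:

```yaml
# A native Kubernetes Secret; stringData accepts plain strings,
# which Kubernetes base64-encodes when it stores the object
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  DB_PASSWORD: change-me
---
# A Pod that reads the secret at runtime instead of hard-coding it
# in the image or the version control repository
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx:1.25
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: DB_PASSWORD
```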

It gives you an opportunity to fix things

Every piece of long-lived infrastructure has a few components that are out of date, not up to industry standards, or that you wish you had done differently. When porting over to containers and Kubernetes, you have the opportunity to rectify these mistakes and build a robust, scalable infrastructure that works the way you want it to and meets the standards you set.

Containers, in particular, make it easier to integrate your infrastructure with most new tools, which makes it much lower risk to try out a new tool that may not work out for you. The time it takes to learn, set up, and try new tools is often a major limiting factor for many developers. With tools like Helm charts and Operators, you can quickly install and demo a new application. Instead of taking months to figure out whether a new technology is a good fit for your team, you may have an answer in less than a week!
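For instance, evaluating a new tool can be as simple as a throwaway Helm release. This is a minimal sketch assuming Helm 3; the Bitnami Redis chart and the release and namespace names are just examples, and any public chart works the same way:

```sh
# Add a public chart repository and fetch its index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a throwaway instance into its own namespace for evaluation
helm install demo-redis bitnami/redis --namespace tool-eval --create-namespace

# Tear everything down once you have your answer
helm uninstall demo-redis --namespace tool-eval
kubectl delete namespace tool-eval
```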

Do you need help containerizing your app? Let us know.