Preparing to Conduct Your Kubernetes Orchestra in Tune with Your Goals (Part 1)

Those of you following the container industry have most likely heard the term Kubernetes. This open-source container-orchestration system for automating deployment, scaling, and management of containerized applications is the catalyst in the creation of many new business ventures, startups, and open source projects. It’s safe to say that Kubernetes (K8s) has won the container orchestration battle. And that is why the major players in the industry are building tools and platforms to support Kubernetes, even though it can be daunting to get started.

There are many factors for a legacy IT shop to take into consideration when building a Kubernetes solution, and they fall into two areas: platform and application. This blog covers platform considerations; application considerations will be addressed in Part 2.

Platform Architecture Considerations for Kubernetes

The architecture of the platform is paramount to success and consists of several components: compute, network, storage, sizing, security, and integration, all of which I’ll discuss below. The architecture must also take into account the non-functional requirements that keep the platform resilient and able to serve the organization and the applications that will run on it.

Compute – How do I define my server architecture?

Many big questions are asked around compute, including:

  1. Do I go bare-metal or virtual?
  2. Do I use a public cloud provider or on-premises solution?
  3. Do I utilize a Converged or Hyperconverged hardware stack or go full 3-tier and build my own hardware stack?

There is no single correct answer to these questions because different requirements call for different solutions. Among other things, you need to consider operating models, cost models, organizational readiness, and the maturity of the organization. The answer to “How do you define your server architecture?” is always, “It depends”.

For example, a company may consider bare-metal provisioning for its production clusters if it is able to build bare-metal servers quickly and has the rigor to do so, as bare metal should provide the best performance by removing the virtualization layer. Other factors, such as the need to stand up additional clusters quickly for testing or sandbox purposes, point more toward virtualization, since workloads can be quickly cloned and the virtual machines can be sized for the proper utilization of a particular cluster.

Networking – How do containers talk to each other and to my other apps?

Common questions that arise around networking are:

  1. What networking plugin to use? What are the drawbacks to certain plugins?
  2. Which plugin fits into my overall networking strategy?
  3. What load balancer technology should I use, internal or external?
  4. How do I incorporate firewalls for the platform?

Networking is always a big concern because Kubernetes pods are ephemeral (non-persistent), which raises the question of how to maintain a stable, current view of which pods are running and where they are located. In addition, you must take into account the security aspects of networking, such as transport protocols, firewalls, and DMZ segregation.
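
To make that point concrete, Kubernetes addresses the “where is my pod?” problem with Services, which give a changing set of pods a stable virtual IP and DNS name. Below is a minimal, hypothetical sketch using the official Kubernetes Python client; the “web” label and “demo” namespace are placeholders, not part of any specific environment.

```python
# Hypothetical sketch: expose a set of ephemeral pods behind a stable Service.
# Assumes the official Kubernetes Python client and a reachable kubeconfig.
from kubernetes import client, config

config.load_kube_config()        # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web", namespace="demo"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},                          # matches pods by label, not by IP
        ports=[client.V1ServicePort(port=80, target_port=8080)],
        type="ClusterIP",                                 # stable in-cluster virtual IP
    ),
)
v1.create_namespaced_service(namespace="demo", body=service)

# Pods behind the selector can come and go; clients keep calling
# http://web.demo.svc.cluster.local regardless of which pod answers.
```

Whichever networking plugin you choose handles pod-to-pod connectivity underneath this abstraction; the Service layer is what shields consumers from pod churn.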

Storage – How do I persist my data in my containers?

Questions arise here, such as:

  1. Do I use direct-attached storage (DAS) or an array of some sort?
  2. What are my IOPS requirements for my persistent data pods? How do I back up my data and persist it to another location?
  3. What storage plugin do I use?
  4. What storage protocol do I use: NFS, iSCSI, or block?

Kubernetes pods are by nature ephemeral, so you must externalize the data from the pod or risk losing it. Therefore, you must plan how you’ll store persistent data within a Kubernetes cluster. Traditional methods of backup, recovery, and provisioning cannot keep up with the pace of change in container-based ecosystems, so you need to automate and engineer an appropriate solution.
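
As a sketch of how that externalization typically works, the snippet below requests storage through a PersistentVolumeClaim using the Kubernetes Python client. The storage class name “fast-ssd”, the claim name, and the namespace are hypothetical placeholders for whatever your storage plugin actually exposes.

```python
# Hypothetical sketch: request persistent storage that outlives any single pod.
# Assumes the official Kubernetes Python client and a storage class provided by
# your chosen storage plugin (the name "fast-ssd" is a placeholder).
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data", namespace="demo"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="fast-ssd",        # maps to DAS, an array, NFS, iSCSI, etc.
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)
v1.create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)

# A pod that mounts this claim can be rescheduled or recreated,
# and the data stays on the underlying volume.
```
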

Sizing – How big should the cluster(s) be?

Questions:

  1. Are smaller clusters better?
  2. Have cloud providers effectively eliminated the management cost of creating new clusters, even if multi-cluster governance is still missing?

The intent of Kubernetes is to allow many groups to use a common platform for their applications. As such, sizing needs to be done correctly, and the cluster layout needs to adhere to proper infrastructure fault zones. Sizing decisions should also factor in the growth of the environment and how capacity will be added in the future. This leads directly to monitoring the environment for capacity planning purposes and understanding the importance of establishing a good onboarding process for the environment.
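
As one starting point for that capacity-planning feedback loop, here is a small, hypothetical sketch that tallies allocatable CPU, memory, and pod capacity per node and groups nodes by fault zone using the Kubernetes Python client. The zone label shown is the standard topology.kubernetes.io/zone label; older clusters may use a different label.

```python
# Hypothetical sketch: inventory allocatable capacity per node and per fault zone
# as a rough input to capacity planning. Assumes the official Kubernetes Python client.
from collections import defaultdict
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

zones = defaultdict(list)
for node in v1.list_node().items:
    zone = node.metadata.labels.get("topology.kubernetes.io/zone", "unknown")
    alloc = node.status.allocatable   # e.g. {"cpu": "7900m", "memory": "30874100Ki", "pods": "110"}
    zones[zone].append((node.metadata.name, alloc["cpu"], alloc["memory"], alloc["pods"]))

for zone, nodes in zones.items():
    print(f"zone={zone} nodes={len(nodes)}")
    for name, cpu, mem, pods in nodes:
        print(f"  {name}: cpu={cpu} memory={mem} max_pods={pods}")
```
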

Security – How do I make sure I don’t get hacked?

Your platform must be secured against internal and external threats. Some questions that might arise are:

  1. Where should my cluster be located within the data center?
  2. How do I provide DMZ access to my applications without affecting other workloads?
  3. How do I cordon off each workload so potential intrusion doesn’t impact other workloads within the data center?
  4. What regulatory frameworks are in effect for my cluster(s)?
  5. What is my authentication/authorization policy? Access to the platform should be restricted; how can I best apply role-based access controls (RBAC) for the different personas that will access the cluster? (A brief sketch follows this list.)
  6. How do I handle segregation of duties and segregation of environments within the platform?
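
As one hedged illustration of RBAC per persona, the sketch below grants a hypothetical “app-team-readers” group read-only access to pods in a single namespace, using plain dict bodies with the Kubernetes Python client (dicts mirror the YAML most teams already know). All names are placeholders.

```python
# Hypothetical sketch: read-only access to pods in one namespace for one group.
# Assumes the official Kubernetes Python client; group and namespace names are placeholders.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "team-a"},
    "rules": [{"apiGroups": [""], "resources": ["pods", "pods/log"],
               "verbs": ["get", "list", "watch"]}],
}
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding", "namespace": "team-a"},
    "subjects": [{"kind": "Group", "name": "app-team-readers",
                  "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "Role", "name": "pod-reader",
                "apiGroup": "rbac.authorization.k8s.io"},
}
rbac.create_namespaced_role(namespace="team-a", body=role)
rbac.create_namespaced_role_binding(namespace="team-a", body=binding)
```

Namespaces combined with roles like this, along with network policies, are common building blocks for segregating duties and environments within a single cluster.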

Integration – How do I operate this platform and weave it into existing operations?

You’ll integrate the various components of the platform into your existing monitoring, logging, authentication, and authorization strategies.

Integration is essential. Without integrating into the existing framework, operations teams usually won’t fully adopt the platform, since a new system can be time-consuming to learn and operate. In addition, many organizations have spent a lot of money developing processes and procedures around their existing monitoring, logging, and security frameworks.
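
To hint at what such integration can look like, here is a small, hypothetical sketch that pulls recent pod logs from a namespace so they could be forwarded to an existing logging pipeline. In production this job is usually handled by a dedicated log-shipping agent rather than hand-written code; the namespace name is a placeholder.

```python
# Hypothetical sketch: pull recent logs from every pod in a namespace so they can be
# handed to an existing logging pipeline. Assumes the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

NAMESPACE = "demo"                                    # placeholder namespace
for pod in v1.list_namespaced_pod(NAMESPACE).items:
    logs = v1.read_namespaced_pod_log(
        name=pod.metadata.name,
        namespace=NAMESPACE,
        container=pod.spec.containers[0].name,        # first container only, for simplicity
        tail_lines=100,
    )
    # In a real integration you would ship `logs` to your existing
    # logging system instead of printing them.
    print(f"--- {pod.metadata.name} ---")
    print(logs)
```
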

Summary

You need a conductor to help keep your orchestra in tune.

There are many platform considerations when designing and implementing Kubernetes. It can be a daunting task, especially when you’re getting started and facing the array of technologies and integration points into your existing operations.

Dell Technologies Consulting teams understand the physical infrastructure as well as the complex logical layers involved in engineering a proper Kubernetes solution. Our team of experts focused on application transformation can facilitate your successful Kubernetes journey and help architect, design, and implement the right solution for your organization’s unique needs. To learn more, contact your Dell Technologies Services representative today.

Stay tuned for Part 2 of this blog, where I’ll address how Kubernetes changes the application development lifecycle and how best to incorporate DevOps principles into your organization so that you can get the full value of your Kubernetes investment.

About the Author: Daniel Murray

Daniel Murray has been with Dell Technologies for 10 years and leads Consulting Services for DevOps and Cloud Applications within the Application Transformation practice. He has over 20 years of consulting experience, leading projects across various industries and serving in multiple practice areas, including software, data, and infrastructure architectures in both private and public clouds. Daniel is passionate about helping his customers achieve maximum value through DevOps by educating them on how to transform their technical and business practices through the combination of technology, process, and people. He expertly guides customer engagements to completion by applying his deep knowledge, experience, and demonstrated ability to establish and maintain working relationships across disciplines, industries, and organizations. Daniel received his Bachelor of Science in Chemical Engineering from Georgia Tech. In his free time, he enjoys watching soccer and scouting. Daniel resides in Atlanta, GA with his lovely wife and their three kids.