Over the years there’s been an explosion in infrastructure platforms and application frameworks that form the foundation of “cloud native.” Modern infrastructure platforms range from container orchestrators such as Kubernetes to serverless platforms aimed at rapid application development. In parallel, the shell scripts that administrators used to deploy, configure, and manage these platforms evolved into what is now called Infrastructure as Code (IaC), which formalizes the use of higher-level programming languages such as Python or Ruby, or purpose-built languages such as HashiCorp’s HCL (through Terraform).

Though IaC has been broadly adopted, it suffers from a major flaw: code does not provide a contract between the developer’s intent and runtime operation. Contracts are the foundation of a consistent, secure, and high-velocity IT environment. But every time you modify or refactor code, you need to run validation tools to determine its intent.

This raises the question: why are admins using programming languages in the first place? Why is all of this so complicated? In many ways it’s an attempt to automate the unknown and the unpredictable. Most infrastructure is, by nature, loosely defined, and it takes baling wire and duct tape to stick things together in ways that mimic what a system administrator would do when logged into a server.

Furthermore, while provisioning infrastructure is important, IT practitioners also need to deploy and manage both infrastructure and applications from day two onwards in order to maintain proper operations. Ideally, you could use the same configuration management tools to deploy and manage both your infrastructure and applications holistically. 

The Kubernetes way

Things are different with Kubernetes…

Instead of taking an imperative or procedural approach, Kubernetes relies on the notion of Configuration as Data: a declarative approach to deploying and managing cloud infrastructure as well as applications. You declare your desired state without specifying the precise actions or steps for how to achieve it. Every Kubernetes resource instance is defined by Configuration as Data, expressed as YAML or JSON. Creating a Deployment? Defining a Service? Setting a policy? It’s all Configuration as Data, and Kubernetes users have been in on the secret for the past six years.

Want to see what we mean? Here’s a simple Kubernetes example…
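The original post shows a manifest at this point. As a minimal sketch, a Knative Service manifest along these lines matches the description below (the service name and container image are hypothetical):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/hello:v1  # hypothetical image
```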

In just 10 lines of YAML, you can define a Service that runs a specific version of your application, set up the networking (a route, ingress, Service, and load balancer), and automatically scale up and down based on traffic.

How does Configuration as Data work? A set of controllers, coordinating through the Kubernetes API server, is responsible for ensuring that the live infrastructure state matches the declarative state you express. For example, the Kubernetes service controller might ensure that a load balancer and Service proxy are created, that the corresponding Pods are connected to the proxy, and that all necessary configuration is set up and maintained to achieve your declared intent. The controller maintains that configured state indefinitely, until you explicitly update or delete the desired state.

What’s less well known is that the Kubernetes Resource Model (KRM) that powers containerized applications can manage non-Kubernetes resources including other infrastructure, platform, and application services. For example, you can use the Kubernetes Resource Model to deploy and manage cloud databases, storage buckets, networks, and much more. Some Google Cloud customers also manage their applications and services using Kubernetes controllers that they developed in-house with open-source tools.

How do you start leveraging the KRM for managing Google Cloud resources? Last year, Google Cloud released Config Connector, which provides built-in controllers for Google Cloud resources. Config Connector lets you manage your Google Cloud infrastructure the same way you manage your Kubernetes applications—by defining your infrastructure configurations as data—reducing the complexity and cognitive load for your entire team.

Following our service example above, let’s say we want to deploy a Google Cloud Redis instance as a backing memory store for our service. We can use KRM by creating a simple YAML representation that is consistent with the rest of our application:
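A minimal sketch of such a manifest, using Config Connector’s `RedisInstance` kind (the instance name, region, and sizing here are assumptions):

```yaml
apiVersion: redis.cnrm.cloud.google.com/v1beta1
kind: RedisInstance
metadata:
  name: hello-redis  # hypothetical instance name
spec:
  region: us-central1
  tier: BASIC
  memorySizeGb: 1
```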

We can create the Redis instance via KRM and Config Connector:
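Applying it works like any other Kubernetes resource (the file name is assumed):

```shell
# Submit the declared state; Config Connector's controller then
# creates the Cloud Memorystore instance to match it.
kubectl apply -f redis-instance.yaml

# Check the resource until the controller reports it ready
kubectl get redisinstance hello-redis
```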

Where CaD meets IaC

Does that mean you no longer need traditional IaC tools like Terraform? Not necessarily. There will always be a need to orchestrate configuration between systems, for example, collecting service IPs and updating external DNS sources. That’s where those tools come in. The benefit when managing Google Cloud resources with Config Connector is that the contract will be much stronger. This model also offers a better integration story and cleanly separates the responsibility for configuring a resource and managing it. Here’s an example with Terraform:
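A sketch of that setup follows; the project ID and resource names are assumptions, and the `kubernetes_manifest` resource requires a recent version of the Terraform Kubernetes provider:

```hcl
provider "google" {
  project = "my-project" # hypothetical project ID
  region  = "us-central1"
}

# Created directly via the Google Cloud APIs by the Google provider
resource "google_compute_network" "demo_network" {
  name = "demo-network"
}

# Submitted as KRM to the Kubernetes API server, where Config
# Connector's controller creates and actively manages the instance
resource "kubernetes_manifest" "redis" {
  manifest = {
    apiVersion = "redis.cnrm.cloud.google.com/v1beta1"
    kind       = "RedisInstance"
    metadata = {
      name      = "hello-redis"
      namespace = "default"
    }
    spec = {
      region       = "us-central1"
      tier         = "BASIC"
      memorySizeGb = 1
      authorizedNetworkRef = {
        external = google_compute_network.demo_network.self_link
      }
    }
  }
}
```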

Terraform is used to provision a Google Cloud network named “demo_network” via the Terraform provider for Google and to create a Google Cloud Redis instance connected to it via the Terraform Kubernetes provider and KRM. On the surface, the contract between Terraform and the two providers looks the same, but beneath the surface lies a different story.

The Terraform provider for Google calls the Google Cloud APIs directly to create the networking resources. If you wanted to use another configuration tool, you would need to build a new set of Google Cloud API integrations. Furthermore, you have to jump back and forth between the Kubernetes and Terraform interfaces to view the resources each one created.

On the other hand, the Kubernetes provider is backed by a controller running in Kubernetes that presents a KRM interface for configuring Redis instances. Once Terraform submits configuration in the form of data to a Kubernetes API server, the resource is created and is actively managed by Kubernetes. Configuration as Data establishes a strong contract between tools and interfaces for consistent results. You’re able to remain in the Kubernetes interface to manage resources and applications together. The Kubernetes API server continuously reconciles the live Google Cloud state with the desired state you established in Terraform with KRM. Configuration as Data complements Terraform with consistency between Terraform executions that may be hours, days, or weeks apart.

To make a long story short, Configuration as Data is an exciting approach to infrastructure and application management that enables fluid interaction between native resources and configuration tools such as IaC and command-line interfaces. It’s also an area that’s moving quickly. Stay tuned for more about Configuration as Data coming soon. In the meantime, try Config Connector with your Google Cloud projects and share your feedback about what you did, what worked, and what new features you’d like to see.



Source: Google Cloud Blog