Intro

This guide covers best practices for deploying Houston in your Kubernetes cluster so you can collect data, route traffic, and move faster with your code.

Prerequisites

This guide assumes you’ve looked over our Kubernetes Guide for Houston, and have a Kubernetes environment configured. You may also find our Customizing tbncollect guide useful as you configure your environment.

CI/CD to Houston

Any CI or CD tool works well with Houston. Ultimately, your CD infrastructure should set up new pods with the labels described below, and Houston takes it from there. If you'd like to see how Houston works with CircleCI, as an example, check out this guide.

Also, be sure to read the reasons to decouple release from deployment to see how well CI/CD and Houston pair.
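As a rough sketch of what that CD step might look like (the file path, placeholder version value, and the commented-out apply step are illustrative assumptions, not Houston requirements):

```shell
# Stamp the current build's identifier into the pod labels before deploying.
# In a real CI job, VERSION might come from: git rev-parse --short HEAD
VERSION="abc1234"
cat > /tmp/houston-labels.yaml <<EOF
spec:
  template:
    metadata:
      labels:
        stage: prod
        version: ${VERSION}
EOF
# Merge these labels into your full manifest, then deploy, e.g.:
# kubectl apply -f deployment.yaml
cat /tmp/houston-labels.yaml
```

Because the version label tracks the build identifier, each deploy is visible to Houston as a distinct release.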

Proxy

We recommend a single ingress tbnproxy deployment per Kubernetes cluster, with three or more replicas. While a single proxy replica works, three replicas provide N+2 redundancy, allowing for sufficient capacity and greater resilience during individual node failures.
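A minimal deployment fragment reflecting this recommendation (the deployment name is illustrative, and the pod template is elided):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tbnproxy
spec:
  # Three replicas give the ingress proxy N+2 redundancy.
  replicas: 3
```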

Exposing Your Proxy

To expose tbnproxy to the Internet, run the following, which should work with most cloud providers:

kubectl expose deployment <foo> --type=LoadBalancer

Once the service is created, kubectl get services will show the external IP your cloud provider assigns.

Zones

A Zone corresponds to a single routable IP-space. In the case of Kubernetes, Zones typically map to a full cluster, though they can also map to a single namespace. A collector is needed for each Zone.

Collector

We recommend a single replica of the collector to minimize the resource footprint. It's fine to run more than one replica in the deployment, but because brief outages during pod or node restart will not meaningfully affect system performance, there's no particular advantage.

Labels

To determine a pod's association with a service, tbncollect looks for a cluster label on each pod. The label key is set by the TBNCOLLECT_KUBERNETES_CLUSTER_LABEL=<foo> environment variable on the collector (without an override, the default value is tbn_cluster), and the label itself goes in spec.template.metadata.labels of your yaml configuration. The collector will collect data from any pod carrying that label and ignore the rest of your un-labeled, or differently-labeled, pods.

There are good reasons to use a cluster label that differs from labels already in use in your Kubernetes environment, as you may not want tbncollect to collect every pod. See this guide for more information on changing this label, as well as other collection and query changes that apply when using Houston with Kubernetes.
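For example, a sketch of overriding the label key on the tbncollect container (the image placeholder and the houston_cluster value are illustrative):

```yaml
containers:
- name: tbncollect
  image: <tbncollect_image>
  env:
  # tbncollect will now look for pods labeled with the key houston_cluster
  # instead of the default tbn_cluster.
  - name: TBNCOLLECT_KUBERNETES_CLUSTER_LABEL
    value: houston_cluster
```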

In addition to a cluster label, the following labels are required on each pod to participate in the Houston release workflow:

  • stage represents the stage of development (e.g., dev or prod). Only pods labeled prod will be collected for the release workflow. Located in spec.template.metadata.labels of your yaml config file.
  • version should map 1:1 to an identifier in your release tracking system, e.g., a version control tag, branch, or SHA. Located in spec.template.metadata.labels of your yaml config file.

Here is an example file from the Kubernetes guide to illustrate these labels:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: examplepod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        # This is the cluster label with the default value.
        tbn_cluster: <foo>
        # This is the stage label, set to prod in order to apply to the release
        # workflow.
        stage: prod
        # This is the version label set to a relevant item.
        version: <relevant_tag_branch_or_SHA>
    spec:
      containers:
      - image: <example_image>
        ports:
        - containerPort: 8080
          # This is where you name your port; the name should match the value
          # of `TBNCOLLECT_KUBERNETES_PORT_NAME`
          name: http
          protocol: TCP

Port labels

Only a single port per pod is collected: the port whose name matches the value of the TBNCOLLECT_KUBERNETES_PORT_NAME environment variable, which defaults to http. All other ports are ignored. Port names are located in spec.template.spec.containers.ports of your yaml config file.
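For example, assuming the default port name, only the first port below would be collected (the port numbers and the metrics port name are illustrative):

```yaml
containers:
- image: <example_image>
  ports:
  - containerPort: 8080
    name: http      # matches TBNCOLLECT_KUBERNETES_PORT_NAME; collected
    protocol: TCP
  - containerPort: 9090
    name: metrics   # name does not match; ignored by tbncollect
    protocol: TCP
```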

Conclusion

Now that you've seen what best practices look like for running Houston with Kubernetes, you can try it with your own Kubernetes services. If you have questions or run into any trouble, please drop us a line; we're here to help.