How to Properly Deploy Linkerd Service Mesh to Kubernetes

This article covers the steps required to properly deploy the Linkerd service mesh to Kubernetes. The approach described here works for both production and testing environments.

Why Deploy a Service Mesh to Kubernetes?

Containerized cloud native applications are often built with a distributed microservices architecture. Kubernetes is an increasingly popular choice and is now considered an industry standard for orchestrating these containerized applications.

As more microservices are added to this distributed architecture, new challenges emerge within a Kubernetes cluster. These challenges are mainly related to authentication and authorization between microservices, load balancing, and encryption.

A Kubernetes service mesh is a dedicated infrastructure layer designed to manage, observe, and control communication between microservices within a Kubernetes cluster. The main goal of a service mesh is to improve overall reliability, security, and observability of the microservices that make up a complex, distributed application.

Linkerd Service Mesh

Linkerd is an open source service mesh heavily used in production Kubernetes environments. Compared to other service meshes available today, Linkerd is notably faster and smaller. Considering that a Kubernetes service mesh works by deploying a sidecar container next to your own containers in the same pod, being faster and smaller are two strong selling points.

The Linkerd service mesh adds observability, reliability, and security to Kubernetes applications without code changes. For example, Linkerd can monitor and report per-service success rates and latencies, automatically retry failed requests, and encrypt and validate connections between services. Best of all, every one of these capabilities works without requiring any modification of the application itself!

How Does The Linkerd Service Mesh Work?

In your Kubernetes cluster, the Linkerd service mesh works by installing a set of ultralight, transparent “micro-proxies” next to each microservice instance as a sidecar container. These proxies automatically handle all traffic to and from the microservice.

The Linkerd service mesh has two basic components: a control plane and a data plane. Once Linkerd’s control plane has been deployed to your Kubernetes cluster, you add the data plane to your microservices (called “meshing” or “injecting” your microservices) and voila! Linkerd’s service mesh magic happens.
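To make this concrete, here is a minimal sketch of what a “meshed” workload looks like at the YAML level, assuming a hypothetical Deployment named my-app. The only Linkerd-specific piece is the linkerd.io/inject annotation on the pod template, which tells Linkerd to add its sidecar proxy.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                       # hypothetical example workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        linkerd.io/inject: enabled   # Linkerd injects its proxy into these pods
    spec:
      containers:
      - name: my-app
        image: my-app:latest         # placeholder image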

How to Deploy Linkerd Service Mesh to Kubernetes

Prerequisites

To follow along, you’ll need a running Kubernetes cluster with kubectl access, Helm, the linkerd CLI, the step CLI, and cert-manager installed in the cluster (the last two are used in Step 3 for automatic certificate rotation).

Step 1: Add The Linkerd Helm Repo

If you don’t already have it, add the Linkerd Helm repo so you can pull the most up-to-date version of Linkerd’s Helm chart.

helm repo add linkerd https://helm.linkerd.io/stable
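
After adding the repo, refresh your local chart index so Helm sees the latest chart versions (the search command is just a quick sanity check):

helm repo update
helm search repo linkerd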

Step 2: Deploy Linkerd’s CustomResourceDefinitions

CustomResourceDefinitions (CRDs) are extensions of the Kubernetes API that are not necessarily available in a default Kubernetes installation. These CRDs are required for Linkerd to operate in our Kubernetes cluster. The command below also creates a new namespace in our cluster called “linkerd” to house all of our Linkerd services.

helm install linkerd-crds linkerd/linkerd-crds -n linkerd --create-namespace
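
To confirm that the CRDs and the new namespace were created, you can run something like:

kubectl get crds | grep linkerd
kubectl get namespace linkerd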

Step 3: Create and Deploy a Long-lasting Root Certificate Via Step For Automatic Issuer Certificate Rotation

Linkerd’s automatic mTLS feature generates TLS certificates for the proxies that are deployed alongside our microservices and automatically rotates them without user intervention. These certificates are derived from a trust anchor, which is shared across clusters, and an issuer certificate, which is specific to the cluster.

While Linkerd automatically rotates the per-proxy TLS certificates, it does not rotate the issuer certificate. To avoid the huge headache of dealing with an expired issuer certificate in production, we can implement auto rotation.

We need two things to make this happen: cert-manager and step.
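
If you don’t have them installed yet, one common way to get both is sketched below; it assumes Helm for cert-manager and Homebrew for the step CLI, so adjust for your own environment.

# Install cert-manager into its own namespace (assumes Helm is available)
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

# Install the step CLI (assumes Homebrew; see the step docs for other platforms)
brew install step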

Create a long-lasting root certificate with step, setting its expiration many years in the future.

step certificate create root.linkerd.cluster.local ca.crt ca.key --profile root-ca --no-password --insecure --not-after="2034-01-14T16:00:00+00:00"
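
If you’d like to double-check the root certificate before using it, the step CLI can inspect it:

step certificate inspect ca.crt --short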

Now with those two files (ca.crt and ca.key), create a Kubernetes Secret.

kubectl create secret tls \
    linkerd-trust-anchor \
    --cert=ca.crt \
    --key=ca.key \
    --namespace=linkerd

Next, create an Issuer and a Certificate that reference the Secret to automatically generate and rotate new Linkerd issuer certificates every 48 hours.

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: linkerd-trust-anchor
  namespace: linkerd
spec:
  ca:
    secretName: linkerd-trust-anchor
EOF


kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: linkerd-identity-issuer
  namespace: linkerd
spec:
  secretName: linkerd-identity-issuer
  duration: 48h
  renewBefore: 25h
  issuerRef:
    name: linkerd-trust-anchor
    kind: Issuer
  commonName: identity.linkerd.cluster.local
  dnsNames:
  - identity.linkerd.cluster.local
  isCA: true
  privateKey:
    algorithm: ECDSA
  usages:
  - cert sign
  - crl sign
  - server auth
  - client auth
EOF
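
Once cert-manager has processed these resources, the new issuer certificate should reach a Ready state. A quick way to verify:

kubectl get certificate -n linkerd
kubectl describe certificate linkerd-identity-issuer -n linkerd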

Step 4: Deploy The Linkerd Control Plane

With the root certificate (almost) never expiring and the issuer certificates on an automatic rotation, deploy the Linkerd control plane with Helm.

helm install linkerd-control-plane -n linkerd \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set identity.issuer.scheme=kubernetes.io/tls \
  linkerd/linkerd-control-plane

Make sure to execute this command in the same working directory as the ca.crt file that you generated previously.

Validate that everything is fully operational with the following command; it will surface any issues with your Linkerd control plane.

linkerd check
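
In addition to linkerd check, you can watch the control plane pods come up directly:

kubectl get pods -n linkerd -w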

Step 5: Deploy Linkerd Viz

After all of the Linkerd pods are successfully spun up in the Linkerd namespace, proceed with the Linkerd Viz deployment with Helm. Linkerd Viz is a monitoring application based on Prometheus and Grafana, auto-configured to collect metrics from Linkerd. This command will also create another namespace named “linkerd-viz”.

helm install linkerd-viz -n linkerd-viz --create-namespace linkerd/linkerd-viz
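
As before, give the pods a minute to start and verify that they are running before moving on:

kubectl get pods -n linkerd-viz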

Step 6: Linkerd Injection

Upon completing your Linkerd deployment, you’ll notice that nothing about the cluster has changed and your microservices are running exactly as before. As previously mentioned, your services need to be added to the service mesh manually by injecting Linkerd’s data plane proxy into their pods as a sidecar container.

Here is a one-liner that fetches the YAML manifest for an existing Kubernetes Deployment, injects Linkerd’s proxy (which amounts to adding a single annotation to the Deployment’s pod template), and applies the updated YAML.

kubectl get deployment [DEPLOYMENT-NAME] -o yaml | linkerd inject - | kubectl apply -f -

The Pods managed by the Kubernetes Deployment will then restart with an additional container. That additional container is the Linkerd proxy.
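
You can confirm the injection worked by checking that the pods now report an extra ready container (2/2 instead of 1/1); the namespace here is a placeholder:

kubectl get pods -n [NAMESPACE]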

Step 7: Validate in The Linkerd Viz Dashboard

Linkerd Viz provides a very useful dashboard for you to gather information on your services now in the Linkerd service mesh. If you run the following command, the Linkerd Viz dashboard will pop up in your web browser and you can explore all of the glory that the Linkerd service mesh provides!

linkerd viz dashboard
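
If you prefer the command line, much of the same information is available through the viz CLI as well; for example, per-deployment success rates and latencies (the namespace is a placeholder):

linkerd viz stat deployments -n [NAMESPACE]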

Wrapping Up

In this article we walked through the steps to properly deploy the Linkerd service mesh to Kubernetes. Deployed this way, the control plane can run for years without manual certificate maintenance, since the root certificate is long-lived and the issuer certificates rotate automatically. Congratulations on bringing better observability, reliability, and security to your Kubernetes cluster!

Thank you so much for reading and I hope you learned something new! If you have any questions or suggestions, please add them in the comments section below.
