My CoreOS / Kubernetes Journey: etcd

I need to learn and deploy Kubernetes quickly. I have a good bit of experience with Docker and docker-compose, have read of the wonders of Kubernetes, and have of course watched Kelsey Hightower's Kubernetes videos. But I have never actually used it. So here is how I did that, part the second.

Certificate Authority

Right off the bat, I have private IP addresses that are not truly private; they are shared by everyone in Linode's Atlanta Datacenter. I could set up the firewall to only accept connections from my other nodes, and I may indeed add that in later. For now I want all of my internal traffic to be encrypted, so we are going to need a Certificate Authority.

I spent the better part of a day fighting with OpenSSL and etcd. I kind of assumed that, much like in SAML2, the identity of the certificate would not matter; only the validation of the signature by the CA certificate. Boy was I ever wrong. etcd cares where your connections come from and that the remote hostname or IP address indeed matches what is set in the Common Name or Subject Alternative Name fields of the certificate.
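If you want to see exactly which names a certificate carries before etcd gets a chance to reject it, plain OpenSSL can dump them. The file name here matches the kube0 server certificate I end up with after the renaming later in this post:

openssl x509 -in kube0-etcd.crt -noout -text | grep -A1 'Subject Alternative Name'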

I prefer to use old, tried-and-true tools as much as possible. But I have only ever set a Subject Alternative Name with OpenSSL once, and the official guide shows how to use CloudFlare's cfssl to generate the etcd certificates, so that's what I did for now.

# TODO: all of this with OpenSSL instead
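For whenever I get to that TODO, here is roughly what the SAN handling looks like with plain OpenSSL. This is an untested sketch, not what I actually ran; the hostnames mirror the kube0 cfssl config below, and the CA key name (ca-etcd.key) is only illustrative, since the real files come out of cfssl.

# Sketch only: a CSR config with the same SANs as kube0.json below
cat > kube0-openssl.cnf <<'EOF'
[req]
prompt = no
distinguished_name = dn
req_extensions = v3_req
[dn]
CN = kube0.jefftickle.com
[v3_req]
subjectAltName = IP:192.168.227.219, DNS:kube0.jefftickle.com, DNS:kube0.local, DNS:kube0
EOF

# Key and CSR, then sign with the CA, carrying the SANs through to the cert
openssl req -new -newkey rsa:2048 -nodes -keyout kube0-etcd.key \
  -config kube0-openssl.cnf -out kube0.csr
openssl x509 -req -in kube0.csr -CA ca-etcd.crt -CAkey ca-etcd.key -CAcreateserial \
  -days 1825 -extensions v3_req -extfile kube0-openssl.cnf -out kube0-etcd.crt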

The official guide is a better howto than this page and you should go read it for this step. Since I followed the guide closely, this page merely consists of the contents of my config files and keys. LOL, no, I did not actually post my keys, j/k.

ca-config.json

{
    "signing": {
        "default": {
            "expiry": "168h"
        },
        "profiles": {
            "server": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "peer": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            },
            "client": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            }
        }
    }
}

ca-csr.json

{
    "CN": "Jeff Tickle Labs Certificate Authority",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "US",
            "ST": "NC",
            "L": "Todd",
            "O": "jefftickle.com",
            "OU": "Research and Development"
        }
    ]
}
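With ca-config.json and ca-csr.json in hand, creating the CA itself is a single cfssl command, same as in the official guide:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# writes ca.pem, ca-key.pem, and ca.csr to the current directory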

kube0.json

{
    "CN": "kube0.jefftickle.com",
    "hosts": [
        "192.168.227.219",
        "kube0.jefftickle.com",
        "kube0.local",
        "kube0"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "US",
            "ST": "NC",
            "L": "Todd",
            "O": "jefftickle.com",
            "OU": "Research and Development"
        }
    ]
}

kube1.json

{
    "CN": "kube1.jefftickle.com",
    "hosts": [
        "192.168.228.91",
        "kube1.jefftickle.com",
        "kube1.local",
        "kube1"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "US",
            "ST": "NC",
            "L": "Todd",
            "O": "jefftickle.com",
            "OU": "Research and Development"
        }
    ]
}

kube2.json

{
    "CN": "kube2.jefftickle.com",
    "hosts": [
        "192.168.225.159",
        "kube2.jefftickle.com",
        "kube2.local",
        "kube2"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "US",
            "ST": "NC",
            "L": "Todd",
            "O": "jefftickle.com",
            "OU": "Research and Development"
        }
    ]
}
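Feeding these through cfssl signs one certificate per node. Since the same certificate serves as both the server cert and the peer cert in the systemd drop-in below, it needs both server auth and client auth, which is exactly the peer profile. The client certificate used later with etcdctl comes from the client profile and a small client.json CSR, roughly a node CSR minus the hosts list (not shown here; see the official guide):

for N in 0 1 2; do
  cfssl gencert \
    -ca=ca.pem -ca-key=ca-key.pem \
    -config=ca-config.json -profile=peer \
    kube${N}.json | cfssljson -bare kube${N}
done

# Client cert for etcdctl later on; client.json is a minimal CSR along the
# lines of the node ones, minus the hosts list (see the official guide)
cfssl gencert \
  -ca=ca.pem -ca-key=ca-key.pem \
  -config=ca-config.json -profile=client \
  client.json | cfssljson -bare client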

Manually Configure CoreOS for etcd

Normally, an Ignition config would generate the systemd drop-in for etcd. Since we can't use Ignition here, we need to create it manually.

Here's a template for the drop-in, to be filled in with envsubst:

[Service]
# The stock etcd-member.service already defines ExecStart, so clear it before overriding
ExecStart=
ExecStart=/usr/lib/coreos/etcd-wrapper \
  --name="kube${N}" \
  --initial-advertise-peer-urls="https://${MY_IP}:2380" \
  --listen-peer-urls="https://${MY_IP}:2380" \
  --listen-client-urls="https://${MY_IP}:2379,https://127.0.0.1:2379" \
  --advertise-client-urls="https://${MY_IP}:2379" \
  --initial-cluster-token="etcd-jefftickle-${N}" \
  --initial-cluster="kube0=https://192.168.227.219:2380,kube1=https://192.168.228.91:2380,kube2=https://192.168.225.15:2380"
  --initial-cluster-state="new" \
  --client-cert-auth \
  --trusted-ca-file="/home/core/ca-etcd.pem" \
  --cert-file="/home/core/kube${N}-etcd.crt" \
  --key-file="/home/core/kube${N}-etcd.key" \
  --peer-client-cert-auth
  --peer-trusted-ca-file="/home/core/ca-etcd.crt" \
  --peer-cert-file="/home/core/kube${N}-etcd.crt" \
  --peer-key-file="/home/core/kube${N}-etcd.key"

Here is a script that renders that template into a drop-in for each node:

#!/usr/bin/env bash
# Render the envsubst template above once per node.

export KUBE0_IP=192.168.227.219
export KUBE1_IP=192.168.228.91
export KUBE2_IP=192.168.225.159
export MIN=0
export MAX=2

for N in $(seq $MIN $MAX); do
  # Indirect expansion: look up KUBE${N}_IP and export it as MY_IP for envsubst
  REF=KUBE${N}_IP
  export N
  export MY_IP=${!REF}
  cat 20-clct-etcd-member.conf | envsubst > kube${N}-20-clct-etcd-member.conf
done
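Running the script (any file name works; it just expects the template above to be saved as 20-clct-etcd-member.conf in the same directory) produces one rendered drop-in per node:

$ bash render-dropins.sh    # hypothetical name for the script above
$ ls kube*-20-clct-etcd-member.conf
kube0-20-clct-etcd-member.conf  kube1-20-clct-etcd-member.conf  kube2-20-clct-etcd-member.conf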

Putting It All Together

Because in reality I had the systemd part figured out long before I had the Certificate Authority part figured out, I had to rename all of my certificates along these lines:

cfssl output    renamed to
ca.pem          ca-etcd.crt
kube0.pem       kube0-etcd.crt
kube0-key.pem   kube0-etcd.key
kube1.pem       kube1-etcd.crt
kube1-key.pem   kube1-etcd.key
kube2.pem       kube2-etcd.crt
kube2-key.pem   kube2-etcd.key

I may go back and clean this up later.
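If you would rather not do the renames by hand, a small loop covers it; this just mirrors the table above:

cp ca.pem ca-etcd.crt
for N in 0 1 2; do
  cp kube${N}.pem     kube${N}-etcd.crt
  cp kube${N}-key.pem kube${N}-etcd.key
done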

To copy this all up to the servers, I did not do any fancy scripting. I copied each kubeN-20-clct-etcd-member.conf up to its server manually, along with ca-etcd.crt, kubeN-etcd.crt, and kubeN-etcd.key. Originally, I did the sensible thing and put ca-etcd.crt and kubeN-etcd.crt in /etc/ssl/certs and kubeN-etcd.key into /etc/ssl/private. That is the correct way to do it. Unfortunately, etcd itself runs inside a rkt container and only gets /etc/ssl/certs mounted in from the host, which is why the drop-in above points everything, private key included, at /etc/ssl/certs.

Anyway, to put it all together, repeat this on each node changing the numbers as necessary:

core@kube0 ~ $ sudo mkdir /etc/systemd/system/etcd-member.service.d
core@kube0 ~ $ sudo cp kube0-20-clct-etcd-member.conf \
               /etc/systemd/system/etcd-member.service.d/kube0-20-clct-etcd-member.conf
core@kube0 ~ $ sudo cp ca-etcd.crt kube0-etcd.crt kube0-etcd.key /etc/ssl/certs/
core@kube0 ~ $ sudo systemctl daemon-reload
core@kube0 ~ $ sudo systemctl enable etcd-member
core@kube0 ~ $ sudo systemctl start etcd-member

core@kube0 ~ $ journalctl -xef

If it all works, you should see some messages about the cluster being formed, and you should not see errors.

Once the cluster is started on all three nodes, copy client.pem and client-key.pem (the client-profile certificate generated earlier) up to a node and give this a try:

core@kube0 ~ $ ETCDCTL_API=3 etcdctl \
                   --cacert=/etc/ssl/certs/ca-etcd.crt \
                   --cert=client.pem \
                   --key=client-key.pem \
                   --endpoints=https://192.168.227.219:2379 \
                   endpoint health
https://192.168.227.219:2379 is healthy: successfully committed proposal: took = 21.294666ms
core@kube0 ~ $ ETCDCTL_API=3 etcdctl \
                   --cacert=/etc/ssl/certs/ca-etcd.crt \
                   --cert=client.pem \
                   --key=client-key.pem \
                   --endpoints=https://192.168.228.91:2379 \
                   endpoint health
https://192.168.228.91:2379 is healthy: successfully committed proposal: took = 9.980398ms
core@kube0 ~ $ ETCDCTL_API=3 etcdctl \
                   --cacert=/etc/ssl/certs/ca-etcd.crt \
                   --cert=client.pem \
                   --key=client-key.pem \
                   --endpoints=https://192.168.225.159:2379 \
                   endpoint health
https://192.168.225.159:2379 is healthy: successfully committed proposal: took = 15.370345ms

That is good news! Now, let's try writing, reading, and deleting a key.

core@kube0 ~ $ ETCDCTL_API=3 etcdctl \
                   --cacert=/etc/ssl/certs/ca-etcd.crt \
                   --cert=client.pem \
                   --key=client-key.pem \
                   --endpoints=https://192.168.225.159:2379 \
                   put foo bar
OK
core@kube0 ~ $ ETCDCTL_API=3 etcdctl \
                   --cacert=/etc/ssl/certs/ca-etcd.crt \
                   --cert=client.pem \
                   --key=client-key.pem \
                   --endpoints=https://192.168.225.159:2379 \
                   get foo
foo
bar
core@kube0 ~ $ ETCDCTL_API=3 etcdctl \
                   --cacert=/etc/ssl/certs/ca-etcd.crt \
                   --cert=client.pem \
                   --key=client-key.pem \
                   --endpoints=https://192.168.228.91:2379 \
                   get foo
foo
bar
core@kube0 ~ $ ETCDCTL_API=3 etcdctl \
                   --cacert=/etc/ssl/certs/ca-etcd.crt \
                   --cert=client.pem \
                   --key=client-key.pem \
                   --endpoints=https://192.168.227.219:2379 \
                   get foo
foo
bar
core@kube0 ~ $ ETCDCTL_API=3 etcdctl \
                   --cacert=/etc/ssl/certs/ca-etcd.crt \
                   --cert=client.pem \
                   --key=client-key.pem \
                   --endpoints=https://192.168.227.219:2379 \
                   del foo
1
core@kube0 ~ $ ETCDCTL_API=3 etcdctl \
                   --cacert=/etc/ssl/certs/ca-etcd.crt \
                   --cert=client.pem \
                   --key=client-key.pem \
                   --endpoints=https://192.168.228.91:2379 \
                   get foo
core@kube0 ~ $

Yay!

Next Time

On the next episode we will manually configure and deploy Kubernetes.