My CoreOS / Kubernetes Journey: Linode

I need to learn and deploy Kubernetes quickly. I have a good bit of experience with Docker and docker-compose, and have read of the wonders of Kubernetes and of course watched Kelsey Hightower's Kubernetes videos, but I have never actually used it. So here is how I did that, part the first.

Create some CoreOS Linodes

Create three new Linode 2GBs in the same datacenter (in my case, Atlanta is the closest, not that they really need to be all that physically close). Kubernetes can stretch across datacenters, but that is more complicated and I don't need that use case yet. If you would rather script the node creation, there is a rough linode-cli sketch after this list; here is the web UI way:

  1. Log into Linode Manager
  2. Under the Linode list, click 'Add a Linode'
  3. Select a Linode Instance Type (I went with Linode 2GB)
  4. Select a Location (I went with Atlanta, GA) and click "Add this Linode!"
  5. You now have a new Linode named something like linode11253922. Click it.
  6. Under Dashboard, click Deploy an Image
  7. Select the "CoreOS Container Linux" image
  8. Set Swap Disk to 128MB (you don't need swap for Kubernetes, but Linode doesn't have a "no swap" option and I don't want to deal with resizing disks, so just go with the smallest.)
  9. Update your Deployment Disk Size to the max (51072MB in my case)
  10. "Root Password" is actually going to be the core user's password. Set it to something easy.
  11. Click Deploy
  12. Go to the Settings tab
  13. Change the Linode Label to 'kubeN' where N is the number of this one (I went with kube0, kube1, kube2)
  14. Change Display Group to coreos or kubernetes or something that will group them together on the Linode page
  15. Make sure Lassie (the Linode shutdown watchdog) is enabled; CoreOS does its own updates and reboots, and Lassie will make sure each node comes back up
  16. click Save Changes
  17. Go to the Remote Access tab
  18. Click "Add a Private IP"
  19. Make note of both the Public and the Private IP; we will need those later
  20. Go to step 2 and repeat until you have all three Linodes looking pretty in the Linode Manager
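For what it's worth, the same thing can probably be scripted with the linode-cli rather than clicked through three times. I did it through the web UI, so treat the following as an untested sketch: the region, type, and image IDs are assumptions you should verify with linode-cli regions list, linode-cli linodes types, and linode-cli images list, and it does not reproduce the 128MB-swap / max-disk tweaks from the list above.

#!/usr/bin/env bash
# create-kube-linodes.sh -- untested sketch, see caveats above

REGION=us-southeast            # assumed ID for Atlanta, GA
TYPE=g6-standard-1             # assumed ID for the Linode 2GB plan
IMAGE=linode/containerlinux    # assumed slug for CoreOS Container Linux

for N in 0 1 2; do
  linode-cli linodes create \
    --label "kube$N" \
    --region "$REGION" \
    --type "$TYPE" \
    --image "$IMAGE" \
    --private_ip true \
    --root_pass 'something-easy-we-delete-it-later'
done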

You should now have three Linodes. Next, go to the DNS Manager and give each one a real hostname (the dig loop after this list will tell you when the records are live).

  1. Edit the Domain Zone you want these in (mine is jefftickle.com).
  2. Add a new A/AAAA record
  3. Set the hostname to something like kube0.example.com (I used kube0.jefftickle.com)
  4. Set the IP Address to the Public IP that you noted in step 19 above
  5. Save Changes (you could set a tiny TTL first, but that is really only necessary if you mess something up)
  6. Go to 1 and repeat until you have a hostname for each CoreOS Linode
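Before moving on, you can check from your workstation whether the new records are visible yet. A minimal check, using my hostnames (substitute your own); query a public resolver so you are not fooled by your own cache:

# Print what the world currently sees for each A record.
for HOST in kube0.jefftickle.com kube1.jefftickle.com kube2.jefftickle.com; do
  echo -n "$HOST -> "
  dig +short "$HOST" @8.8.8.8
done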

By default, Linode points reverse DNS at something like li319-28.members.linode.com. That is just one more alternate name you would have to deal with in certificates later, and it is sloppy to leave it that way when we have real DNS names. You will have to wait until the records from the previous section have propagated, but then you can set the reverse DNS hostname for each Linode.

  1. Edit one of the kube Linodes
  2. Go to the Remote Access tab
  3. Under 'Public IPs', click Reverse DNS
  4. Enter the real public domain name for this system (in my case, for kube0, I entered kube0.jefftickle.com)
  5. Click 'Look up'
  6. You should see a green box saying 'Match Found!' If not, you may not have set up DNS correctly in the previous section, or you might not have waited long enough.
  7. Click 'Yes' in the green box.
  8. Wait up to 24 hours, apparently... (the check after this list will tell you when it has actually gone through)
  9. Go to step 1 and repeat for each Linode
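The same trick works for the PTR records, using the public IPs from my notes below; when this prints kubeN names instead of members.linode.com names, the reverse DNS is live:

# Print the current PTR record for each public IP.
for IP in 66.228.62.28 45.79.202.82 23.239.19.243; do
  echo -n "$IP -> "
  dig +short -x "$IP" @8.8.8.8
done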

In my case, here are the notes from the created nodes.

linode  reverse dns hostname    private ip        public ip
kube0   kube0.jefftickle.com    192.168.227.219   66.228.62.28
kube1   kube1.jefftickle.com    192.168.228.91    45.79.202.82
kube2   kube2.jefftickle.com    192.168.225.159   23.239.19.243

Now you have three Linodes with nice official DNS domain names. It may take up to 15 minutes for the DNS manager to tell the world about that.

OS Configuration

Linode does not yet have any support for CoreOS Ignition configs. That would be really nice, but it's just not there yet. They may have added it by now, though, so it is worth checking before you start; if they have, the first half of this guide is obsolete.
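For the curious, this is roughly what an Ignition config would let us declare at deploy time: the same "add an SSH key for core" that we are about to do by hand over SSH. A minimal sketch of the Container Linux Ignition format, purely for illustration, since there is currently nowhere on Linode to feed it in:

# Hypothetical: a minimal Ignition config adding an SSH key for the 'core' user.
cat > ignition.json <<'IGNITION'
{
  "ignition": { "version": "2.2.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": [ "ssh-rsa AAAA... your public key here" ]
      }
    ]
  }
}
IGNITION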

We now want to push a public key to each new CoreOS system for the core user, and disable SSH password login. We also want to set up the Linode static private IP address manually, and restart networking.

I made a handy script to do this for me. If you choose to use it, be sure to change the hostnames and private IPs in the HOSTS variable. Since Linode private IPs always start with 192.168, I only put the last two octets in HOSTS. On the first run through you will also need to accept each host's SSH key and enter your password, once per node. The script works for any number of nodes; just follow the pattern in the HOSTS variable.

#!/usr/bin/env bash
# bootstrap-coreos.sh

HOSTS="kube0.jefftickle.com:227.219 kube1.jefftickle.com:228.91 kube2.jefftickle.com:225.159"

for DATA in $HOSTS; do
  HOST=$(echo $DATA | cut -d : -f 1)
  PRVT=$(echo $DATA | cut -d : -f 2)
  REMOTE="core@$HOST"

  # Copy SSH Key
  ssh-copy-id "$REMOTE"

  # Disable password for user 'core'
  ssh $REMOTE sudo passwd -d core

  # Configure systemd-networkd with the Linode private IP (skip if already configured)
  if ! ssh $REMOTE grep -q Address /etc/systemd/network/05-eth0.network; then
    ssh $REMOTE sudo tee -a /etc/systemd/network/05-eth0.network <<NETWORK

[Address]
Label=eth0:0
Address=192.168.$PRVT/17
NETWORK
  fi

  # Restart networking to apply the systemd change
  ssh $REMOTE sudo systemctl restart systemd-networkd
done
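After the script runs it is worth confirming everything took: key-only SSH should work without a password prompt, and eth0 should now carry the private address. A quick check along these lines (hostnames are mine):

# Fail if password auth would still be needed, then show the private address.
for HOST in kube0.jefftickle.com kube1.jefftickle.com kube2.jefftickle.com; do
  echo "== $HOST =="
  ssh -o PasswordAuthentication=no "core@$HOST" \
    'hostname && ip -4 addr show eth0 | grep 192.168'
done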

Create Cluster Start/Stop Scripts

At this point I had to go to bed... but while shutting down all the systems, I got annoyed, since I will be starting and stopping this cluster frequently over the next week. After all, I have to stop the cluster to save money on that hourly billing! So I made some start/stop scripts. Originally I had planned to use Ansible for all of this cluster management, but CoreOS does not ship with Python and I want to change the base system as little as possible.

If you choose to use these scripts, REMOTE should be set to your LISH user@host, and HOSTS should be the Linode names in your cluster (which could differ from the hostnames if you are a crazy person).

Cluster Start Script

#!/usr/bin/env bash
# start-kube-cluster.sh

REMOTE=user@lish-atlanta.linode.com
HOSTS="kube0 kube1 kube2"

for HOST in $HOSTS; do

  # Get status from the LISH gateway
  STATUS=$(ssh -t $REMOTE $HOST status 2>/dev/null)

  if echo "$STATUS" | grep -q 'Powered Off'; then
    # Powered off: boot it
    echo "Starting $HOST..."
    ssh -t $REMOTE $HOST boot
  elif echo "$STATUS" | grep -q 'Running'; then
    # Already running: nothing to do
    echo "$HOST is already running"
  else
    # Anything else is unexpected; leave it alone
    echo "Skipping $HOST which is in state: $STATUS"
  fi
done

Cluster Stop Script

#!/usr/bin/env bash
# stop-kube-cluster.sh

REMOTE=user@lish-atlanta.linode.com
HOSTS="kube0 kube1 kube2"

for HOST in $HOSTS; do

  # Get status from the LISH gateway
  STATUS=$(ssh -t $REMOTE $HOST status 2>/dev/null)

  if echo "$STATUS" | grep -q 'Running'; then
    # Running: shut it down
    echo "Stopping $HOST..."
    ssh -t $REMOTE $HOST shutdown > /dev/null 2>&1
  elif echo "$STATUS" | grep -q 'Powered Off'; then
    # Already powered off: nothing to do
    echo "$HOST is already powered off"
  else
    # Anything else is unexpected; leave it alone
    echo "Skipping $HOST which is in state: $STATUS"
  fi
done
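And since the whole point is not paying for idle hours, the stop script is a natural candidate for cron on whatever machine holds the LISH key (assuming that machine can reach LISH non-interactively). The path and schedule here are just placeholders; this would shut the cluster down at 1 AM every night:

# crontab -e on the machine that holds your LISH SSH key
# m h dom mon dow command
0 1 * * * $HOME/bin/stop-kube-cluster.sh >> $HOME/stop-kube-cluster.log 2>&1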

Next Time

On the next episode we will manually configure and deploy etcd.