My CoreOS / Kubernetes Journey: Linode
I need to learn and deploy Kubernetes quickly. I have a good bit of experience with Docker and docker-compose, have read of the wonders of Kubernetes, and of course watched Kelsey Hightower's Kubernetes videos. But I have never actually used it. So here is how I did that, part the first.
- CoreOS/Kubernetes Part 0: Linode
- CoreOS/Kubernetes Part 1: etcd
- CoreOS/Kubernetes Part 2: Kubernetes
Create some CoreOS Linodes
Create three new Linode 2GBs in the same datacenter (in my case, Atlanta is the closest, not that they really need to be all that physically close). Kubernetes can stretch across datacenters but that is more complicated and I don't need that use-case yet.
- Log into Linode Manager
- Under the Linode list, click 'Add a Linode'
- Select a Linode Instance Type (I went with Linode 2GB)
- Select a Location (I went with Atlanta, GA) and click "Add this Linode!"
- You now have a new Linode named something like linode11253922. Click it.
- Under Dashboard, click Deploy an Image
- Select the "CoreOS Container Linux" image
- Set Swap Disk to 128MB (you don't need swap for Kubernetes, but Linode doesn't have a "no swap" option and I don't want to deal with resizing disks, so just go with the smallest.)
- Update your Deployment Disk Size to the max (51072MB in my case)
- "Root Password" is actually going to be the core user's password. Set it to something easy.
- Click Deploy
- Go to the Settings tab
- Change the Linode Label to 'kubeN' where N is the number of this one (I went with kube0, kube1, kube2)
- Change Display Group to coreos or kubernetes or something that will group them together on the Linode page
- Make sure Lassie is enabled; CoreOS does its own updates and reboots, and Lassie will make sure it comes back up
- click Save Changes
- Go to the Remote Access tab
- Click "Add a Private IP"
- Make note of both the Public and the Private IP; we will need those later
- Repeat the steps above until you have all three Linodes looking pretty in the Linode Manager
You should now have three Linodes. Now, go to the DNS Manager.
- Edit the Domain Zone you want these in (mine is jefftickle.com).
- Add a new A/AAAA record
- Set the hostname to something like kube0.example.com (I used kube0.jefftickle.com)
- Set the IP Address to the Public IP that you noted from the Remote Access tab earlier
- Save Changes (unless you want a tiny TTL but that's really not necessary unless you mess up)
- Repeat the steps above until you have a hostname for each CoreOS Linode
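The records you just created amount to a zone fragment like the following. The IPs here are hypothetical (from the 203.0.113.0/24 documentation range); substitute the public IPs you noted from the Remote Access tab:

```
kube0.jefftickle.com.  86400  IN  A  203.0.113.10
kube1.jefftickle.com.  86400  IN  A  203.0.113.11
kube2.jefftickle.com.  86400  IN  A  203.0.113.12
```

You can confirm each record has gone live with `dig +short kube0.jefftickle.com` and checking that it prints the expected public IP.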
By default, Linode sets your reverse DNS to something like li319-28.members.linode.com. This is just one more alternate name you will have to deal with in certificates later, and it's incorrect DNS to leave it this way since we will be using real DNS. Once the forward DNS from the previous section has propagated, you can set the reverse DNS hostname for each Linode.
- Edit one of the kube Linodes
- Go to the Remote Access tab
- Under 'Public IPs', click Reverse DNS
- Enter the real public domain name for this system (in my case, for kube0, I entered kube0.jefftickle.com)
- Click 'Look up'
- You should see a green box saying 'Match Found!' If not, you may not have set up DNS correctly in the previous section, or you might not have waited long enough.
- Click 'Yes' in the green box.
- Wait 24 hours, apparently...
- Repeat the steps above for each Linode
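Behind the scenes, Linode publishes a PTR record in in-addr.arpa for each public IP. With a hypothetical public IP of 203.0.113.10, the resulting record looks like:

```
10.113.0.203.in-addr.arpa.  86400  IN  PTR  kube0.jefftickle.com.
```

Once it goes live you can verify it from your workstation with `dig -x 203.0.113.10 +short`, which should print the hostname.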
In my case, here are the notes from the created nodes.
| linode | reverse dns hostname | private ip | public ip |
|--------|----------------------|------------|-----------|
| kube0  | kube0.jefftickle.com | 192.168.227.219 | |
| kube1  | kube1.jefftickle.com | 192.168.228.91  | |
| kube2  | kube2.jefftickle.com | 192.168.225.159 | |
Now you have three Linodes with nice official DNS domain names. It may take up to 15 minutes for the DNS manager to tell the world about that.
Linode does not yet have any support for CoreOS Ignition Configs. That would be really nice, but it's just not there yet. They may have added it by now, though, so check before following along; if they have, it makes half of this guide obsolete.
We now want to push a public key to each new CoreOS system for the core user, and disable SSH password login. We also want to set up the Linode static private IP address manually, and restart networking.
I made a handy script to do this for me. If you choose to use this script, be sure to change the hostnames and private IPs in the HOSTS variable. Since Linode private IPs always start with 192.168, I only put the last two octets in the HOSTS variable. On the first run through, you will need to accept host SSH keys and enter your password once per node. This script will work on any number of nodes; just follow the pattern in the HOSTS variable.
```bash
#!/usr/bin/env bash
# bootstrap-coreos.sh

HOSTS="kube0.jefftickle.com:227.219 kube1.jefftickle.com:228.91 kube2.jefftickle.com:225.159"

for DATA in $HOSTS; do
    HOST=$(echo "$DATA" | cut -d : -f 1)
    PRVT=$(echo "$DATA" | cut -d : -f 2)
    REMOTE="core@$HOST"

    # Copy SSH Key
    ssh-copy-id "$REMOTE"

    # Disable password for user 'core'
    ssh "$REMOTE" sudo passwd -d core

    # Configure systemd-networkd for the Linode private IP, unless already done
    if ! ssh "$REMOTE" cat /etc/systemd/network/05-eth0.network | grep -q Address; then
        ssh "$REMOTE" sudo tee -a /etc/systemd/network/05-eth0.network <<NETWORK
[Address]
Label=eth0:0
Address=192.168.$PRVT/17
NETWORK
    fi

    # Restart networking to apply the systemd change
    ssh "$REMOTE" sudo systemctl restart systemd-networkd
done
```
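To make the HOSTS convention concrete, here is the same host:octets parsing in isolation; it expands each entry into the full 192.168.x.y/17 address the script writes into the network unit:

```shell
#!/usr/bin/env bash
# Expand "hostname:last-two-octets" entries into full private addresses.
HOSTS="kube0.jefftickle.com:227.219 kube1.jefftickle.com:228.91 kube2.jefftickle.com:225.159"

for DATA in $HOSTS; do
    HOST=$(echo "$DATA" | cut -d : -f 1)   # everything before the colon
    PRVT=$(echo "$DATA" | cut -d : -f 2)   # last two octets after the colon
    echo "$HOST -> 192.168.$PRVT/17"
done
```

Running it prints one line per node, e.g. `kube0.jefftickle.com -> 192.168.227.219/17`.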
Create Cluster Start/Stop Scripts
At this point I had to go to bed... but while shutting down all the systems, I got annoyed, since I will be starting and stopping this cluster frequently over the next week. After all, I have to stop the cluster to save money on that hourly billing! So, I made some start/stop scripts. Originally I had planned to use Ansible for all of this cluster management, but CoreOS does not come with Python and I want to change the base system as little as possible.
If you choose to use these scripts, REMOTE should be set to your LISH user@host, and HOSTS should be the Linode names in your cluster (which could differ from the hostnames if you are a crazy person).
Cluster Start Script
```bash
#!/usr/bin/env bash
# start-kube-cluster.sh

REMOTE=firstname.lastname@example.org   # your LISH user@host
HOSTS="kube0 kube1 kube2"

for HOST in $HOSTS; do
    # Get Status
    STATUS=$(ssh -t $REMOTE $HOST status 2>/dev/null)

    # If powered off, power it on
    if echo "$STATUS" | grep -q 'Powered Off'; then
        echo "Starting $HOST..."
        ssh -t $REMOTE $HOST boot
    # If running, just notify
    elif echo "$STATUS" | grep -q 'Running'; then
        echo "$HOST is already running"
    # If something else, weird...
    else
        echo "Skipping $HOST which is in state: $STATUS"
    fi
done
```
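Both the start and stop scripts hinge on the same three-way dispatch over the status string LISH prints. Sketched as a standalone function (the 'Powered Off' and 'Running' strings come from LISH output; anything else is treated as an unknown state to skip):

```shell
#!/usr/bin/env bash
# Decide what the start script should do for a given LISH status string.
action_for_status() {
    case "$1" in
        *"Powered Off"*) echo "boot" ;;
        *"Running"*)     echo "already-running" ;;
        *)               echo "skip" ;;
    esac
}

action_for_status "Powered Off"       # boot
action_for_status "Running"           # already-running
action_for_status "Brb, rebooting"    # skip
```

The stop script is the mirror image: 'Running' triggers a shutdown, 'Powered Off' is a no-op, and everything else is skipped.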
Cluster Stop Script
```bash
#!/usr/bin/env bash
# stop-kube-cluster.sh

REMOTE=email@example.com   # your LISH user@host
HOSTS="kube0 kube1 kube2"

for HOST in $HOSTS; do
    # Get Status
    STATUS=$(ssh -t $REMOTE $HOST status 2>/dev/null)

    # If running, power it off
    if echo "$STATUS" | grep -q 'Running'; then
        echo "Stopping $HOST..."
        ssh -t $REMOTE $HOST shutdown > /dev/null 2>&1
    # If powered off, just notify
    elif echo "$STATUS" | grep -q 'Powered Off'; then
        echo "$HOST is already powered off"
    # If something else, weird...
    else
        echo "Skipping $HOST which is in state: $STATUS"
    fi
done
```