Fabrications

I’m having so much fun doing devops these days that it deserves a mention (probably more than one).

In particular I’ve been using Fabric for work lately, and this weekend I decided to replace all of my Puppet scripts with it.

The big difference for me isn’t that Fabric is Python-based. Sure, that helps tremendously (I’ll freely admit to being biased, and to having little time to waste remembering how to do some stuff in Puppet’s Ruby DSL), but the real difference is that it’s imperative rather than declarative.

Puppet does a very nice job of abstracting the irritating little variations between systems, but I found that it often went about doing things in the worst possible order (yeah, even with explicit dependencies and ~> ordering arrows) and that it was needlessly hacky to do something as simple as unpacking a tarball and building it.

With Fabric, if something breaks at least I explicitly know why and in which order – it forces me to be more careful in what I specify. Some people would consider that to be harder work than asking Puppet to do something, but it also prevents the kind of “oops” moments you get when Puppet decides to go and take away a system user from all your machines or destroy your hand-crafted cluster configuration.

Fabric also tends to be a lot more self-contained and easier to re-use in different circumstances without having to cart around a bunch of manifests and submodules (at least for my uses).
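To make that contrast concrete, here’s a toy sketch (plain Python, not actual Fabric — the step names are made up) of why the imperative style is easier to debug: steps run in exactly the order you wrote them, so a failure points at one step and you know precisely what ran before it.

```python
# Toy illustration (not real Fabric): an imperative task is just a sequence
# of calls, so execution order matches source order.
log = []

def step(name):
    """Pretend to run a remote command, recording the order of execution."""
    log.append(name)

def provision():
    step("apt-get update")
    step("install build deps")
    step("unpack tarball")
    step("build")

provision()
print(log)  # steps appear in exactly the declared order
```

A declarative tool is free to reorder these behind your back; here, if `build` blows up, you know the three steps before it already ran.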

Anyway, I decided to build myself a development VM to use during the summer, since I’ll be traipsing around on random days, it’ll be hellishly hot (it already is) and I’ll be travelling with very little hardware1.

Digital Ocean sprung to mind immediately, since they have:

  1. Cheap hourly billing (something Linode doesn’t offer, alas).
  2. A nice API I can trivially invoke from Pythonista to start/stop a VM.
  3. A fast European datacenter (Amsterdam, in this case), so latency becomes much less of a problem.

If you’d like to give them a try, I suggest you follow this referral link, which takes you directly to their homepage (where you’ll have to sign up and create a droplet before the referral takes effect).

I also decided I’d use Docker as the target deployment environment, which dovetailed nicely with all the rest.

So I put together a nice little fabfile.py to deploy Docker on a new machine, which I’m posting here for your continued entertainment:

import os, time
from fabric.api import env, run, sudo, cd, settings, hide
from fabric.contrib.files import contains, exists, comment, append

def tarball(url=None, target='/tmp', ext="tar.gz"):
    """Downloads and unpacks a tarball"""
    if url:
        package = '/tmp/package.%s' % ext
        with cd('/tmp'):
            run('/usr/bin/wget --no-check-certificate -O %s "%s"'
                % (package, url))
        sudo('tar -zxvf %s -C %s' % (package, target))


def is_installed(package):
    """Checks if a given package is installed"""
    print("Checking for %s" % package)
    with settings(warn_only=True), hide('running', 'warnings', 'output'):
        return run('dpkg -s %s' % package).succeeded


def install(package):
    """Installs a given package"""
    if not is_installed(package):
        with settings(warn_only=True), hide('running', 'warnings', 'output'):
            return sudo('apt-get -y install %s' % package).succeeded


def apt_update(force=True):
    """Updates the package list"""
    with hide('warnings', 'output'):
        # field 12 of `stat -t` output is the mtime; refresh if older than a week
        stale = (time.time() - int(run('stat -t /var/cache/apt/pkgcache.bin').split()[12])) > 3600 * 24 * 7
        if force or stale:
            print "Running apt-get update"
            sudo('apt-get -y update')


def copy_ssh_key():
    """Copy our SSH public key across"""
    if not contains('~/.ssh/authorized_keys', os.environ['USER']):
        print "Copying our public key across"
        with open(os.path.join(os.environ['HOME'], '.ssh', 'id_rsa.pub'), 'r') as f:
            append('~/.ssh/authorized_keys', f.read().strip(), use_sudo=True)


def lockdown_ssh(custom_port=22):
    """Locks down the SSH server"""
    config_file = '/etc/ssh/sshd_config'
    custom_port = str(custom_port)
    restart = False
    if not contains(config_file, 'Port ' + custom_port):
        print "Moving SSH to port " + custom_port
        comment(config_file, '^Port', use_sudo=True)
        append(config_file, 'Port ' + custom_port, use_sudo=True)
        restart = True
    if not contains(config_file, 'PasswordAuthentication no'):
        print "Disabling password authentication"
        comment(config_file, '^PasswordAuthentication', use_sudo=True)
        append(config_file, 'PasswordAuthentication no', use_sudo=True)
        restart = True
    if restart:
        print "Restarting SSH"
        sudo("service ssh restart")
        
        
def deploy_docker():
    """Deploys Docker and the right kernel - you should reboot afterwards"""
    if not exists('/etc/apt/sources.list.d/dotcloud-lxc-docker-raring.list'):
        print "Adding docker PPA"
        with hide('running', 'warnings', 'output'):
            sudo('add-apt-repository -y ppa:dotcloud/lxc-docker')
        print "Installing docker"
        with hide('running', 'warnings', 'output'):
            sudo('apt-get -y update')
            sudo('apt-get -y install linux-image-extra-`uname -r`')
            print "Deployed new kernel. You should reboot afterwards."
            sudo('apt-get -y install lxc-docker')
            

def disable_ipv6():
    """Disables IPv6"""
    config_file = '/etc/sysctl.conf'
    if not contains(config_file, 'net.ipv6.conf.all.disable_ipv6 = 1'):
        append(config_file, 'net.ipv6.conf.all.disable_ipv6 = 1', use_sudo=True)
        append(config_file, 'net.ipv6.conf.default.disable_ipv6 = 1', use_sudo=True)
        append(config_file, 'net.ipv6.conf.lo.disable_ipv6 = 1', use_sudo=True)
        sudo('sysctl -p')


def enable_ip_forward():
    """Enables IPv4 Forwarding"""
    config_file = '/etc/sysctl.conf'
    if not contains(config_file, 'net.ipv4.ip_forward = 1'):
        append(config_file, 'net.ipv4.ip_forward = 1', use_sudo=True)
        sudo('sysctl -p')


def check_docker_network():
    """Checks if the Docker masquerading rules are active and fixes them if necessary"""
    enable_ip_forward()
    with settings(warn_only=True), hide('warnings'):
        masq = sudo('iptables -L -t nat -n | grep MASQUERADE | grep 172.16.42.0')
    if masq.failed:
        # no masquerading rule: restart docker and rebuild the bridge from scratch
        sudo('killall -9 docker')
        sudo('rm -f /var/run/docker.pid')
        sudo('iptables -t nat -F')
        sudo('ifconfig docker0 down')
        sudo('brctl delbr docker0')
        sudo('docker -d')

        
def provision():
    """Set up a droplet from scratch - assumes you'll be setting @hosts and suchlike"""
    with hide('running'):
        copy_ssh_key()
        lockdown_ssh()
        apt_update(force=True)
        map(install,['fabric', 'tmux', 'htop', 'vim', 'ufw', 'denyhosts', 'software-properties-common', 'bash-completion'])
        deploy_docker()
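The `provision` task assumes hosts are set elsewhere; a minimal sketch of what I’d put near the top of the fabfile (the droplet address below is a placeholder — substitute your own):

```python
# Hypothetical host configuration for the fabfile above; replace with your
# droplet's actual address. Fabric then runs every task against that host.
from fabric.api import env

env.hosts = ['root@droplet.example.com:22']
env.use_ssh_config = True  # honour ~/.ssh/config entries, if any
```

With that in place, running `fab provision` from the shell executes the whole sequence against the droplet.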

…and set up a variant of the following script in Pythonista to start/stop the VM remotely:

import os, sys, urllib2

api_key = '<your api key>'
client_id = '<your client id>'
droplet_id = '<your droplet id>'

base = "https://api.digitalocean.com/droplets"

operations = {
    'status': "%(base)s/?client_id=%(client_id)s&api_key=%(api_key)s" % locals(),
    'off'   : "%(base)s/%(droplet_id)s/power_off/?client_id=%(client_id)s&api_key=%(api_key)s" % locals(),
    'on'    : "%(base)s/%(droplet_id)s/power_on/?client_id=%(client_id)s&api_key=%(api_key)s" % locals()
}

command = sys.argv[-1]

if command in operations:
    print "Invoking %s" % command
    print urllib2.urlopen(operations[command]).read()
else:
    print "Available operations:", ' '.join(operations.keys())
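The `% locals()` interpolation above is compact; for reference, here’s the same endpoint table built with explicit `str.format` calls (the client id, API key and droplet id are dummy values), which makes the v1 API’s URL structure a little easier to audit:

```python
# Same Digital Ocean v1 endpoints, built with explicit formatting.
# The client id, api key and droplet id below are dummy placeholders.
base = "https://api.digitalocean.com/droplets"
client_id = "my-client"
api_key = "my-key"
droplet_id = "12345"

creds = "client_id={0}&api_key={1}".format(client_id, api_key)
operations = {
    "status": "{0}/?{1}".format(base, creds),
    "off": "{0}/{1}/power_off/?{2}".format(base, droplet_id, creds),
    "on": "{0}/{1}/power_on/?{2}".format(base, droplet_id, creds),
}

print(operations["on"])
```

Either way, a typo in one of these URLs fails silently at the API end, so it pays to eyeball the generated strings once.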

So far, everything’s breezy. Which is to say, rather better than the awful heat outside.


  1. My will be along most of the time, but there’s a limit to what it can do (it doesn’t have enough RAM to run Chromium or Firefox, for instance, something I do on my with ease). ↩︎
