This is the fourth in a series of posts about running your own Git server on an EC2 image. The posts include:

In my previous post I explained how to request an EC2 Spot Instance, and I gave an overview of the scripts that provision the server during the boot process. In this post I present the details of those scripts.

Why Python?

The second-setup.sh script described in the previous post invokes several Python scripts to carry out certain EC2 actions. I used Python because it was easier than shell programming for invoking those actions with parameters only discoverable at run time, and because the excellent boto library is already installed on Ubuntu EC2 AMIs.

Attach and Mount Volumes

The system is configured with two volumes from the get-go: the EC2 AMI volume and the small “secrets” volume; both were specified as arguments to the spot instance request. However, I need another volume that will persist across instance termination and recreation and will store the gitolite installation and the hosted git repositories. I also want a separate volume to host my working files (should I decide to develop directly on the server); it too must persist across instance termination and recreation.

Because the volumes must survive instance termination, they must be EBS volumes. EBS volumes have IDs that are both unique and unmemorable; if I resize a volume it will get a new ID; and I may have many volumes associated with my AWS account. I needed some way to discover the particular volume I want to mount. The solution I’ve come up with is to add tags to the volumes I want to attach at run time.

I use two tags to identify a volume: mount-host and mount-name. For instance, for the git repo volume the tags and values are:

  • mount-host = dev.bob.org
  • mount-name = data

For my development files volume, the tags and values are:

  • mount-host = dev.bob.org
  • mount-name = dev
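
These tags only need to be applied once, when a volume is first created. That can be done in the AWS console, or with a few lines of boto; here is a minimal sketch, in which vol-0123abcd is a placeholder for the real volume id:

#! /usr/bin/python
# One-time sketch: tag a volume so the boot-time scripts can find it.
# vol-0123abcd is a placeholder for the real volume id.
import boto.ec2.connection

ak = open('/secrets/setup/ak').readline().strip()
sk = open('/secrets/setup/sk').readline().strip()
c = boto.ec2.connection.EC2Connection(
    aws_access_key_id=ak,
    aws_secret_access_key=sk)

c.create_tags(['vol-0123abcd'],
              {'mount-host': 'dev.bob.org', 'mount-name': 'data'})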

To locate and attach the data volume, the attachTaggedVolume.py program can be invoked with:

python /secrets/setup/attachTaggedVolume.py i-1234dc4a /dev/sdg \
  dev.bob.org data

Here is the Python code:

#! /usr/bin/python
#
import sys
import os
import time
import boto.ec2.connection

# args are instance id, device,
# mount-host tag value, and mount-name tag value
#
if (5 != len(sys.argv)):
    print 'Missing or extra argument!'
    exit(1)
iid = sys.argv[1]
dvc = sys.argv[2]
mhost = sys.argv[3]
mname = sys.argv[4]

# Get the access key and secret key (strip any trailing newline,
# which would otherwise break request signing)
#
f = open('/secrets/setup/ak', 'r')
ak = f.readline().strip()
f.close()
f = open('/secrets/setup/sk', 'r')
sk = f.readline().strip()
f.close()

# Connect to EC2
#
c = boto.ec2.connection.EC2Connection(
  aws_access_key_id=ak,
  aws_secret_access_key=sk)

# Find the volume to mount
#
mvol = None
filter = { 'tag:mount-host' : mhost }
vols = c.get_all_volumes(filters = filter)
for vol in vols:
    vid = vol.id
    tfilter = { 'resource-id' : vid, 'key' : 'mount-name' }
    tags = c.get_all_tags(tfilter)
    if (0 == len(tags)):
        continue
    if (tags[0].value != mname):
        continue
    mvol = vol
    break

if mvol is None:
    print "No volume found with matching tags"
    exit(1)

# Attach the volume
#
status = c.attach_volume(mvol.id, iid, dvc)
if ('attaching' != status):
    print "attach_volume returned '{0}'".format(status)
    print "Could not attach volume"
    exit(2)
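# attach_volume() was asked to attach at /dev/sdX, but the Ubuntu
# kernel on EC2 exposes the device as /dev/xvdX, so remap the name
# before polling for it.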
dev_suffix = dvc[6:]
device = "/dev/xv" + dev_suffix
for x in range(10):
    ok = False
    try:
        os.stat(device)
        ok = True
    except OSError:
        print "device {0} does not yet exist".format(device)
    if ok:
        print "attached."
        exit(0)
    time.sleep(2)
print "Timed out trying to attach the volume"
exit(3)

There is nothing very difficult here, but it is certainly possible that the code could be made more efficient or robust. The only tricky bit is the check at the end of the program to see whether the volume has been attached; I’ll discuss that in a bit.

The first thing the script does is grab the AWS access key and secret key from the /secrets volume. Then it uses boto to establish a connection to the AWS management web service. With a valid connection, the script uses the mount host and mount name tags to find the volume to be mounted; if there is more than one volume that matches those tags, the code will pick the first one found.

Using the AWS connection, the program attaches the volume at the requested device. The attach_volume() method does not block; it makes a request that gets fulfilled some time later. But the provisioning script must not proceed until the attachment completes, so I put in a 20-second timer, checking every 2 seconds, waiting for the attach to complete. It usually completes within 10 seconds; I’ve been running this way for 5 months and have not yet seen the 20-second timer exceeded. This feels kludgey (agile developers might say it has a smell); if there is a better way to handle the wait for attach completion, I would be very glad to hear it.
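
One possibility (just a sketch, not something I have tested in this setup) would be to poll the attachment state through the EC2 API rather than stat-ing the device node, using the mvol object found earlier:

# Sketch: poll the EC2 API for the attachment state instead of
# waiting for /dev/xvdX to appear.
for x in range(10):
    mvol.update()               # refresh the volume's state from EC2
    if mvol.attachment_state() == 'attached':
        break
    time.sleep(2)
else:
    print "Timed out waiting for the attachment"
    exit(3)

Even then, the device node can lag the API state by a moment, so the stat check would probably still be worth keeping.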

Updating Route53 mapping

The other Python script updates AWS Route53 information. I have created a hosted zone in Route53 and assigned the Route53 name servers to my domain. Then I can create CNAME records in the DNS info within the hosted zone; specifically I can create a CNAME record that maps my unchanging friendly hostname (e.g. dev.bob.org) to the external unfriendly host name generated by AWS each time an instance is created.
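
As an aside, the hosted zone id that ends up in /secrets/setup/hz (read by the script below) can be copied from the Route53 console, or looked up once with boto. Here is a minimal sketch, where bob.org is a placeholder for your own domain:

#! /usr/bin/python
# Sketch: look up the Route 53 hosted zone id for a domain so it can
# be stored in /secrets/setup/hz. Note the trailing dot on the name.
import boto.route53.connection

ak = open('/secrets/setup/ak').readline().strip()
sk = open('/secrets/setup/sk').readline().strip()
r53c = boto.route53.connection.Route53Connection(
    aws_access_key_id=ak,
    aws_secret_access_key=sk)

zones = r53c.get_all_hosted_zones()
for zone in zones['ListHostedZonesResponse']['HostedZones']:
    if zone['Name'] == 'bob.org.':
        # The Id is returned as '/hostedzone/XXXXXXXX'; keep only the id.
        print zone['Id'].replace('/hostedzone/', '')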

This remapping is accomplished by the ‘setr53.py’ Python program. It can be invoked like this:

python /secrets/setup/setr53.py dev.bob.org ec2-55-134-121-32.compute-1.amazonaws.com

It looks like this:

#! /usr/bin/python
#
import sys
import boto.route53.connection
import boto.route53.record

# args are the friendly cname (e.g. dev.bob.org) and the
# instance's AWS-generated public hostname
#
if (3 != len(sys.argv)):
    print 'Missing or extra argument!'
    exit(1)
nm = sys.argv[1]
phn = sys.argv[2]

# Get the access key, secret key, and Route 53 hosted zone id
#
f = open('/secrets/setup/ak', 'r')
ak = f.readline().strip()
f.close()
f = open('/secrets/setup/sk', 'r')
sk = f.readline().strip()
f.close()
f = open('/secrets/setup/hz', 'r')
hz = f.readline().strip()
f.close()

# Connect to R53
#
r53c = boto.route53.connection.Route53Connection(
    aws_access_key_id=ak,
    aws_secret_access_key=sk)

# There should be only zero or one CNAME records for the new
# CNAME. But during development of this program I sometimes created
# multiple records, and found I had to eliminate all but one of them.
#
# So, if there are more than one extant matching CNAME record, start
# deleting them, one at a time, until only one remains.
#
chgs = boto.route53.record.ResourceRecordSets(
    r53c, hz)
rrs = r53c.get_all_rrsets(hz, type='CNAME', name=nm)
while 1 < len(rrs):
    for rr in rrs:
        print "deleting {0} CNAME {1}".format(nm, rr.resource_records[0])
        chg = chgs.add_change(
            'DELETE', nm, 'CNAME', ttl=rr.ttl)
        chg.add_value(rr.resource_records[0])
        chgs.commit()
        break
    chgs = boto.route53.record.ResourceRecordSets(
        r53c, hz)
    rrs = r53c.get_all_rrsets(hz, type='CNAME', name=nm)

# Make a resource record set that deletes the old CNAME record, if any,
# and adds a new CNAME record mapping the public host name to the
# desired alternate or well-known host name.
#
for rr in rrs:
    print "deleting {0} CNAME {1}".format(nm, rr.resource_records[0])
    chg = chgs.add_change(
        'DELETE', nm, 'CNAME', ttl=rr.ttl)
    chg.add_value(rr.resource_records[0])

print "creating {0} CNAME {1}".format(nm, phn)
chg = chgs.add_change(
    'CREATE', nm,'CNAME',
    ttl=300)
chg.add_value(phn)
chgs.commit()

Like the attachTaggedVolume.py program, this program first gets the AWS access key and secret key and establishes a connection to the AWS web service using boto. The program also gets the hosted zone id from the hz file, and then it locates the current CNAME record for the friendly hostname.

Once it has the old CNAME mapping, it deletes it and creates a new one with the new instance’s external hostname.
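
One way to confirm the change took effect is to read the record back over the same connection; a quick sketch, reusing r53c, hz, nm, and phn from the script above:

# Sketch: read the CNAME back and confirm it points at the new instance.
rrs = r53c.get_all_rrsets(hz, type='CNAME', name=nm)
for rr in rrs:
    print "{0} CNAME {1}".format(rr.name, rr.resource_records[0])

Keep in mind that Route53 changes take a short while to propagate to the name servers, so a dig against the zone may not reflect the update immediately.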

Next topics

The next post in the series will be, at last, about gitolite. I’m not certain whether it will be the next post on the blog, but it will be the next post in the series. I hope to wrap up the series this month (April 2013). Here are the remaining topics:

  • Installing, configuring, and maintaining gitolite
  • Maintaining the /secrets volume and updating the AMI
  • Bootstrapping – how to start from an AMI and build up an autoprovisioning system