AWS VPN to TP-LINK Archer VR900

I use Vodafone broadband, which came with a locked-down router that prevented me from setting up my LAN as I wanted (e.g. you can’t turn off the DHCP service, which is really annoying). I therefore invested in a better vDSL router – in my case a TP-LINK Archer VR900.

I’ve been working with AWS, and using supported equipment to connect IPSec VPNs. It occurred to me that the VR900 would probably support this. It does, and I got it working… The full details are in the document below.

AWS CloudFormation 101 (&2), and music…?

Having had a bit of a play with Terraform (see previous post), I have since landed a new contract at EAMS Group. So far so good – they seem like a very friendly, dedicated and talented bunch of people; hopefully I’ll fit in…
Tasked with providing a portable, reusable infrastructure-as-code system for a greenfield project (but with the intention to reuse for future projects, and possibly retrofit into existing systems, longer-term), I’ve been trying out CloudFormation.
It’s a strange system, with many limitations – most of which can be worked around, but the amount of reading required is vast. Many avenues hit dead ends by default (e.g. standard parameters seem like the way forward, until you realise they are a very blunt and imprecise instrument). Still, you learn by trying/doing/reading – and more recently by picking up the received wisdom of those who have been through the pain and can advise on the ways around most of it. I’m currently going through an AWS Advanced CloudFormation course for exactly that reason.

I have always learned by doing, but having started learning classical guitar five years ago, I have realised (a little late in life – I wish I’d worked this out before) that being taught something is not a lesser way to learn than teaching yourself. With guitar, it is really easy to teach yourself bad techniques and then get stuck with them – not through any lack of hard work or talent, but simply because centuries of human wisdom are contained in the standard teaching practices of high-level teachers, and if you self-teach, you don’t get that! The similarities with computing are obvious – I’m so glad that I had the grounding in computer science when I was younger; a lot of it is not *directly* applicable, but the background knowledge helps me to understand other things.

Back to CloudFormation – if you do nothing else, look into Mappings… Better still, take a course which fills in all those “learned on the front line” lessons.
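As an illustration of why Mappings help: they let you pin per-environment values down in the template and look them up with Fn::FindInMap, instead of threading everything through blunt parameters. A minimal sketch – the map name, keys and values here are made up, and it assumes an Environment parameter already exists in the template:

```yaml
Mappings:
  EnvSettings:
    dev:
      InstanceType: t2.micro
    prod:
      InstanceType: m4.large

Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      # One lookup replaces a free-text parameter the user could get wrong
      InstanceType: !FindInMap [EnvSettings, !Ref Environment, InstanceType]
```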

Progress so far: git repo with CF templates, scripts and puppet configuration created. CF Templates build systems, and in the case of the puppet server, install the server, pull the config from S3 (cloned from git) and start it up. DNS (Route53) configured on the way 🙂 Other clients install the puppet agent and attach to the appropriate puppet server in the correct client/environment combination – and away we go…

AWS + Terraform + Puppet – 102?

I was considering how (having never worked in an environment where all this stuff was genuinely created from scratch) I would go about creating an infrastructure from code, from scratch, and making it reproducible.
If you read the press, this is what everyone is supposedly doing, but I haven’t worked anywhere where all of it sits together as it should – usually because of legacy infrastructure, multiple tools etc.

So, given that:-
1. There are seemingly hundreds of tools to choose from;
2. I know puppet already;
3. I have just completed my AWS Developer Associate certification, so I know a bit about that;
4. I have been trying out Terraform, and have been quite impressed;

I have come up with the following workflow using these tools, but the concepts should work with other varieties (NB: purely for EC2 instances at the moment):-


Terraform deploys base instance to EC2 puppet server
IAM role added to allow aws CLI to function (not needed for this, but for other admin)

Terraform puppet server bootstrap script:-
1. Installs aws tools
2. Copies s3://ux1_backup/puppet to the right places (i.e. recovers the puppet server config/data)
3. Starts puppet server
4. Starts puppet agent (which will install all other software and configure)
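The puppet server side of this workflow could be sketched in Terraform roughly as follows. A sketch only – the AMI ID, instance profile name, bucket name and package names are all assumptions, not tested config:

```hcl
# Sketch: puppet server instance whose user_data runs the bootstrap steps above
resource "aws_instance" "puppet_server" {
  ami                  = "ami-xxxxxxxx"      # base Linux AMI of choice
  instance_type        = "t2.small"
  iam_instance_profile = "puppet-admin"      # lets the aws CLI work later

  user_data = <<EOF
#!/bin/bash
yum -y install awscli puppetserver                   # 1. install aws tools (+ server)
aws s3 sync s3://ux1_backup/puppet /etc/puppetlabs   # 2. recover config/data
systemctl start puppetserver                         # 3. start puppet server
systemctl start puppet                               # 4. agent installs the rest
EOF
}
```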


Terraform deploys base instances to other EC2 servers
Bootstrap script:-
1. On the instance: Installs puppet agent
2. On the instance: connects to the puppet server (generating a certificate-signing request)
3. On the puppet server: Check for a cert-signing request from the instance, sign if it looks correct
4. On the instance: ensure puppet agent is running and tests ok
5. Create a semaphore file to indicate recovery is needed (e.g. /recoverme)
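The instance-side steps above could be sketched as a dry-run script – each step just echoes the command the real bootstrap would run, and the server name and package names are assumptions:

```shell
# Dry-run sketch of the agent bootstrap; step() echoes rather than executes.
step() { echo "step: $*"; }

# On the new instance:
step "yum -y install puppet-agent"
step "puppet agent --server puppet.internal --test"   # first run sends a cert request
# On the puppet server (check the request looks right before signing):
step "puppet cert sign <new-instance-fqdn>"
# Back on the instance:
step "puppet agent --enable && puppet agent --test"   # confirm it runs cleanly
step "touch /recoverme"                               # semaphore: recovery needed
```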


All EC2 servers
1. Puppet deploys a backup/recovery script and schedules
2. The script checks for the recovery semaphore file on the instance – if it’s there, it recovers configuration and data from the most recent backup (NB: the S3 bucket should be configured with versioning on, and a lifecycle rule to copy to Glacier ASAP), then clears the semaphore. If it’s not there, it runs a backup to the S3 bucket.
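A minimal sketch of that backup/recovery branching, assuming the ux1_backup bucket, and with a stub function standing in for ‘aws s3 sync’ so the logic can be shown without AWS access:

```shell
#!/usr/bin/env bash
# Sketch only: bucket name and data path are assumptions.
# s3_sync stands in for 'aws s3 sync'; swap in the real command in production.
s3_sync() { echo "would sync: $1 -> $2"; }

backup_or_recover() {
    flag="$1"; data="$2"
    if [ -e "$flag" ]; then
        # Semaphore present: restore from the most recent backup, then clear it
        s3_sync "s3://ux1_backup/$(hostname)/" "$data/"
        rm -f "$flag"
        echo "recovery complete"
    else
        # No semaphore: run a normal backup
        s3_sync "$data/" "s3://ux1_backup/$(hostname)/"
        echo "backup complete"
    fi
}

backup_or_recover /recoverme /var/lib/appdata
```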

Seems like it should work, and once puppet is configured, it should be possible to destroy and recreate everything through Terraform.

NB: The Terraform config probably needs to be kept on GitHub or similar – with instructions on how to obtain the AWS credentials.

Now to try it out…

AWS + Terraform 101

So – I’ve used AWS in a production environment, having landed a contract where the previous admin had left (leaving no documentation), and no one else there knew anything about AWS. I had to learn quickly, but it was limited to EC2, RDS & S3.
I realised I had loads more to learn (and still do) – so I took the acloudguru AWS Certified Developer Associate course in order to gain more knowledge (I also took the cert exam, as people seem to like certifications – so I’m AWS certified, but still feel like I only have my toes in the water).

AWS has CloudFormation, but I also keep coming across Terraform which is an open source, “cross-cloud” (a new term?) equivalent, as far as I can tell. So I have been investigating…

As always, learning by example is a good way to begin (well, it works for me) – so I found an excellent walkthrough article. Clone the project from GitHub, and with some minor modifications (pointing at different key files, different AMIs and subnet addresses) that was all that was needed to fire up a VPC, public and private subnets, a NAT instance, a web server in the public subnet and a DB server in the private subnet. Wow – cool 🙂

Terraform also has a graph output which is compatible with GraphViz. A quick “terraform graph | dot -Tpng > graph.png” produces a diagram of everything Terraform is managing. Maybe not the easiest thing to read in this exact format, but it’s a nice quick-and-dirty way to see what’s set up.

Getting this all working is probably 50% of the effort – now I need to refine it, and work out how to make all my infrastructure changes in Terraform, rather than directly.
“Infrastructure as Code”, here we go…

Git explained (ish)

If, like me and the author of the article below, you’ve been wondering why everyone else seems to understand git while you find it fairly incomprehensible beyond parrot-fashion use of the basic commands – read on. The article explains the object database and the other contents of the .git directory. You shouldn’t need to know this to use software that is supposed to make your life easier, but I found it very helpful.

If you learn nothing else – pick up the lesson that each .git/objects/NN/MMMMM… directory/filename is generated from a hash of the contents, and that you can read the contents as follows:
git cat-file -p NNMMMMM… (where “NNMMMMM…” is the directory name “NN” concatenated with the filename “MMMMM…”, a further 38 characters, giving the full 40-character hash).
NB: git cat-file expects the object hash as its argument (all 40 characters), and you can run it from anywhere within the project tree – you *don’t* pass it the path of the file under .git/objects.
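To see this in action, a throwaway repository is enough (this runs entirely in a temp directory, so it’s safe to try anywhere):

```shell
# Scratch repo: write one object, then read it back via its hash
tmp=$(mktemp -d) && cd "$tmp" && git init -q

hash=$(echo 'hello git' | git hash-object -w --stdin)   # 40-character hash
dir=$(echo "$hash" | cut -c1-2)     # directory: first 2 characters
file=$(echo "$hash" | cut -c3-)     # filename: remaining 38 characters
ls ".git/objects/$dir/$file"

# cat-file takes the full hash (not the file path), from anywhere in the tree
git cat-file -p "$hash"    # prints: hello git
```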

Here’s the article:

If, like me (and the author), you work better with systems when you can see what they’re doing under the metaphorical hood, this is an epiphany. Create a repository and have a look in .git at the objects (use git cat-file), then look at the refs/heads directory. Create a branch, and you’ll see that refs/heads now has a new file named after your branch, whose contents (a hash pointing to a commit object) are the same as the contents of refs/heads/master. Simple when you can see what it’s doing. Switch to your branch (git checkout) and make/commit a change. Lo and behold, your refs/heads/branchname file has a different hash – and you can go and find it in the objects directory.

More detailed info about everything git is here

And: a “commit” is essentially just a pointer to a tree, and a tree is just a pointer to further trees and “blobs” – basically directories and files, in the state they were in when the commit was created. So when you look at ‘git log’, you’re just seeing a list of commit hashes – you can follow one down the structure by running git cat-file on the tree(s) until you reach a blob (i.e. a file), which you can git cat-file as well.
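Walking that chain is one git cat-file per step – again in a throwaway repo:

```shell
# Scratch repo with a single commit, then walk commit -> tree -> blob
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
echo 'file contents' > readme.txt
git add readme.txt && git commit -qm 'first commit'

commit=$(git rev-parse HEAD)                                  # a hash, as seen in 'git log'
tree=$(git cat-file -p "$commit" | awk '/^tree/{print $2}')   # the commit's tree
blob=$(git cat-file -p "$tree" | awk '{print $3}')            # the tree's blob entry
git cat-file -p "$blob"    # prints: file contents
```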


Confluence

A couple of clients have used Confluence as their infrastructure information/documentation management system.
I was very impressed – having tried various Wikis, text-file repositories and other note-taking tools, this is by far the best one (in my experience at least).
For up to ten users, it’s only a $10 licence fee – I have therefore set up a Linux VM + Confluence server as my own documentation system (sadly I cannot share this out) to complement the GitHub repo (see here) I use for smaller text files and notes.

Combined with a diagramming add-on (think: a bit like Visio), it provides a quick and easy method of storing notes, documentation and diagrams – and it’s accessible from any web browser.

If you’re a freelancer, and need to keep technical information to hand, I would recommend investigating this option.

Technology updates

After, unforgivably, not updating this web site for quite some time, here is a summary of what I have been up to for the past two years.
Technology is moving on rapidly, and we need to keep up to date. Many years of Unix/Linux experience is no longer enough on its own; the world is moving to ‘meta tools’ (the ones we used to write for ourselves!). These are becoming more standardised (though there are many to choose from in each area), and so are becoming the new building blocks for IT systems.

  • Linux systems administration: This has been the core of everything for many years (along with Solaris before that), and it is not expected to go away, as most of the more recent abstracted technologies still rely on Linux VMs in one form or another. Shell scripting and command-line access to the newer tools will still be needed for the foreseeable future.
  • Configuration management: Puppet – I now have 1.5 years’ experience with this, and what a great tool it is. I have used it for information-gathering (where no infrastructure documentation was available), system updates, new system & application builds (modifying and creating new manifests) and all the other things it’s good at (like setting up and distributing ssh keys, firewall rules etc). I intend to take a proper look at Ansible in the near future.
  • HPC: A new area for me, I’m not an expert by any means, but was able to analyse an existing Rocks HPC cluster, replicate it, upgrade the core, rebuild the nodes and get all the applications working on the new version. I also got to grips with the job scheduling & management (torque/maui) along the way – this proved useful when I subsequently had to troubleshoot a cluster which used Sun Grid Engine – all good fun, and something I would like to delve into deeper.
  • Web applications: Mainly LAMP stack. Migrations, builds (AWS/puppet), updates and troubleshooting.
  • Public Cloud – AWS: I used AWS in production – mainly creating EC2 instances to deploy production web applications onto, along with some RDS setup & S3 use. Realising that there’s an awful lot more to AWS, I have recently independently studied for, and passed, the AWS Certified Developer Associate exam.
  • 101: Terraform, Kubernetes: I have taken time to familiarise myself with these to a basic level (installing them, and getting a few things running).

Spacewalk 2.2 notes

Installing this from scratch with no prior experience…

NB: With Spacewalk 2.2, searching the web for problems often led down “documentation black holes” of out-of-date information for older versions.

Basic steps for future reference:-

Install a clean CentOS 6.5 server instance on VirtualBox. eth0 is connected to the local internal network, with the host system providing DNS; eth1 is set up as a bridged interface, connected to the outside world.
Add the host system’s address to /etc/resolv.conf.
Add PEERDNS=no to the relevant ifcfg-* file in /etc/sysconfig/network-scripts to stop /etc/resolv.conf being overwritten.
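For reference, the fragment sits in the interface’s ifcfg file – eth1 here is an assumption; use whichever interface gets its configuration via DHCP:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=dhcp
ONBOOT=yes
PEERDNS=no   # stop dhclient overwriting /etc/resolv.conf
```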
Follow the installation instructions at:
Connect and create user account as per instructions.
Create default activation key
Create channel for CentOS6.5
Mount CentOS6.5 image on /mnt
cp /var/www/html/pub/RHN-ORG-TRUSTED-SSL-CERT /usr/share/rhn
rhnpush --channel=centos6.5-x86_64 --server=http://localhost --dir=/mnt/Packages