Tag Archives: aws

My IAM policy is correct, but awscli isn’t working?

I ran into a weird issue recently where a single AWS EC2 instance was failing to work properly with its IAM role for EC2. Whilst the role allowed access to the DescribeInstances action, awscli would repeatedly claim it didn’t have permission to perform that action.

For a moment I was thinking there was some bug/issue with my box and was readying the terminate instance button, but decided to check out the --debug output to see if it was actually loading the IAM role for EC2 or not.

$ aws ec2 describe-instances --instance-ids i-hiandre --region 'ap-southeast-2' --debug
2015-11-16 11:53:20,576 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: env
2015-11-16 11:53:20,576 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: assume-role
2015-11-16 11:53:20,576 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: shared-credentials-file
2015-11-16 11:53:20,576 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: config-file
2015-11-16 11:53:20,576 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: ec2-credentials-file
2015-11-16 11:53:20,576 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: boto-config
2015-11-16 11:53:20,576 - MainThread - botocore.credentials - INFO - Found credentials in boto config file: /etc/boto.cfg

Turns out in my case, there was a /etc/boto.cfg file with a different set of credentials – and since Boto prefers on-disk configuration over IAM Roles for EC2, it resulted in the wrong access credentials being used.
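If you hit something similar, a quick way to cross-check is to look for any stray on-disk credential files and ask the EC2 metadata service which IAM role (if any) the instance actually has. A rough sketch, noting that the config file locations may vary with your distro and boto version:

# look for on-disk credential files that would override the IAM role
ls -l /etc/boto.cfg ~/.boto ~/.aws/credentials 2>/dev/null

# ask the EC2 metadata service which IAM role is attached to this instance
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/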

The --debug param is a handy one to remember if you’re having weird permissions issues and want to check exactly which credentials are being used and where they are coming from.

Baking images with Packer & Pupistry

One of the common issues when building modern infrastructure-as-code style systems is that whilst automation is great, it also has a habit of failing at the worst possible time. There’s nothing quite like the fun of trying to autoscale, only to find that a newer version of a package breaks compatibility, or that the repository mirror or Puppet master has gone offline, breaking the whole carefully tuned process.

Naturally this is an issue. And whilst I’ve seen some organisations simply ignore the issue and place trust in their repos and configuration management servers, I’m also too pessimistic about technology to trust numerous components for any mission critical applications.

Fortunately there is a solution – we can bake a machine image that has all the applications and configuration pre-applied, so that autoscaling has no third party dependencies (or as close to no dependencies as we can get).

Baking has negative connotations of the bad old days when engineers would assemble custom machine images by hand and then copy them to build new systems, but it doesn’t have to be that way. We can still respect infrastructure-as-code principles and use modern tools like Puppet and Packer to reliably build consistent images as needed.

These images could be as simple as a base AMI image for Amazon AWS which includes the stock OS image plus your Puppet setup. Or they could be as complex as a fully configured and provisioned application server ready-to-go at the first boot.

To make baking images easier, I’ve added support for generating Packer templates pre-loaded with bootstrap data into Pupistry, making it quick and easy to get started. Here’s how you can use it:

Assumptions/prerequisites:

  • You’ve already got Pupistry setup and functional (No? Read the tutorial here)
  • You’ve installed the third party Packer utility.
  • You have an Amazon AWS account for doing the AMI build. Note that Packer isn’t exclusive to Amazon, so you can also use the same technique with other providers including Digital Ocean and OpenStack – but you’ll have to write your own template.

First we can list what Packer templates are available with Pupistry. If the OS/platform of your choice isn’t included, it’s not particularly hard to add it – these are mostly intended to provide a good starting point for customising your own.

pupistry packer


We can select a template with --template NAME and also pass the resulting output to a file with --file NAME. The following will build an Amazon Linux template pre-loaded with Pupistry and the default manifest applied:

pupistry packer --template aws_amazon-any --file output.json


The generated template is a JSON file that includes various instructions to Packer on how to build the image, as well as the bootstrap data that can also be generated independently with pupistry bootstrap. Various variables can be tweaked; we can export the available variables and see their default settings with:

packer inspect output.json


You can see here that we must set a VPC ID and Subnet ID – this is because they differ per AWS account and need to be provided. (Side note: technically you can do EC2 classic with Packer and avoid this, but the VPC instance types like t2 are cheaper to run… and we like cheap :-).

The AWS Region and AWS AMI values are interlinked. If you choose to build for a different region, eg us-west-1, you will need to look up the appropriate AMI ID for that region and change both the aws_ami and aws_region variables when you bake your image. For some reason Amazon chose to make their AMI IDs specific to a particular region, which makes life a bit more difficult than it needs to be. :-(
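If you need to find the equivalent AMI in another region, the AWS CLI can do the lookup for you. This is a sketch only, since the name filter below is a guess at the Amazon Linux naming convention; adjust it to whatever base image you’re actually using:

# find the newest Amazon-owned HVM Amazon Linux AMI in us-west-1
aws ec2 describe-images \
 --region us-west-1 \
 --owners amazon \
 --filters "Name=name,Values=amzn-ami-hvm-*" \
 --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
 --output text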

The hostname is worth noting. By default we set it to “packer” so you can target your manifests to handle it specifically, but you could make this anything you wanted such as a particular machine or application type. When using the sample puppet repo that ships by default with Pupistry, we have defined specific configuration to run on the Packer built images:


Assuming we are happy with the defaults, we just have to set the VPC and Subnet IDs to launch the current image in ap-southeast-2.

packer build \
 -var 'aws_vpc_id=vpc-example' \
 -var 'aws_subnet_id=subnet-example' \
 output.json

As soon as we kick off, we can see that Packer has built a machine in our AWS account to use for the image generation process.


 

It can take up to a minute for the machine to become available via SSH. Once this happens, Packer opens a connection and starts to feed in the bootstrap commands that have been added into the template by Pupistry.


This process can take a number of minutes – remember you’re having to install all the various OS updates and then packages and dependencies needed to run Puppet and of course Pupistry itself.

Once all the dependencies are done, Pupistry will run and provision the machine with your Puppet manifests and then return the ID of the AMI that has been generated:


 

We can see that Packer has now terminated our temporary machine:


And given us a shiny new AMI in return:


 

We can now use that AMI to launch a new machine and check out what Pupistry did. For convenience, there is a launch button on the AMI page that will build a new machine for the selected AMI, however you can also take the AMI ID and use it in CloudFormation, from the API or from the usual instance creation screen.

Connecting to the newly spun up instance using our fresh AMI, we can see that it has had the Pupistry rules for the packer node applied, and we can also see that the daemon is configured and running in the background.

Except that it took less than 1 minute, rather than needing 5+ minutes to do all the usual updates and dependency installation. And there was no risk of a broken repository or package preventing the launch of our machine. If it was an application server, we could have preloaded it and thrown it right into an ELB within 1 minute of it starting up – that’s ideal for autoscaling!
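As a rough sketch of that autoscaling use case (all the names below are made up for illustration), the baked AMI can be dropped straight into a launch configuration and autoscaling group:

# create a launch configuration using the freshly baked AMI
aws autoscaling create-launch-configuration \
 --launch-configuration-name myapp-baked-v1 \
 --image-id ami-example \
 --instance-type t2.micro

# build an autoscaling group from it, attached to an existing ELB
aws autoscaling create-auto-scaling-group \
 --auto-scaling-group-name myapp \
 --launch-configuration-name myapp-baked-v1 \
 --min-size 2 --max-size 6 \
 --vpc-zone-identifier subnet-example \
 --load-balancer-names myapp-elb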


Packer supports a number of different options and different providers, so don’t be afraid to pull it down and experiment. You can even write your own custom providers if needed.

Sure you could always just write a script that does all the same things as Packer for your cloud provider of choice, but Packer provides a solid framework for doing this stuff in a reliable and reproducible way saving you time and keeping complexity down.

Setting up and using Pupistry

As mentioned in my previous post, I’ve been working on an application called Pupistry to help make masterless Puppet deployments a lot easier.

If you’re new to Pupistry, AWS, Git and Puppet, I’ve put together this short walk through on how to set up the S3 bucket (and IAM users), the Pupistry application, the Git repo for your Puppet code and building your first server using Pupistry’s bootstrap feature.

If you’re already an established power user of AWS, Git and Puppet, this might still be useful to flick through to see how Pupistry fits into the ecosystem, but a lot of this will be standard stuff for you. More technical details can be found on the application README.

Note that this guide is for Linux or MacOS users. I have no idea how you do this stuff on a Windows machine that lacks a standard unix shell.

 

1. Installation

Firstly we need to install Pupistry on your computer. As a Ruby application, Pupistry is packaged as a handy Ruby gem and can be installed in the usual fashion.

sudo gem install pupistry
pupistry setup

The gem installs the application and any dependencies. We run `pupistry setup` in order to generate a template configuration file, but we will still need to edit it with specific settings. We’ll come back to that.

You’ll also need Puppet available on your computer to build the Pupistry artifacts. Install via the OS package manager, or with:

sudo gem install puppet

 

2. Setting up AWS S3 bucket & IAM accounts

We need to use an S3 bucket and IAM accounts with Pupistry. The S3 bucket is essentially a cloud-based object store/file server and the IAM accounts are logins that have tight permissions controls.

It’s a common mistake for new AWS users to use the root IAM account details for everything they do, but given that the IAM details will be present on all your servers, you probably want to have specialised accounts purely for Pupistry.

Firstly, make sure you have a functional installation of the AWS CLI (the modern Python one, not the outdated Java one). Amazon have detailed documentation on how to set it up for various platforms; refer to that for information.

Now you need to create:

  1. An S3 bucket. S3 buckets are like domain names – they have a global namespace across *all* AWS accounts. That means someone might already have a bucket name that you want to use, so you’ll need to choose something unique… and hope.
  2. An IAM account for read-only access which will be used by the servers running Pupistry.
  3. An IAM account for read-write access for your workstation to make changes.

To save you doing this all manually, Pupistry includes a CloudFormation template, which is basically a defined set of instructions for AWS to execute to build infrastructure – in our case, it will do all the above steps for you. :-)

Because of the need for a globally unique name, please replace “unique” with something unique to you.

wget https://raw.githubusercontent.com/jethrocarr/pupistry/master/resources/aws/cfn_pupistry_bucket_and_iam.template

aws cloudformation create-stack \
--capabilities CAPABILITY_IAM \
--template-body file://cfn_pupistry_bucket_and_iam.template \
--stack-name pupistry-resources-unique

Once the create-stack command is issued, you can poll the status of the stack; it needs to be in the "CREATE_COMPLETE" state before you can continue.

aws cloudformation describe-stacks --query "Stacks[*].StackStatus" --stack-name pupistry-resources-unique
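Alternatively, recent versions of the AWS CLI can block until the stack is ready and save you the manual polling (assuming your CLI version includes the wait subcommands):

# returns once the stack reaches CREATE_COMPLETE, or errors out if it fails
aws cloudformation wait stack-create-complete --stack-name pupistry-resources-unique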

02-s3-setup-init

 

If something goes wrong and your stack status is an error, eg "ROLLBACK", the most likely cause is that you chose a non-unique bucket name. If you want easy debugging, login to the AWS web console and look at the event details of your stack. Once you determine and address the problem, you’ll need to delete & re-create the stack.

04-s3-aws-cfn-gui

AWS’s web UI can make debugging CFN a bit easier to read than the CLI tools thanks to colour coding and it not all being in horrible JSON.

 

Once you have a CREATE_COMPLETE stack, you can then get the stack outputs, which tell you what has been built. We then pretty much copy & paste these outputs into Pupistry’s configuration file.

aws cloudformation describe-stacks --query "Stacks[*].Outputs[*]" --stack-name pupistry-resources-unique

03-s3-setup-explain

In case you’re wondering – yes, I have changed the above keys & secrets since doing this demo!! Never share your access and secret keys, and it’s best to avoid committing them to any repo, even a private one.

Save the output, you’ll need the details shortly when we configure Pupistry.

 

3. Setup your Puppetcode git repository

Optional: You can skip this step if you simply want to try Pupistry using the sample repo, but you’ll have to come back and do this step if you want to make changes to the example manifests.

We use the r10k workflow with Pupistry, which means you’ll need at least one Git repository called the Control Repo.

You’ll probably end up adding many more Git repositories as you grow your Puppet manifests; more information about how the r10k workflow functions can be found here.

To make life easy, there is a sample repo to go with Pupistry that is a ready-to-go Control Repo for r10k, complete with a Puppetfile defining what additional modules to pull in, a manifests/site.pp defining a basic example system, and a base Hiera configuration.

You can use any Git service, however for this walkthrough we’ll use Bitbucket, since it’s free to set up any number of private repos – their pricing model is based on the number of people in a team and is free for under 5 people.

Github’s model of charging per repo makes the r10k Puppet workflow prohibitively expensive, since we need heaps of tiny repos rather than a few large ones. Which is a shame, since Github has some nice features.

Head to https://bitbucket.org/ and create an account if you don’t already have one. We can use their handy import feature to make a copy of the sample public repo.

Select "Create Repository" and then click "Import" in the top right corner of the window.

05-bitbucket-create

Now you just need to select “GitHub” as a source with the URL of https://github.com/jethrocarr/pupistry-samplepuppet.git and select a name for your new repo:

06-bitbucket-import

Once the import completes, it will look a bit like this:

07-bitbucket-done

The only computer that needs to be able to access this repository is your workstation. The servers themselves never use any of the Git repos, since Pupistry packages up everything it needs into the artifact files.

Finally, if you’re new to Bitbucket, you probably want to import their key into your known hosts file, so Pupistry doesn’t throw errors trying to check out the repo:

ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts

 

4. Configuring Pupistry

At this point we have the AWS S3 bucket, IAM accounts and the Git repo for our control repo in Bitbucket. We can now write the Pupistry configuration file and get started with the tool!

Open ~/.pupistry/settings.yaml with your preferred text editor:

vim ~/.pupistry/settings.yaml

09-config-edit

There are three main sections to configure in the file:

  1. General – We need to define the S3 bucket here. (For our walkthrough, we are leaving GPG signing disabled; it’s not mandatory and GPG is beyond the scope of this walkthrough.)
  2. Agent – These settings impact the servers that will be running Pupistry, but you need to set them on your workstation, since Pupistry will test them for you and pre-seed the bootstrap data with the settings.
  3. Build – The settings that are used on your workstation to generate artifacts. If you create your own repository in Bitbucket, you need to change the puppetcode variable to the location of your data. If you skipped that step, just leave it on the default sample repo for testing purposes.

Make sure you set the access_key_id and secret_access_key in BOTH the agent and the build sections, using the output from the CloudFormation build in step 2.

 

5. Building an artifact with Pupistry

Now we have our AWS resources, our control repository and our configuration – we can finally use Pupistry and build some servers!

pupistry build

13-pupistry-build

Since this is our first artifact, there won’t be much use in running diff; however, as part of the diff Pupistry will verify that your agent AWS credentials are correct, so it’s worth doing.

pupistry diff

14-pupistry-diff

We can now push our newly built artifact to S3 with:

pupistry push

15-pupistry-push

In regards to the GPG warning – Pupistry interacts with AWS via secure transport and the S3 bucket can only be accessed via authorised accounts, however the weakness is that if someone does manage to break into your account (because you stuck your AWS IAM credentials all over a blog post like a muppet), an attacker could replace the artifacts with malicious ones and exploit your servers.

If you do enable GPG, this becomes impossible, since only signed artifacts will be downloaded and executed by your servers – an invalid artifact will be refused. So it’s a good added security benefit and doesn’t require any special setup other than getting GPG onto your workstation and setting the ID of the private key in the Pupistry configuration file.

We’ve now got a valid artifact. The next step is building our first server with Pupistry!

 

6. Building a server with Pupistry

Because manually installing and configuring Pupistry on every server you ever want to build wouldn’t be a lot of fun, Pupistry automates this for you and can generate bootstrap scripts for most popular platforms.

These scripts can be run as part of user data on most cloud providers including AWS and Digital Ocean, as well as being cut & pasted into the root shell of any running server, whether physical, virtual or cloud-based.

The bootstrap process works as follows (there’s a rough sketch of this after the list):

  1. The default OS tools are used to download and install Pupistry.
  2. Pupistry’s configuration file is written, optionally along with the GPG public key to verify against.
  3. Pupistry is run.
  4. Pupistry then pulls down the latest artifact and executes the Puppetcode.
  5. In the case of the sample repo, the Puppetcode includes the puppet-pupistry module. This module does some clever stuff like setting up a pluginsync equivalent for master-less Puppet and installing a system service for the Pupistry daemon to keep it running in the background – just like the normal Puppet agent! This companion module is strongly recommended for all users.
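The generated scripts differ per platform, but conceptually they boil down to something like the following. This is a simplified sketch only, not the actual generated script (which handles far more detail), shown here in RPM-distro flavour:

# install Ruby and the Pupistry gem with the distribution's package manager
yum install -y ruby rubygems
gem install pupistry

# (the real script then writes out Pupistry's configuration file with the S3
# bucket, read-only IAM credentials and optionally the GPG public key, all
# seeded from your workstation's settings)

# run Pupistry, which downloads the latest artifact and applies the Puppetcode
pupistry apply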

You can get a list of supported platforms for bootstrap mode with:

pupistry bootstrap

Once you decide which one you’d like to install, you can do:

pupistry bootstrap --template NAME

16-pupistry-bootstrap

Pupistry cleverly fills in all the IAM account details and seeds the configuration file based on the settings defined on your workstation. If you want to change behaviours like disabling the daemon, change it in your build host config file and it will be reflected in the bootstrap file.

 

To test Pupistry you can use any server you want, but this walkthrough shows an example using Digital Ocean which is a very low cost cloud compute provider with a slick interface and much easier learning curve than AWS. You can sign up and use them here, shamelessly clicking on my referrer link so my hosting bill gets paid – but also get yourself $10 credit in the process. Sweetas bru!

Once you have setup/logged into your DigitalOcean account, you need to create a new droplet (their terminology for a VM – AWS uses “EC2 Instance”). It can be named anything you want and any size you want, although this walkthrough is tight and suggests the cheapest example :-)

18-digitalocean-create-droplet-1

 

Now it is possible to just boot the Digital Ocean droplet and then cut & paste the bootstrap script into the running machine, but like most cloud providers Digital Ocean supports a feature called User Data, where a script can be pasted to have it execute when the machine starts up.

19-digitalocean-create-droplet-2

AWS users can also get their user data in base64-encoded form by calling pupistry bootstrap with the --base64 parameter – handy if you want to copy & paste the user data into other files like CloudFormation stacks. Digital Ocean just takes it in plain text like above.
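In practice that just means capturing the bootstrap output to a file, with NAME being whichever template you picked earlier:

# plain text, suitable for pasting straight into Digital Ocean's user data box
pupistry bootstrap --template NAME > userdata.sh

# base64-encoded, handy for embedding into CloudFormation templates
pupistry bootstrap --template NAME --base64 > userdata.b64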

Make sure you use the right bootstrap data for the right distribution. There are variations between distributions and sometimes even between versions, hence the various different bootstrap scripts provided for the major distributions. If you’re using something else/fringe, you might have to do some of your own debugging, so I recommend testing with a major distribution first.

20-digitalocean-create-droplet-3

Once you create your droplet, Digital Ocean will go away for 30-60 seconds to build and launch the machine. Once you SSH into it, tail the system log to see the user data executing in the background as the system completes its inaugural startup. The bootstrap script echoes all the commands it runs, and their output, into syslog for debugging purposes.

21-digitalocean-connect-to-server

 

Watch the log file until you see the initial Puppet run take place. You’ll see Puppet output followed by Notice: Finished catalog run at some stage of the process. You’ll also see the Pupistry daemon has launched and is now running in the background checking for updates every minute.

21-initial-pupistry-run

If you got this far, you’ve just done a complete build and proven that Pupistry can run on your server without interruption – and because of the user data feature, you can easily automate machine creation & the Pupistry run to completely build servers without ever needing to login – we only logged in here to see what was going on!

 

7. Using Pupistry regularly

To make rolling out Puppet changes quick and simple, Pupistry sets up a background daemon job via the puppet-pupistry companion module, which installs init config for most distributions covering systemd, upstart and sysvinit. You can check the daemon status and log output on systemd-era distributions with:

service pupistry status
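On pure systemd distributions the equivalent checks are the following, assuming the unit has been installed under the name pupistry:

# daemon status
systemctl status pupistry

# follow the daemon's log output
journalctl -u pupistry -f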

21-pupistry-daemon-details

If you want to test changes, you may want to stop the daemon whilst you do your testing. Or you can be *clever* and use branches in your control repo – the Pupistry daemon defaults to the master branch.

When testing or not using the daemon, you can run Pupistry manually in the same way that you can run the Puppet agent manually:

pupistry apply

22-pupistry-manual

Play around with some of the commands you can do, for example:

Run and only show what would have been done:

pupistry apply --noop

Apply a specific branch (this will work with the sample repo):

pupistry apply --environment exampleofbranch

To learn more about what commands can be run in apply mode, run:

pupistry help apply

 

 

8. Making a change to your control repo

At this point, you have a fully working Pupistry setup that you can experiment with and try new things out on. You will want to check out the repo from Bitbucket with:

git clone <repo>


 

The first change you might want to make is experimenting with changing some of the examples in your repository and pushing a new artifact:

23-custom-puppetcode-1

 

When Puppet runs, it reads the manifests/site.pp file first for any node configuration. We have a simple default node setup that takes some actions like using some notify resources to display messages to the user. Try changing one of these:

24-custom-puppetcode-2

Make a commit & push the change to Bitbucket, then build a new artifact:

25-custom-puppetcode-3

 

We can now see the diff command in action:

26-custom-puppetcode-4

 

If you’re happy with the changes, you can then push your new artifact to S3 and it will quickly deploy to your servers within the next minute if running the daemon.

27-custom-puppetcode-5

You can also run the Pupistry apply manually on your target server to see the new change:

28-custom-puppetcode-6

At this point you’ve been able to setup AWS, setup Git, setup Pupistry, build a server and push new Puppet manifests to it! You’re ready to begin your exciting adventure into master-less Puppet and automate all the things!

 

9. Cleanup

Hopefully you like Pupistry and are now hooked, but even if you do, you might want to clean up everything you’ve just created as part of this walkthrough.

First you probably want to destroy your Digital Ocean Droplet so it doesn’t cost you any further money:

29-cleanup-digitialocean

If you want to keep going with Pupistry using your new Bitbucket control repo and your AWS account, you can; but if you want to purge them to clean up and start again:

Delete the BitBucket repo:

30-cleanup-bitbucket

Delete the AWS S3 bucket contents, then tear down the CloudFormation stack to delete the bucket and the users:
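If you prefer to do that from the CLI, it’s roughly the following (substitute your actual bucket name from the stack outputs):

# empty the bucket first (CloudFormation won't delete a non-empty bucket)
aws s3 rm s3://YOUR-BUCKET-NAME --recursive

# tear down the stack, removing the bucket and the IAM users
aws cloudformation delete-stack --stack-name pupistry-resources-unique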

31-cleanup-aws

All done – you can re-run this tutorial from clean, or use your newfound knowledge to setup your proper production configuration.

 

Further Information

Hopefully you’ve found this walkthrough (and Pupistry) useful! Before getting started properly on your Pupistry adventure, please do read the full README.md and also my introducing Pupistry blog post.

 

Pupistry is a very new application, so if you find bugs please file an issue in the tracker, it’s also worth checking the tracker for any other known issues with Pupistry before getting started with it in production.

Pull requests for improved documentation, bug fixes or new features are always welcome.

If you are stuck and need support, please file it via an issue in the tracker. If your issue relates *directly* to a step in this tutorial, then you are welcome to post a comment below. I get too many emails, so please don’t email me asking for support on an issue as I’ll probably just discard it.

You may also find the following useful if you’re new to Puppet:

Remember that Pupistry advocates the use of masterless Puppet techniques, which isn’t really properly supported by Puppetlabs; however, Puppet modules will generally work just fine in master-less as well as master-full environments.

Puppet master is pretty standard, whereas Puppet masterless implementations differ all over the place since there’s no "proper" way of doing it. Pupistry hopefully fills this gap and will become a de facto standard for masterless over time.

 

 

FreeBSD in the cloud

This weekend I was playing around with FreeBSD in order to add support to Pupistry. Although I generally use Linux exclusively, it’s fun to play around with other platforms now and then – a bit like going on vacation. Plus building support for other platforms ensures that I’m writing code that’s more portable.

FreeBSD is probably the most popular BSD in use and it’s the only one available for download from the Amazon Web Services (AWS) Marketplace and as a supported platform from Digital Ocean alongside their Linux offerings.

However as popular as FreeBSD is, it pales in comparison to Linux, which means that it doesn’t get as much love and things don’t work quite as seamlessly with these cloud providers. In my process of testing FreeBSD with both providers I ran into some interesting feature differences and annoyances.

 

FreeBSD on Digital Ocean

I started with Digital Ocean first – I love them, since they’re a nice, simple, cheap cloud provider for personal stuff. There’s not much need for the AWS enterprise feature set when I’m building personal machines, and paying the price of a coffee for a month of compute sure is nice.

They provide a FreeBSD 10.1 image via the usual droplet creation screen. I have to give Digital Ocean credit for such a nice, clean, simple interface – limiting user selection does make it much more approachable for people, something Apple always understood with their products.


As always Digital Ocean is pretty speedy, bringing up a machine within a minute or so.

Digital Ocean provides a pretty recent image with pkg already installed and ready to go, although you’ll want to run the update process to get the latest patches. You need to login initially as the freebsd user and can then sudo to acquire root powers.
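Getting current is just the standard FreeBSD tooling:

# apply base system security/errata patches
sudo freebsd-update fetch install

# refresh the pkg catalogue and upgrade installed packages
sudo pkg update
sudo pkg upgrade -y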

Overall it’s great – so naturally there is a catch. Digital Ocean doesn’t yet support user data with their droplets. So whilst you can fill in the user data field, it won’t actually get executed.

This is pretty annoying for anyone wanting to automate a large number of machines, since it now means you have to SSH to each of them to get them provisioned. I’ve raised a question on their community forum around this issue, but I wouldn’t expect a quick fix, since the upstream bsd-cloudinit project they use hasn’t implemented support yet either.

It’s not going to be the end of the world for most people, but it could be a barrier if you’re wanting to roll out a fleet of BSD boxen.

The best feature from Digital Ocean is actually their documentation – with the launch of FreeBSD on their platform, they’ve produced some excellent tutorials and guides to their platform which can be found here and are useful to both Linux gurus and noobs alike.

Finally their native IPv6 support extends to FreeBSD, so your machines can join the internet of the 21st century from day one.

 

FreeBSD on Amazon Web Services (AWS)

Next I spun up an instance in Amazon Web Services (AWS) which is the granddaddy of cloud providers and provides an impressive array of functionality, although this comes at a cost premium over Digital Ocean’s tight pricing.

It’s actually been the first time in a long time that I’ve built a machine via the AWS web console, normally for work we just build all of our systems via Cloud Formation and it was an interesting experience to see the usability difference of AWS’s setup page vs that of Digital Ocean’s.

The fact that the launch wizard has 7 different screens says a lot, and I suspect AWS is at risk of having its consumer user base eaten by the likes of Linode and Digital Ocean – but when a consumer user is paying $5.00 a month and an enterprise customer pays $300,000 a month, I suspect AWS isn’t going to be too worried.

Launching a FreeBSD instance is not really any different to that of a Linux one, you just need to search for “freebsd” in the AWS Market Place to find the AMI and launch as normal.


 

Once launched, things get more interesting. Digital Ocean’s FreeBSD instance came up in around 1 minute which is standard for their systems – but AWS took a whopping 8-10mins to launch the AMI to the level where I could login via SSH!

Digging into the startup log reveals why – it seems the AWS AMI (Amazon’s machine images/templates) for FreeBSD launches the instance, then runs a prolonged upgrade task (freebsd-update fetch install), before doing a subsequent reboot and finally starting SSH.

Whilst I appreciate the good default security posture this provides, there are a few issues with it:

  1. It differs from most other AWS images which deal with patching by having new images built semi-frequently and leaving the patching in-between up to the admin’s choice.
  2. During the whole process, the admin can’t login which causes some confusion. I initially assumed the AMI images were broken after reviewing my security groups and seeing no reason why I shouldn’t be able to login immediately.
  3. You can’t trust the AMI images to be a solid unchanging base, which means you need to be very wary if doing autoscaling. Not only is 10mins a bit too slow for autoscaling, the potential risk of an instance not coming up due to changes in the latest update is always something to watch out for. If doing autoscaling with these images, you’ll need to consider baking your own AMIs first.
  4. It caused me no end of frustration when trying to test user data since I had to wait 10mins each time to get a confirmation of my test!

The last point brings me to user data – the good news is that Amazon correctly supports user data for FreeBSD machines, so you can paste in your tcsh script (not bash remember!) and it will get invoked at launch time.

The downside is that the user data handling of FreeBSD is a lot more fragile than on Linux images. Generally with Linux, the OS boots (including SSH) and then runs the user data. If the user data breaks or hangs or does anything other than expected, you can still login and debug. Whereas since FreeBSD runs the user data before starting up SSH, if something goes wrong you have no way to easily login and debug. And given the differences between tcsh and bash, plus annoying commands that default to expecting user input on non-interactive ptys, chances are you’ll have more than one attempt that results in a machine getting stuck at launch.

The ultimate fix is that you’ll probably have to use Packer if using FreeBSD in any serious way on AWS to get the startup performance to an acceptable level.

Finally remember that on AWS, you need to login as the ec2-user and then su - to become root.

 

Which one?

If you’re interested in FreeBSD and want to pick a provider to play around with, the choice seems pretty simple to me – Digital Ocean. They’ve got the better pricing (~ $5/month vs $15/month), and their ridiculously simple dashboard coupled with the excellent documentation they’ve assembled makes it really attractive for anyone new to the *nix or cloud space. Plus they’ve bothered to invest in IPv6, which I appreciate.

However if you’re doing business/enterprise systems and want user data, autoscaling or the benefit of automating entire stacks with Cloud Formation, then you will probably find AWS the more attractive offering purely due to the additional functionality offered by that platform. Just be prepared to spend a bit of time baking your own AMI to allow you to skip the overhead of having to wait for updates to apply for each instance you bring up.

Neither provider has got their FreeBSD experience to be quite as slick as that of their Linux offerings, however hopefully they improve on these deficiencies over time –  there’s not much needed to get the experience up to the same level as Linux distributions and it’s nice having a different type of unix to play with for a change.

Recovering SW RAID with Ubuntu on Amazon AWS

Amazon’s AWS cloud service is a very popular and generally mature offering, but it does have its issues at times – in particular its storage options and limited debug facilities.

When using AWS, you have three main storage options for your instances (virtual machine servers):

  1. Ephemeral disk – storage attached locally to your instance, which is lost at shutdown or if the instance terminates unexpectedly. A fixed amount is included with your instance, with the size depending on your instance size.
  2. Elastic Block Storage (EBS) which is a network-attached block storage exposed to your Linux instance as if it was a traditional local disk.
  3. EBS with provisioned IOPs – the same as the above, but with guarantees around performance – for a price of course. ;-)

With EBS, there’s no need to use RAID from a disk reliability perspective – the EBS volume itself has its own underlying redundancy (although one should still perform snapshots and backups to handle end user failure or systematic EBS failure), which is the common reason for using RAID with conventional physical hosts.

So with RAID being pointless for redundancy in an Amazon world, why write about recovering hosts in AWS using software RAID? Because there are still situations where you may end up using it for purposes other than redundancy:

  1. Poor man’s performance gains – EBS provisioned IOPs are the proper way of getting performance from EBS to meet your particular requirements. But it comes with a cost attached – you pay increasingly more for faster disk, but also need proportionally larger minimum disk sizes to go with the higher speeds (10:1 ratio IOPs:size), which can quickly make a small fast volume prohibitively expensive. A software RAID array can allow you to get more performance by combining numerous small volumes together at low cost (see the sketch after this list).
  2. Merging multiple EBS volumes – EBS volumes have an Amazon-imposed limit of 1TB per volume. If a single filesystem of more than 1TB is required, either LVM or software RAID is needed to merge them.
  3. Merging multiple ephemeral volumes – software RAID can also be used to merge the multiple ephemeral volumes that Amazon provides on some larger instances. However being ephemeral, if your RAID gets degraded, there’s no need to repair it – just destroy the instance and build a nice new one.
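For the performance case, the array creation itself is nothing special. A sketch, assuming four small EBS volumes attached as xvdf through xvdi:

# stripe four small EBS volumes together for more IOPS than a single volume
mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi
mkfs.xfs /dev/md0
mkdir -p /mnt/myraidarray
mount /dev/md0 /mnt/myraidarray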

So whilst using software RAID with your AWS instances can be a legitimate exercise, it can also introduce its own share of issues.

Firstly, you can no longer use EBS snapshotting to do backups of the EBS volumes, unless you first halt the entire RAID array/freeze filesystem writes for the duration of all the snapshots being created – which, depending on your application, may or may not be feasible.
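If you do need snapshots of such an array, the general approach is to freeze the filesystem briefly whilst every member volume is snapshotted. A sketch only, using the same legacy EC2 CLI tools as the recovery steps later in this post and made-up volume IDs:

# pause writes to the filesystem sitting on top of the array
xfs_freeze -f /mnt/myraidarray

# snapshot every EBS volume that is a member of the array
ec2-create-snapshot vol-aaaa1111 -d "md0 member 1"
ec2-create-snapshot vol-bbbb2222 -d "md0 member 2"

# resume writes
xfs_freeze -u /mnt/myraidarray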

Secondly you now have the issue of increased complexity of your I/O configuration. If using automation to build your instances, you need to do additional work to handle the setup of the array which is a one-time investment, but the use of RAID also adds complexity to the maintenance (such as resizes) and increases the risk of a fault occurring.

I recently had the excitement/misfortune of such an experience. We had a pair of Ubuntu 12.04 LTS instances using GlusterFS to provide a redundant NFS mount to some of our legacy applications running in AWS (AWS unfortunately lacks a hosted NFS filer service). To provide sufficient speed to an otherwise small volume, RAID 0 had been used with a number of small EBS volumes.

The RAID array was nearly full, so a resize/grow operation was required. This is not an uncommon requirement and just involves adding an EBS volume to the instance, growing the RAID array size and expanding the filesystem on top. Unfortunately something nasty happened between Gluster and the Linux kernel, where the RAID resize operation on one of the two hosts suddenly triggered a kernel panic and failed, killing the host. I wasn’t able to get the logs for it, but at this stage it looks like gluster tried to do some operation right when the resize was active and instead of being blocked, triggered a panic.
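For reference, under normal circumstances that grow operation looks something like the following. This is a sketch only, as the exact mdadm flags vary with the RAID level and mdadm version:

# reshape the array onto the newly attached EBS volume
mdadm --grow /dev/md0 --raid-devices=10 --add /dev/xvdp

# once the reshape completes, grow the filesystem to match
xfs_growfs /mnt/myraidarray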

Upon a subsequent restart, the host didn’t come back online. Connecting to the AWS instance’s console output (ec2-get-console-output <instanceid>) showed that the RAID array failure was preventing the instance from booting back up, even though it was an auxiliary mount, not the root filesystem or anything required to boot.

The system may have suffered a hardware fault, such as a disk drive
failure.  The root device may depend on the RAID devices being online. One
or more of the following RAID devices are degraded:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive xvdn[9](S) xvdm[7](S) xvdj[4](S) xvdi[3](S) xvdf[0](S) xvdh[2](S) xvdk[5](S) xvdl[6](S) xvdg[1](S)
      13630912 blocks super 1.2

unused devices: <none>
Attempting to start the RAID in degraded mode...
mdadm: CREATE user root not found
mdadm: CREATE group disk not found
[31761224.516958] bio: create slab <bio-1> at 1
[31761224.516976] md/raid:md0: not clean -- starting background reconstruction
[31761224.516981] md/raid:md0: reshape will continue
[31761224.516996] md/raid:md0: device xvdm operational as raid disk 7
[31761224.517002] md/raid:md0: device xvdj operational as raid disk 4
[31761224.517007] md/raid:md0: device xvdi operational as raid disk 3
[31761224.517013] md/raid:md0: device xvdf operational as raid disk 0
[31761224.517018] md/raid:md0: device xvdh operational as raid disk 2
[31761224.517023] md/raid:md0: device xvdk operational as raid disk 5
[31761224.517029] md/raid:md0: device xvdl operational as raid disk 6
[31761224.517034] md/raid:md0: device xvdg operational as raid disk 1
[31761224.517683] md/raid:md0: allocated 10592kB
[31761224.517771] md/raid:md0: cannot start dirty degraded array.
[31761224.518405] md/raid:md0: failed to run raid set.
[31761224.518412] md: pers->run() failed ...
mdadm: failed to start array /dev/md0: Input/output error
mdadm: CREATE user root not found
mdadm: CREATE group disk not found
Could not start the RAID in degraded mode.
Dropping to a shell.

BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4.1) built-in shell (ash)
Enter 'help' for a list of built-in commands.

Dropping to a shell during bootup problems is an approach that has differing perspectives – personally I want my hosts to boot regardless of how messed up things are so I can get SSH, but others differ and prefer the safety of halting and dropping to a recovery shell for the sysadmin to resolve. Ubuntu is configured to do the latter by default.

But regardless of your views on this subject, dropping to a shell leaves you stuck when running AWS instances, since there is no way to interact with this console – Amazon doesn’t have a proper console for interacting with instances like a traditional VPS provider, you’re limited to only seeing the console log.

Ubuntu’s documentation actually advises that in the event of a failed RAID array, you can still force a boot by setting a kernel option bootdegraded=true. This helps if the array was degraded, but in this case the array had entirely failed, rather than being degraded, and Ubuntu treats that differently.

Thankfully it is possible to recover the failed instance, by attaching its volume to another instance, adjusting the initramfs to allow booting even whilst the RAID is failed, and then once booted, doing a repair on the host itself.

To do this repair you require an additional Linux instance to use as a recovery host and the Amazon CLI tools to be installed on your workstation.

# Set some variables with your instance IDs (eg i-abcd3)
export FAILED=setme
export RECOVERY=setme

# Fetch the root filesystem EBS volume ID and set a var with it:
export VOLUME=vol-setme

# Now stop the failed instance, so we can detach its root volume.
# (Note: wait till status goes from "stopping" to "stopped")
ec2-stop-instances --force $FAILED
ec2-describe-instances $FAILED | grep INSTANCE | awk '{ print $5 }'

# Attach the root volume to the recovery host as /dev/sdo
ec2-detach-volume $VOLUME -i $FAILED
ec2-attach-volume $VOLUME -i $RECOVERY -d /dev/sdo

# Mount the root volume on the recovery host
ssh recoveryhost.example.com
mkdir /mnt/recovery
mount /dev/sdo /mnt/recovery

# Disable raid startup scripts for initramfs/initrd. We need to
# unpack the old file and modify the startup scripts inside it.
cp /mnt/recovery/boot/initrd.img-LATESTHERE-virtual /tmp/initrd-old.img
cd /tmp/
mkdir initrd-test
cd initrd-test
cpio --extract < ../initrd-old.img
vim scripts/local-premount/mdadm
- degraded_arrays || exit 0
- mountroot_fail || panic "Dropping to a shell."
+ #degraded_arrays || exit 0
+ #mountroot_fail || panic "Dropping to a shell."
find . | cpio -o -H newc > ../initrd-new.img
cd ..
gzip initrd-new.img
cp initrd-new.img.gz /mnt/recovery/boot/initrd.img-LATESTHERE-virtual

# Disable mounting of filesystem at boot (otherwise startup process
# will fail despite the array being skipped).
vim /mnt/recovery/etc/fstab
- /dev/md0    /mnt/myraidarray    xfs    defaults    1    2
+ #/dev/md0    /mnt/myraidarray    xfs    defaults    1    2

# Work done, umount volume.
umount /mnt/recovery

# Re-attach the root volume back to the failed instance
ec2-detach-volume $VOLUME -i $RECOVERY
ec2-attach-volume $VOLUME -i $FAILED -d /dev/sda1

# Startup the failed instance.
# (Note: Wait for status to go from pending to running)
ec2-start-instances $FAILED
ec2-describe-instances $FAILED | grep INSTANCE | awk '{ print $5 }'

# Watch the startup console. Note: java.lang.NullPointerException
# means that there is no output from the console yet.
ec2-get-console-output $FAILED

# Host should startup, you can get access via SSH and repair RAID
# array via usual means.

The above is very Ubuntu-specific, but the techniques shown are transferable to other platforms as well – just note that the scripts inside the initramfs/initrd will vary per distribution; it’s one of the components of a GNU/Linux system that is completely specific to the distribution vendor.

Route53 with NamedManager 1.8.0

Just released NamedManager 1.8.0, my open source web-based DNS management tool. This release fixes some bugs with MySQL 5.6 and internationalized domain names, but also includes support for using Amazon AWS Route53 alongside the existing Bind9 support.

Just add a name server entry with the type of Route53 and your Amazon credentials and a background process will sync all DNS changes to Route53. You can mix and match thanks to the groups feature, so if you want some zones going to both Bind9 and Route53 and others going to just Route53 or Bind9, you can do so.

NamedManager, now with cloudy goodness.

As always, the easiest installation is from the provided RPMs, however you can also install from tarball or from Git – just refer to the installation documentation.

This feature is considered stable, however it is new, so be wary of bugs and issues – and report any issues you encounter back to me via email or the project issue tracker.