Amazon’s AWS cloud service is a very popular and generally mature offering, but it does have its issues at times – in particular its storage options and limited debugging facilities.
When using AWS, you have three main storage options for your instances (virtual machine servers):
- Ephemeral disk – storage attached locally to your instance, which is lost at shutdown or if the instance terminates unexpectedly. A fixed amount is included with your instance, with the size depending on the instance type.
- Elastic Block Storage (EBS) – network-attached block storage exposed to your Linux instance as if it were a traditional local disk.
- EBS with provisioned IOPS – the same as the above, but with guarantees around performance – for a price of course. ;-)
With EBS there’s no need to use RAID from a disk-reliability perspective – the EBS volume itself has its own underlying redundancy (although one should still take snapshots and backups to cover end-user mistakes or a systemic EBS failure), and disk reliability is the common reason for using RAID with conventional physical hosts.
So with RAID being pointless for redundancy in an Amazon world, why write about recovering hosts in AWS using software RAID? Because there are still situations where you may end up using it for purposes other than redundancy:
- Poor man’s performance gains – EBS provisioned IOPS are the proper way of getting guaranteed performance from EBS to meet your particular requirements. But it comes with a cost attached – you pay increasingly more for faster disk, and you also need proportionally larger minimum volume sizes to go with the higher speeds (a 10:1 IOPS:GB ratio), which can quickly make a small, fast volume prohibitively expensive. A software RAID array lets you get more performance at low cost by striping numerous small volumes together (see the sketch after this list).
- Merging multiple EBS volumes – EBS volumes have an Amazon-imposed limit of 1TB per volume. If a single filesystem of more than 1TB is required, either LVM or software RAID is needed to merge them.
- Merging multiple ephemeral volumes – software RAID can also be used to merge the multiple ephemeral volumes that Amazon provides on some larger instance types. Being ephemeral, if the RAID array gets degraded there’s no need to repair it – just destroy the instance and build a nice new one.
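For reference, striping a handful of small EBS volumes together is straightforward with mdadm – the following is a minimal sketch only, assuming four freshly attached volumes appearing as /dev/xvdf through /dev/xvdi and an XFS filesystem (the device names and mount point are illustrative):

# Build a 4-disk RAID 0 (striped) array out of the small EBS volumes
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi

# Create a filesystem on the array and mount it
mkfs.xfs /dev/md0
mkdir -p /mnt/myraidarray
mount /dev/md0 /mnt/myraidarray

# Record the array so it assembles at boot, and refresh the initramfs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u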
So whilst using software RAID with your AWS instances can be a legitimate exercise, it can also introduce its own share of issues.
Firstly, you can no longer use EBS snapshots to back up the individual volumes, unless you first stop the RAID array or freeze filesystem writes for the duration of all the snapshots being created – which, depending on your application, may or may not be feasible.
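If freezing is an option for your workload, a consistent multi-volume snapshot looks roughly like this – a sketch only, assuming an XFS filesystem on /dev/md0 mounted at /mnt/myraidarray, the classic EC2 CLI tools, and placeholder volume IDs:

# Suspend writes to the filesystem sitting on top of the array
fsfreeze --freeze /mnt/myraidarray

# Snapshot every EBS volume that is a member of the array
ec2-create-snapshot vol-aaaa1111 -d "md0 member 1"
ec2-create-snapshot vol-bbbb2222 -d "md0 member 2"

# Resume writes once all the snapshots have been initiated
fsfreeze --unfreeze /mnt/myraidarray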
Secondly, you now have the issue of increased complexity in your I/O configuration. If you use automation to build your instances, the additional work needed to handle setting up the array is a one-time investment, but RAID also adds complexity to ongoing maintenance (such as resizes) and increases the risk of a fault occurring.
I recently had the excitement/misfortune of such an experience. We had a pair of Ubuntu 12.04 LTS instances using GlusterFS to provide a redundant NFS mount to some of our legacy applications running in AWS (AWS unfortunately lacks a hosted NFS filer service). To provide sufficient speed to an otherwise small volume, RAID 0 had been used with a number of small EBS volumes.
The RAID array was nearly full, so a resize/grow operation was required. This is not an uncommon requirement and just involves adding an EBS volume to the instance, growing the RAID array size and expanding the filesystem on top. Unfortunately something nasty happened between Gluster and the Linux kernel, where the RAID resize operation on one of the two hosts suddenly triggered a kernel panic and failed, killing the host. I wasn’t able to get the logs for it, but at this stage it looks like gluster tried to do some operation right when the resize was active and instead of being blocked, triggered a panic.
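For reference, the grow operation itself is normally unremarkable – something along these lines, a sketch assuming the new volume appears as /dev/xvdp, an array ending up with nine members, and an XFS filesystem mounted at /mnt/myraidarray (exact mdadm syntax varies a little between versions). It was during this reshape step that the panic hit:

# Grow the array onto the newly attached volume – for RAID 0, mdadm performs
# the reshape via a temporary conversion to RAID 4 behind the scenes
mdadm --grow /dev/md0 --add /dev/xvdp --raid-devices=9

# Watch the reshape progress
cat /proc/mdstat

# Once the reshape completes, expand the filesystem (XFS grows while mounted)
xfs_growfs /mnt/myraidarray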
Upon a subsequent restart, the host didn’t come back online. Connecting to the AWS instance’s console output (ec2-get-console-output <instanceid>) showed that the RAID array failure was preventing the instance from booting back up, even though it was an auxiliary mount, not the root filesystem or anything required to boot.
The system may have suffered a hardware fault, such as a disk drive failure.
The root device may depend on the RAID devices being online.
One or more of the following RAID devices are degraded:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive xvdn[9](S) xvdm[7](S) xvdj[4](S) xvdi[3](S) xvdf[0](S) xvdh[2](S) xvdk[5](S) xvdl[6](S) xvdg[1](S)
      13630912 blocks super 1.2
unused devices: <none>
Attempting to start the RAID in degraded mode...
mdadm: CREATE user root not found
mdadm: CREATE group disk not found
[31761224.516958] bio: create slab <bio-1> at 1
[31761224.516976] md/raid:md0: not clean -- starting background reconstruction
[31761224.516981] md/raid:md0: reshape will continue
[31761224.516996] md/raid:md0: device xvdm operational as raid disk 7
[31761224.517002] md/raid:md0: device xvdj operational as raid disk 4
[31761224.517007] md/raid:md0: device xvdi operational as raid disk 3
[31761224.517013] md/raid:md0: device xvdf operational as raid disk 0
[31761224.517018] md/raid:md0: device xvdh operational as raid disk 2
[31761224.517023] md/raid:md0: device xvdk operational as raid disk 5
[31761224.517029] md/raid:md0: device xvdl operational as raid disk 6
[31761224.517034] md/raid:md0: device xvdg operational as raid disk 1
[31761224.517683] md/raid:md0: allocated 10592kB
[31761224.517771] md/raid:md0: cannot start dirty degraded array.
[31761224.518405] md/raid:md0: failed to run raid set.
[31761224.518412] md: pers->run() failed ...
mdadm: failed to start array /dev/md0: Input/output error
mdadm: CREATE user root not found
mdadm: CREATE group disk not found
Could not start the RAID in degraded mode. Dropping to a shell.
BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4.1) built-in shell (ash)
Enter 'help' for a list of built-in commands.
Dropping to a shell when boot problems occur is an approach people hold differing views on – personally I want my hosts to boot regardless of how messed up things are so I can get SSH access, but others prefer the safety of halting and dropping to a recovery shell for the sysadmin to resolve. Ubuntu is configured to do the latter by default.
But regardless of your views on this subject, dropping to a shell leaves you stuck when running AWS instances, since there is no way to interact with this console – Amazon doesn’t provide a proper interactive console for instances like a traditional VPS provider does; you’re limited to viewing the console log.
Ubuntu’s documentation actually advises that in the event of a degraded RAID array, you can still force a boot by setting the kernel option bootdegraded=true. This helps if the array is merely degraded, but in this case the array had entirely failed, rather than being degraded, and Ubuntu treats that differently.
Thankfully it is possible to recover the failed instance by attaching its root volume to another instance, adjusting the initramfs to allow booting even whilst the RAID array is failed, and then, once booted, repairing the array on the host itself.
To do this repair you require an additional Linux instance to use as a recovery host, and the Amazon EC2 CLI tools installed on your workstation.
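(If you don’t already have the CLI tools, on Ubuntu the classic EC2 API tools are packaged in multiverse – the following is a sketch, and the credential variables may differ slightly between tool versions:)

# Install the classic EC2 API tools (Ubuntu, multiverse repository)
sudo apt-get install ec2-api-tools

# The tools read credentials and the region endpoint from the environment
export AWS_ACCESS_KEY=your-access-key-id
export AWS_SECRET_KEY=your-secret-access-key
export EC2_URL=https://ec2.us-east-1.amazonaws.com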
# Set some variables with your instance IDs (eg i-abcd3)
export FAILED=setme
export RECOVERY=setme

# Fetch the root filesystem EBS volume ID and set a var with it:
export VOLUME=vol-setme

# Now stop the failed instance, so we can detach its root volume.
# (Note: wait till status goes from "stopping" to "stopped")
ec2-stop-instances --force $FAILED
ec2-describe-instances $FAILED | grep INSTANCE | awk '{ print $5 }'

# Attach the root volume to the recovery host as /dev/sdo
ec2-detach-volume $VOLUME -i $FAILED
ec2-attach-volume $VOLUME -i $RECOVERY -d /dev/sdo

# Mount the root volume on the recovery host
ssh recoveryhost.example.com
mkdir /mnt/recovery
mount /dev/sdo /mnt/recovery

# Disable raid startup scripts for initramfs/initrd. We need to
# unpack the old file and modify the startup scripts inside it.
cp /mnt/recovery/boot/initrd.img-LATESTHERE-virtual /tmp/initrd-old.img
cd /tmp/
mkdir initrd-test
cd initrd-test
cpio --extract < ../initrd-old.img

vim scripts/local-premount/mdadm
- degraded_arrays || exit 0
- mountroot_fail || panic "Dropping to a shell."
+ #degraded_arrays || exit 0
+ #mountroot_fail || panic "Dropping to a shell."

find . | cpio -o -H newc > ../initrd-new.img
cd ..
gzip initrd-new.img
cp initrd-new.img.gz /mnt/recovery/boot/initrd.img-LATESTHERE-virtual

# Disable mounting of filesystem at boot (otherwise startup process
# will fail despite the array being skipped).
vim /mnt/recovery/etc/fstab
- /dev/md0 /mnt/myraidarray xfs defaults 1 2
+ #/dev/md0 /mnt/myraidarray xfs defaults 1 2

# Work done, umount volume.
umount /mnt/recovery

# Re-attach the root volume back to the failed instance
ec2-detach-volume $VOLUME -i $RECOVERY
ec2-attach-volume $VOLUME -i $FAILED -d /dev/sda1

# Startup the failed instance.
# (Note: Wait for status to go from pending to running)
ec2-start-instances $FAILED
ec2-describe-instances $FAILED | grep INSTANCE | awk '{ print $5 }'

# Watch the startup console. Note: java.lang.NullPointerException
# means that there is no output from the console yet.
ec2-get-console-output $FAILED

# Host should start up; you can then get access via SSH and repair the
# RAID array via the usual means.
The above is very Ubuntu-specific, but the techniques shown are transferable to other platforms as well – just note that the scripts inside the initramfs/initrd will vary per distribution; it’s one of the components of a GNU/Linux system that is completely specific to the distribution vendor.
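(As an aside, and purely as an assumption on my part since I only did this on Ubuntu: on RHEL/CentOS-family systems the initramfs is built by dracut, so rather than hand-editing scripts you would typically rebuild the image or disable its MD RAID assembly from the kernel command line:)

# Rebuild the initramfs for the running kernel after configuration changes
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)

# Or skip MD RAID auto-assembly inside the initramfs for a single boot by
# appending this to the kernel command line in the bootloader:
#   rd.md=0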
Jethro
you saved me
Did it exactly the way you described.
The only thing was with "cpio --extract < ../initrd-old.img"
It failed
I used two commands before
In /tmp dir:
$ mv initrd-old.img initrd-old.img.gz
$ gzip -d initrd-old.img.gz
And after that
$ cd initrd-test
$ cpio --extract < ../initrd-old.img
Thank you again!
Thank you so much Jethro and Alex