Tag Archives: geek

Anything IT related (which is most things I say) :-)

Resize ext4 on RHEL/CentOS/EL

I use an ext4 filesystem for a couple large volumes on my virtual machines. I know that btrfs and ZFS are the new cool kids in town for filesystems, but when I’m already running RAID and LVM on the VM server underneath the guest, these newer filesystems with their fancy features don’t add a lot of value for me, so a good, simple, traditional, reliable filesystem is just what I want.

There’s actually a bit of confusion and misinformation online about using ext4 filesystems with RHEL/CentOS (EL) 5 & 6 systems, so I just wanted to clarify some things for people.

ext4 uses the same tools as ext3 and ext2 before it – this means still using the e2fsprogs package which provides utilities such as e2fsck and resize2fs, which handle ext2, ext3 and ext4 filesystems with the same command. But there are a few catches with EL…

On an EL 5 system, the version of e2fsprogs is too old and doesn’t support ext4. Therefore there is an additional package called “e4fsprogs” which is actually just a newer version of e2fsprogs that provides renamed tools such as “resize4fs”. This tends to get confusing for users who then try to find these tools on EL 6 or other distributions, so it’s important to be aware that it’s a non-standard thing. Why they didn’t just upgrade the tools or backport the ext4 support is a bit weird to me, especially considering these tools are extremely backwards compatible and should meet even Red Hat’s API compatibility requirements, but it is what it is, sadly.
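
On an EL 5 box that means installing the extra package and using the “4” variants of the usual tools. A rough sketch, assuming /dev/volume as a placeholder for the block device or LVM volume being resized:

yum install e4fsprogs

# then use the renamed tools, eg for an offline check and resize:
e4fsck -f /dev/volume
resize4fs /dev/volume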

On an EL 6 system, the version of e2fsprogs is modern enough to support ext4, which means you need to use the normal “resize2fs” command to do an ext4 filesystem resize (as per the docs). Generally this works fine, but I think I may have found a bug with the stock EL 6 kernel and e2fsprogs tools.

One of my filesystems almost reached 100% full, so I expanded the underlying block volume and then ran an offline e2fsck -f /dev/volume followed by resize2fs /dev/volume, which completed as normal without error, however it left the filesystem size unchanged. Repeated checks and resize attempts made no difference. The block volume correctly reported its new size, so it wasn’t a case of the kernel not having seen the changed size.
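
For reference, the offline sequence I was attempting was roughly the following (with /dev/volume as a placeholder and an example mount point):

umount /mnt/data                # filesystem must be unmounted for an offline resize
e2fsck -f /dev/volume           # resize2fs requires a forced, clean check first
resize2fs /dev/volume           # with no size argument, grows to fill the block device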

However by mounting the ext4 filesystem and doing an online resize, the resize works correctly, although quite slowly, as the online resize seems to trickle-resize the disk, gradually increasing its size until finally complete.
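
The online workaround is simply to run resize2fs against the filesystem while it’s mounted, roughly:

mount /dev/volume /mnt/data     # example mount point
resize2fs /dev/volume           # online grow, trickles along until complete
df -h /mnt/data                 # confirm the new size once it finishes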

This inability to resize offline is not something I’ve come across before, so it may be a rare bug triggered by the full size of the filesystem that could well be fixed in newer kernel/e2fsprogs packages, but I figured I’d mention it for any other poor sysadmins scratching their heads over a similar issue.

Finally, be aware that you need to use either no partition table or GPT if you want filesystems over 2TB, and also that the e2fsprogs package shipped with EL 6 has a 16TB limit, so you’ll have to upgrade the package manually if you need more than that.
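
If you do want a partition table on a large volume, a GPT label via parted is the way to go – a minimal (and destructive!) sketch using /dev/sdd as an example device:

parted -s /dev/sdd mklabel gpt                  # wipes any existing partition table
parted -s /dev/sdd mkpart primary 0% 100%       # single partition spanning the disk
mkfs.ext4 /dev/sdd1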

I’m Jethro Carr, open source geek, and this is how I work

Lifehacker regularly features a segment where they interview famous people and ask them how they work. Rather than waiting for the e-mail that has yet to come, my friend Jack Scott decided to answer this set of questions on his own last week, then tapped my other friend Chris Neugebauer, who then tapped me to answer them after him.

Why hello there.

Hello Good Sir/Madam, care to converse?

Location: Sydney, Australia. But my heart is still home in Wellington, New Zealand.

Current Gig: Senior Systems Engineer at Fairfax Media Australia. I’m part of a small team that looks after all of their AU websites (SMH, TheAge, Domain, RSVP, etc), responsible for building and managing the server infrastructure from hypervisor to application.

Current Mobile Device: Google/Samsung Galaxy Nexus Android phone. I have a love-hate relationship with how it enables me to be hyper-connected all the time.

Current Computer: At work I use an under-specced Lenovo X1 Carbon with Ubuntu Linux. In personal life it’s more interesting, with an aging 2010-era Lenovo Thinkpad X201i upgraded with a 512GB SSD that still has it thrashing current market offerings, a 1U IBM xSeries 306m server for production websites and a large custom-built full tower server for files and dev VMs. I run a mixture of Debian and CentOS GNU/Linux distributions mostly. I did a post a while back on my setup here.

One word that best describes how you work: Hard

What apps/software/tools can’t you live without? Apps come and go over time… fundamentally for me my killer app is a GNU/Linux operating system with all the wonderful open source tools that it packs in. I’ve tried using both Windows and MacOS over the years, but have found that I’m always more productive with a GNU/Linux machine with many thousands of apps and tools just an apt-get install or yum install away.

What’s your workplace like? My “work” workplace is a pretty standard office, Fairfax has kitted it out with somewhat decent chairs and 27″ LCDs to hotdesk in. My home workspace these days is sadly just a couch or an unstable dining table. It’s very suboptimal, but as I’m only spending a limited time in AU and didn’t want to buy lots of stuff, I make do. My ideal setup looks more like what I had back in Auckland.

What’s your best time-saving trick/life hack?  Make the effort to do things right the first time. Seriously, sloppy work, designs or documentation leads to huge time losses whilst working around issues or having to fix them at a later time.

What’s your favourite to-do list manager? For work life, it’s JIRA. We use it heavily at Fairfax and I assign all my priorities different tickets; it makes it possible to delegate tasks to other team members or split big bits of work into easily manageable chunks.

Besides your phone and computer, what gadget can’t you live without? Besides my computer?? The only gadget I care about strongly is my laptop; everything else is just “stuff” that I could take or leave. I’m pretty minimalist these days and could count all my gadgets on one hand.

What everyday thing are you better at than anyone else? What’s your secret?
Solving complex issues. I can take any problem, quickly determine the issue and the possible fixes, then test and implement them. It’s not specific to IT, I’m good in a crisis and can solve issues quickly IRL as well. It comes naturally to me and it’s one of the things that allows me to be very effective in my job.

What do you listen to while you work? I don’t tend to listen so much whilst at work, but I certainly do when relaxing on personal projects. I have a wide range of tastes; my often-played list includes bands such as Kraftwerk, Genesis, Nightwish, Eluveitie, Ensiferum, Adele, The Killers, Meat Loaf, Mumford and Sons, Marillion, Muse, Rammstein, Tangerine Dream and many others that are far apart in genre.

What are you currently reading? Fairfax: The Rise And Fall. Lisa picked it up recently and it’s interesting reading a bit of the background behind the company I currently work for.

Are you more of an introvert or extrovert? An extroverted introvert. :-) I’m certainly an introvert in that I prefer small groups of close friends and staying in over going out, but at the same time I get on well with people and can happily go out and make friends without too much trouble.

What’s your sleep routine like? What’s sleep? I tend to go to sleep around 00:00-01:00 and get up at 07:30-08:00. It’s not the best, but there’s always so much to do in one day!

Fill in the blank. I’d love to see _____ answer these same questions. I’ve got many people in mind, but they tend not to have blogs. :-( So I’m going with Hamzah Khan, he has a pretty cool blog and I’m envious of his network racks. But I invite anyone else reading this to join in as well, don’t wait for an invite. :-)

What’s the best advice you’ve ever received? Invest in good tools. Whether it’s software or a hammer, either way bad tools will cause you pain, lost time, wasted money and endless frustration.

Is there anything else you want to add for readers? If you don’t currently blog, please do! It’s a fulfilling activity, and I love reading blogs by my friends far more than throwaway 140-char one-liners – so many of you who don’t blog have such interesting content that you could put up.

Enterprise or Consumer Spinning Rust Platters?

I recently wrote about bad hard disks being responsible for negatively impacting array performance, after having some consumer grade disks fail in a fashion that degraded performance but didn’t result in the disk being marked as bad.

Since then I’ve been doing more research into the differences between consumer and enterprise disks, after noting that consumer SATA disks appear to be more susceptible to this sort of performance-degrading failure behaviour than enterprise disks, which fail faster and more cleanly but also have a much higher purchase cost.

Consumer disks are built with the expectation that they’ll be running standalone in a desktop computer, where spending a few seconds remapping some bad sectors or running healing procedures is better than data loss. But this messes with performance when in RAID arrays, and leads to drives with poor latency or drives that keep trying to correct and hide failing sectors from the array controller.

Enterprise SATA disks are mostly the same from a hardware perspective, however they have a different firmware load designed with the assumption that the disk is part of a RAID array. If an enterprise disk has a failure, it should die quickly and cleanly so that the RAID array can then handle the process of repairing – after all, the array has parity information and can rebuild a new disk, it doesn’t need a failing disk to try and rescue itself.

I did some digging on the technical differences between enterprise and consumer disks – the information can be tricky to find with so many people making blind recommendations for either option based on anecdotal evidence and hearsay – but I did manage to dig up some useful articles on the subject.

When I built my file server a couple of years ago, I purchased 8x standard consumer grade Seagate 7200.12 disks and 2x enterprise grade Seagate ES disks, as a small test to see whether the enterprise drives would prove themselves more reliable than the general consumer grade disks.

Since doing this, I’ve had a few disks fail, including one enterprise grade disk. The only noticeable difference I’ve found is that the enterprise disk died much more cleanly, failing completely, whereas the consumer disks lingered on a bit longer, messing things up with weird latency issues or failed sectors that were subsequently re-mapped.

Personally I’ll continue to use consumer grade disks for my systems – I keep a pretty close eye on my system so I can manually toss any badly performing consumer disks out of the array, and I’m also using Linux MD software RAID, which is much more tolerant of sluggish consumer grade disks than a hardware RAID controller. Additionally, Linux software RAID is far easier to manage and just as fast as a budget level hardware RAID controller.

However if working with a business server with a high quality RAID controller with onboard battery-backed memory cache, I would certainly spend the extra few dollars for enterprise grade disks. Not only for the RAID advantages, but also because having the enterprise grade disks fail quickly and obviously will make them more cost effective long term by reducing the amount of time that employees spend debugging poorly performing systems.

“OpsDev”

I recently did a talk at one of the regular Fairfax “Brown Bag” lunches about tools used by the operations team and how developers can use these tools to debug some of their systems and issues.

It won’t be anything mind blowing for experienced *nix users, but it will be of interest to less experienced engineers or developers who don’t venture into server land too often.

If you’re interested, my colleague and I are both featured on the YouTube video below – my block starts at 14:00, but my colleague’s talk about R at the start may also be of interest.

Additionally, Fairfax AU has also started blogging and publishing other videos and talks like this, as well as blog posts from other people around the technology business (developers, operations, managers, etc) to try and showcase a bit more about what goes on behind the scenes in our organisation.

You can follow the Fairfax Engineering blog at engineering.fairfaxmedia.com.au or on Twitter at @FairfaxEng.

Exposing name servers with Puppet Facts

Carrying on from the last post, I needed a good reliable way to point my Nginx configuration at a DNS server to use for resolving backends. The issue is that I wanted my Puppet module to be portable across various environments, some of which block outbound DNS traffic to external services and others where the networks may be redefined on a frequent basis, making it difficult to maintain an accurate list of all the name servers (eg the cloud).

I could have used dnsmasq to set up a localhost resolver, but when it comes to operational servers, simplicity is key – having yet another daemon that could crash or cause problems is never desirable if there’s a simpler way to solve the issue.

Instead I used Facter (sic), Puppet’s tool for exposing values pulled from the system into variables that can be used in your Puppet manifests or templates. The following custom fact is included in my Puppet module and is run before any configuration is applied to the host running my Nginx configuration:

#!/usr/bin/env ruby
#
# Returns a string with all the IPs of all configured nameservers on
# the server. Useful for including into applications such as Nginx.
#
# I live in mymodulenamehere/lib/facter/nameserver_list.rb
# 

Facter.add("nameserver_list") do
    setcode do
      nameserver = false

      # Find all the nameserver values in /etc/resolv.conf
      File.open("/etc/resolv.conf", "r").each_line do |line|
        # require whitespace and a non-empty value after the keyword
        if line =~ /^nameserver\s+(\S+)/
          if nameserver
            nameserver = nameserver + " " + $1
          else
            nameserver = $1
          end
        end
      end

      # If we can't get any result (bad host config?) default to a
      # public DNS server that is likely to be reachable.
      unless nameserver
        nameserver = '8.8.8.8'
      end

      nameserver
    end
end

On a system with a typically configured /etc/resolv.conf file such as:

search example.com
nameserver 192.168.0.1
nameserver 10.1.1.1

The fact will expose the nameservers in a space-delimited string such as:

# facter -p | grep 'nameserver_list'
nameserver_list => 192.168.0.1 10.1.1.1

I can then use the Fact inside my Puppet templates for Nginx to configure the resolver:

server {
    ...
    resolver <%= @nameserver_list %>;
    resolver_timeout 1s;
    ...
}

This works pretty well, but there are a couple things to watch out for:

  1. If the Fact fails to execute at all, your configuration will be broken. Having said that, it’s a very simple Fact and there’s not a lot that really could fail (eg no dependencies on other apps/non-standard resources).
  2. Linux hosts resolve DNS using the nameservers in the order specified in /etc/resolv.conf. If one fails, they move on and try the next. However Nginx differs, and just uses the list of provided nameservers in round-robin fashion. This is fine if your nameservers are all equals, but if some are more latent or less reliable than others, it could cause slight delays.
  3. You want to drop the resolver_timeout to 1 second, to ensure a failing nameserver doesn’t hold up re-resolution of DNS for too long. Remember that this re-resolution should only occur when the TTL of the DNS records for the backend has expired, so even if one DNS server is bad, it should have almost no impact on performance for your requests.
  4. Nginx isn’t going to pick up stuff in /etc/hosts using these resolvers. This should be common sense, but thought I better put that out there just-in-case.
  5. This Ruby could be better, but I’m not a dev and hacked it up in 15mins. The regex should probably also be improved to handle some of the more exotic /etc/resolv.confs that I’m sure people manage to write.

Varnish DoS vulnerability

The Varnish developers have recently announced a DoS vulnerability in Varnish (CVE-2013-4484). If you’re using Varnish in your environment, make sure you adjust your configuration to fix the vulnerability if you haven’t already.

In a test of our environment, we found many systems were protected by a default catch-all vcl_error already, but there were certainly systems that suffered. It’s a very easy issue to check for and reproduce:

# telnet failserver1 80
Trying 127.0.0.1...
Connected to failserver1.example.com.
Escape character is '^]'.
GET    
Host: foo
Connection closed by foreign host.

You will see the Varnish child dying in the system logs at the time:

Oct 31 14:11:51 failserver1 varnishd[1711]: Child (1712) died signal=6
Oct 31 14:11:51 failserver1 varnishd[1711]: child (2433) Started
Oct 31 14:11:51 failserver1 varnishd[1711]: Child (2433) said Child starts

Make sure you go and apply the fix now. Upstream advises applying a particular configuration change and hasn’t released a code fix yet, so distributions are unlikely to be releasing an updated package to fix this for you any time soon.

SPF with SpamAssassin

I’ve been using SpamAssassin for years, it’s a fantastic open source anti-spam tool and plugs easily into *nix operating system mail transport agents such as Sendmail and Postfix.

To stop sender address forgery, where spammers use my domain to email either myself or other entities, I configured SPF records for my domain some time ago. The SPF records tell other mail servers which systems are really mine, vs which ones are frauds trying to send spam pretending to be me.
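
An SPF record is just a DNS TXT record listing the hosts permitted to send mail for the domain, and you can inspect any domain’s record with dig. A hypothetical example (the domain and record below are made up for illustration):

dig +short TXT example.com
"v=spf1 mx a:mail.example.com -all"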

SpamAssassin has a plugin that makes use of these SPF records to score incoming mail – by having strict SPF records for my domain and turning on SpamAssassin’s validation, it ensures that any spam I receive pretending to be from my domain will be blocked, as will spam sent under the name of any other domain with SPF enabled.

Using SpamAssassin’s scoring offers some protection against false positives – if an organisation misconfigures their mail server so that their SPF check fails, but all the other details in the email are OK (the content looks like ham, it comes from a properly configured server, etc), the email may still be delivered – generally a couple of different checks need to fail in order for an email to be blacklisted.

To turn this on, you just need to ensure your SpamAssassin configuration is set to load the SPF plugin:

loadplugin Mail::SpamAssassin::Plugin::SPF

You *also* need the Perl modules Mail::SPF or Mail::SPF::Query installed – without these, SpamAssassin will silently avoid doing SPF validations and you’ll be left wondering why you’re still getting silly spam.

On CentOS/RHEL, these Perl modules are available in EPEL and you can install both with:

yum install perl-Mail-SPF perl-Mail-SPF-Query
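
To confirm the modules are actually present and the plugin loads, a couple of quick checks along these lines should do the trick (exact output will vary between distributions):

perl -MMail::SPF -e 1                        # silent exit means the Perl module is installed
spamassassin -D --lint 2>&1 | grep -i spf    # debug lint run, look for the SPF plugin loading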

To check if SPF validation is taking place, check the mailserver logs or the X-Spam-Status email header for SPF_PASS (or maybe SPF_FAIL!) – this proves the module is loaded and running correctly.

X-Spam-Status: No, score=-1.9 required=3.5 tests=AWL,BAYES_00,SPF_PASS,
 T_RP_MATCHES_RCVD autolearn=ham version=3.3.1

Finally sit back and enjoy the quieter, spam-free(ish) inbox :-)

Puppet CRL Time Errors

Puppet is much loved for its clear, meaningful messages when something goes wrong, made even more delightful when you combine it with the lovely error messages thrown out by OpenSSL.

Warning: SSL_connect returned=1 errno=0 state=SSLv3 read server
certificate B: certificate verify failed: [CRL is not yet valid for
/CN=host.example.com]

This error indicates that the certificate is failing to validate since the clock between the node and the puppet master differs. In my case, the clock on the node was far behind the master due to a VirtualBox clock drift issue.

In this case it was simply a matter of re-syncing the clock to resolve the issue. However if the master had been generating certs with its clock far in the future, I would have needed to re-generate my node certificates entirely, as the certs themselves would also be incorrect.
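
For anyone hitting the same thing, a rough sketch of the recovery steps (Puppet 3.x era commands; the hostname and paths are examples):

# on the node: fix the clock, then re-run the agent
ntpdate pool.ntp.org
puppet agent --test

# only if the node's certs are actually bad: clean them on the master...
puppet cert clean node.example.com

# ...then back on the node, throw away the old SSL data and request a new cert
rm -rf /var/lib/puppet/ssl
puppet agent --test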

Hard drives can be bad influences on your RAID

RAID is designed to handle the loss of hard disks due to hardware failure and can ensure continual service during such a time. But hard drives are wonderful creatures and instead of dying quickly, they can often prolong their death with bad sectors, slow performance or other nasty issues.

In a RAID array, if a disk fails in a clear, defined fashion, the RAID array will mark it as failed and move on with its life. But if the disk is still functioning at reduced performance, write operations on the array will be slowed down to the speed of the slowest disk, as the write doesn’t return as complete until all disks have completed their operation.

Gradual decreases in I/O performance can be tricky to spot, until they reach a truly terrible level that can’t go unnoticed because it’s impacting services in a clear and obvious fashion.

Thankfully tools like Munin make it much easier to see degrading performance over time. I was recently having I/O performance issues thanks to a failing disk and using Munin was quickly able to see which disk was responsible, as well as seeing the level of impact it was making on my system’s performance.

Got to love that I/O wait time!

Wasting almost 2 cores of CPU due to slow I/O holding up processes.

The CPU usage graph is actually very useful for checking out storage related problems, since it records the time spent with the CPU in an idle state due to waiting for storage to catch up and provide data required for operations.

This alone isn’t indicative of a fault – you could get similar results if you are loading your system with too many I/O intensive tasks and your storage just isn’t fast enough for your needs (are hard disks ever fast enough?), plus disk encryption always imposes some noticeable amount of I/O wait; but it’s a good first place to look.

All the disks!

The disk latency graph is also extremely valuable and quickly shows the disk responsible. My particular example isn’t ideal, since Munin has decided to pick up all my LVM volumes and include them on the graphs, which makes them very unreadable.

Looking at the stats it’s easy to see that /dev/sdd is suffering, with an average latency of 288ms and a max peak of 7.06 *seconds*. Marking this disk as failed in the array instantly restored performance and I was then able to replace the disk and rebuild the array, restoring expected performance.
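
With Linux MD the whole process is just a few mdadm commands, roughly as follows (/dev/md0 and /dev/sdd are examples, substitute the real array and member device):

mdadm --manage /dev/md0 --fail /dev/sdd      # kick the misbehaving disk out of the array
mdadm --manage /dev/md0 --remove /dev/sdd
# physically replace the disk, then add the new one and let the array rebuild:
mdadm --manage /dev/md0 --add /dev/sdd
cat /proc/mdstat                             # watch the rebuild progress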

Note that this RAID array is built with consumer grade SATA disks, which are particularly bad for this kind of issue – an enterprise grade SATA disk would have been more likely to fail faster and more definitively, as they are designed primarily for RAID environments where the health of the array is more important than one disk doing everything possible to keep itself going.

In my case I’m using software RAID, which makes it easy to see the statistics of each disk, since the controller is acting in a JBOD mode and exposing the disks directly to the OS. Using consumer disks like these could be much more “interesting” with a hardware RAID controller that wouldn’t expose the same amount of information… if using a hardware RAID controller, I’d advise to shell out the cash for enterprise grade disks designed for RAID arrays, or you could have a much more difficult life.