Author Archives: Jethro Carr

Introducing Smokegios

With a reasonably large personal server environment of at least 10 key production VMs, along with many other non-critical but still important machines, a good monitoring system is key.

I currently use a trio of popular open source applications: Nagios (for service & host alerting), Munin (for resource graphing) and Smokeping (for latency response graphs).

Smokeping and Nagios are particularly popular; it’s rare to find a network or *NIX orientated organization that doesn’t have one or both of these utilities installed.

There are other programs around that offer a more “combined” UI experience, such as Zabbix, OpenNMS and others, but I personally find that having three applications that each do one specific task really well is better than having one application that does everything not-so-well. But then again I’m a great believer in the UNIX philosophy. :-)

The downside of having these independent applications is that there’s not a lot of integration between them. Whilst it’s possible to link programs such as Munin & Nagios or Nagios & Smokeping to share some data from the probes & tests they make, there’s no integration of configuration between the components.

This means that in order to add a new host to the monitoring, I need to add it to Nagios, then to Munin and then to Smokeping – and remember to sync any changes across all 3 applications.

So this weekend I decided to write a new program called Smokegios.

TL;DR summary of Smokegios

This little utility checks the Nagios configuration for any changes on a regular cron-controlled basis. If any of the configuration has changed, it will parse the configuration and generate a suitable Smokeping configuration from it using the hostgroup structures and then reload Smokeping.
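As a rough illustration of the transformation (host and group names here are invented for the example), a Nagios definition like:

    define host {
        use        generic-host
        host_name  webserver01
        address    192.168.1.10
    }

    define hostgroup {
        hostgroup_name  production
        alias           Production Servers
        members         webserver01
    }

would come out as a Smokeping target hierarchy of roughly this shape, with one ping target per host:

    + production
    menu = production
    title = Production Servers

    ++ webserver01
    menu = webserver01
    title = webserver01
    host = 192.168.1.10

(This is a sketch only – the exact output format may differ slightly.)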

This allows fully autonomous management of the Smokeping configuration and no more issues about the Smokeping configuration getting neglected when administrators make changes to Nagios. :-D

Currently it’s quite a simplistic application in that it only handles ICMP ping tests for hosts, however I intend to expand it in future with support for reading service & service group information for services such as DNS, HTTP, SMTP, LDAP and more, to generate service latency graphs.

This is a brand new application; I’ve run a number of tests against my Nagios & Smokeping packages, but it’s always possible your environment will have some way to break it – if you find any issues, please let me know, as I’m keen to make this a useful tool for others.

To get started with Smokegios, visit the project page for all the details including installation instructions and links to the RPM repos.

If you’re using RHEL 5/6/derivatives, I have RPM packages for Smokegios as well as the Smokeping 2.4 and 2.6 series in the amberdms-custom and amberdms-os repositories.

It’s written in Perl5 – not my favorite language, but it’s certainly well suited to this kind of configuration file manipulation task, and there was a handy Nagios-Object module courtesy of Duncan Ferguson that saved me writing a Nagios parser.

Let me know if you find it useful! :-)

Google Search & Control

I’ve been using Google for search for years, however this is the first time I’ve ever come across a DMCA takedown notice included in the results.

It possibly doesn’t help that Google is so good at finding what I want that I don’t tend to scroll down more than the first few entries 99% of the time, so it’s easy to miss things at the bottom of the page.

Lawyers, fuck yeah!

Turns out that Google has been doing this since around 2002 and there’s a form process you can follow with Google to file a notice to request a search result removal.

Sadly I suspect that we are going to see more and more situations like this as governments introduce tighter internet censorship laws and key internet gatekeepers like Google are going to follow along with whatever they get told to do.

Whilst people working at Google may truly subscribe to the “Don’t be evil” slogan, the fundamental fact is that Google is a US-based company that is legally required to do what’s best for the shareholders – and the best thing for the shareholders is not to fight the government over legislation, but to implement it as needed and keep selling advertising.

In response to concerns about Google over privacy, I’ve seen a number of people shift to new options, such as the increasingly popular and open-source friendly Duck Duck Go search engine, or even Microsoft’s Bing, which isn’t too bad at getting decent results and has a UI looking much more like early Google.

However these alternatives all suffer from the same fundamental problem – they’re centralised gatekeepers who can be censored or controlled – and then there’s the fact that a centralised entity can track so much about your online browsing. Replacing Google with another company will just leave us in the same position in 10 years’ time.

Lately I’ve been seeking to remove all the centralised providers from my online life, moving to self-run and federated services – basic stuff like running my own email and instant messaging (XMPP), but also more complex “cloud” services being delivered by federated or self-run servers for tasks such as browser syncing, avatar handling, contacts sync, avoiding URL shorteners and quitting or replacing social networks.

The next big one on the list is finding an open source and federated search solution – I’m currently running tests with a search engine called YaCy, a peer-to-peer decentralised search engine made up of thousands of independent servers sharing information between themselves.

To use YaCy, you download and run your own server, set its search indexing behavior and let it run and share results with other servers (it’s also possible to run it in a disconnected mode for indexing your internal private networks).

The YaCy homepage has an excellent write up of the philosophy and design fundamentals of the application.

It’s still a bit rough and I think the search results could be better – but this is something that having more nodes will certainly help with, and the idea is promising. I’m planning to set up a public instance on my server in the near future to add all my sites to the index and provide a good test of its feasibility.

Takapuna Beach Low Tide

As part of my regular exercise routine, I wander along Takapuna beach – the size of the beach varies quite dramatically depending on whether the tide is in or out.

This is the first time that I’ve lived right next to a beach and it makes you realize how easily people can get into trouble walking along beaches and become trapped when the tide rises.

Low tide showing off the gradual slope of the entire beach

Normally the waves are lapping up against the rocks by the cliff. Will have to time a trip to walk down past the rocks and onto the other beach one day.

Quite weird to be walking along areas that at times I’ve been swimming in… From what I can tell, the beach continues at this gradual decline for a long way – there were a few swimmers out even further during low tide, so it certainly carries on like that for some distance.

Mozilla Firefox “Pin as App”

In a moment of madness, I decided to RTFM the latest Mozilla Firefox Feature List and came across this nifty ability called “Pin as App”.

nawww baby tabs!

It’s pretty handy – I’m using it to maintain tabs of commonly accessed websites or web applications that I need many times a day. They’re easy to find since they’re always on the left in the defined order, and much smaller than the full tab size.

The only issue is that you need your remote site/app to have a decent favicon – if they don’t, you’ll just end up with a dashed square placeholder, and there’s no way that I can see in Firefox to set a custom icon for that pin.

Incur the Wrath of Linux

Linux is a pretty hardy operating system that will take a lot of abuse, but there are ways to make even a Linux system unhappy and vengeful by messing with available resources.

I’ve managed to trigger all of these at least once, sometimes I even do it a few times before I finally learn, so I’ve decided to sit down and make a list for anyone interested.

 

Disk Space

Issue:

Running out of disk. This is a wonderful way to cause weird faults with services like databases, since processes will block (pause) until there is sufficient disk space available again to allow writes to complete.

This leads to some delightful errors, such as websites failing to load since the dynamic pages are waiting on the database, which in turn is waiting on disk. Or maybe Apache can’t write any more PHP session files to disk, so no PHP based pages load.

And mail servers love not having disk – thankfully in all the cases I’ve seen, Sendmail & Dovecot just halt and retain messages in memory without causing a loss of data (although a reboot while this is occurring could be interesting).

Resolution:

For production systems I always carefully consider the partition table structure, creating separate partitions for key data so that an issue such as an out-of-control logging process or a filling tmp directory can’t impact key services such as databases.

This issue is pretty easy to fix with good monitoring – packages such as Nagios include disk usage checks in the stock version that can alert at configurable thresholds (eg 80% of disk used).
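For example, the stock check_local_disk command from the Nagios sample configs can be pointed at a specific partition – a minimal sketch, with the hostname and partition being placeholders:

    # Warn at 20% free space remaining, critical at 10%
    define service {
        use                  generic-service
        host_name            dbserver01
        service_description  Disk Usage /var
        check_command        check_local_disk!20%!10%!/var
    }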

 

Disk Access

Issue:

Don’t unplug a disk whilst Linux is trying to use it. Just don’t. Really. Things get really unhappy and you get to look at nice output from ps aux showing processes blocked for disk.

The typical mistake here is unplugging a device like a USB hard drive in the middle of a backup process, causing the backup process to halt – and typically the kernel will spew warnings into the system logs about how naughty you’ve been.

Fortunately this is almost always recoverable – the process will eventually timeout/terminate and the storage device will work fine on the next connection, although possibly with some filesystem errors or a corrupt file if it was halfway through writing to disk.

Resolution:

Don’t be a muppet. Or at least educate users that they probably shouldn’t unplug the backup drive whilst it’s still flashing away busily.

 

Networked Storage

Issue:

When using networked storage the kernel still considers the block storage to be just as critical as local storage, so if there’s a disruption accessing data on a network file system, processes will again halt until the storage returns.

This is a mixed blessing – in a server environment where the storage should always be accessible, halting can be the best behaviour, since your programs will wait for the storage to return and hopefully there will be no data loss.

However in a mobile environment this can cause processes to hang indefinitely waiting for storage that might never be reconnected.

Resolution:

In this case, the soft option can be used when mounting network shares, which will cause the kernel to return an error to the process using the storage if it becomes unavailable so that the application (hopefully) warns the user and terminates gracefully.

Using a daemon such as autofs to automatically mount and unmount network shares on demand can help reduce this sort of headache.
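As a sketch, an NFS entry in /etc/fstab using the soft option looks like this (server name and paths are placeholders) – timeo and retrans control how long the kernel retries before giving up and returning an I/O error:

    # return I/O errors to applications rather than blocking forever
    fileserver:/export/data  /mnt/data  nfs  soft,timeo=30,retrans=3  0 0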

 

Low Memory

Issue:

Running out of memory. I don’t just mean RAM, but swap space too (pagefile for you Windows users). When you run out of memory on almost any OS, it won’t be that happy – Linux handles this situation by killing off processes using the OOM killer in order to free up memory again.

This makes sense in theory (out of memory, so let’s kill things that are using it), but the problem is that it doesn’t always kill the ones you want, leading to anything from amusement to unmanageable boxes.

I’ve had some run-ins with the OOM killer before, such as it killing my SSH daemon on overloaded boxes and preventing me from logging into them. :-/

On the other hand, just giving your system many GB of swap space so that it doesn’t run out of memory isn’t a good fix either – swap is terribly slow and your machine will quickly grind to a near-halt.

The performance of using swap is so bad it’s sometimes difficult to even log in to a heavily swapping system.

 

Resolution:

Buy more RAM. Ideally you shouldn’t be trying to run more on a box than it can handle – of course it’s possible to get by with swap space, but only to a small degree due to the performance pains.

In a virtual environment, I’m leaning towards running without swap and letting OOM just kill processes on guests if they run out of memory, usually it’s better to take the hit of a process being killed than the more painful slowdown from swap.

And with VMs, if the worst case happens, you can easily reboot and console into the systems, compared to physical hosts where losing manageability is much more costly.

Of course this really depends on your workload and what you’re doing – the best solution is monitoring, so that you don’t end up in this situation in the first place.

Sometimes it just happens due to a one-off process, and it’s difficult to always foresee memory issues.
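And if you are relying on the OOM killer as a safety net, it’s worth exempting critical daemons such as sshd so you don’t get locked out of the box – a sketch for RHEL 5/6 era kernels (newer kernels use /proc/<pid>/oom_score_adj with a range of -1000 to 1000 instead):

    # -17 tells older kernels to never OOM-kill this process
    for pid in $(pgrep -x sshd); do
        echo -17 > /proc/$pid/oom_adj
    done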

 

Incorrect Time

Issue:

Having the incorrect time on your server may appear to be only a nuisance, but it can lead to many other more devious faults.

Any application which is time-sensitive can experience weird issues – I’ve seen problems such as Samba clients being unable to see files newer than the system time, and BIND breaking for any lookups. Clock issues are WEIRD.

Resolution:

We have NTP and it works well. Turn it on and make sure the NTP process is included in your process monitoring list.
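On RHEL/CentOS with the stock ntp package, that’s roughly:

    yum install -y ntp
    chkconfig ntpd on
    /etc/init.d/ntpd start
    ntpq -p     # verify the daemon has found peers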

 

Authentication Source Outages

Issue:

In larger deployments it’s common to have a central source of authentication such as LDAP, Kerberos, Radius or even Active Directory.

Linux actually does a remarkable number of lookups against the configured authentication sources in regular operation. Aside from the need to look up whenever a user wishes to login, Linux will query the user database every time the attributes of a file are viewed (user/group information), which is pretty often.

There’s some level of inbuilt caching, but unless you’re running a proper authentication caching daemon allowing an offline mode, a prolonged outage to the authentication server will not only make it impossible for users to login, but will also break simple commands such as ls, as the process will be trying to make user/group information lookups.

Resolution:

There’s a reason why we always have two or more sources for key network services such as DNS and LDAP, take advantage of the redundancy built into the design.

However this doesn’t help if the network is down entirely, in which case the best solution is having the system configured to quickly fail over to local authentication or to use a local cache.

Even if failover to a secondary system is working, a lot of the timeout defaults are too high (eg 300 seconds before trying the secondary). Whilst the lookups will still complete eventually, these delays will noticeably impact services, so it’s recommended to look at the authentication methods being used and adjust the timeouts down to a couple of seconds at most.
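As an example, on systems using nss_ldap/pam_ldap the relevant knobs live in /etc/ldap.conf – a sketch with the timeouts wound right down (values are illustrative, adjust to taste):

    # fail fast instead of hanging lookups when the LDAP server is down
    bind_timelimit 5     # seconds to wait when connecting to the server
    timelimit 5          # seconds to wait for a search to complete
    bind_policy soft     # don't keep retrying a dead server forever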

 

These are just a few simple yet nasty ways to break Linux systems and cause weird application behaviour, not necessarily in a form that’s easy to debug.

In most cases, decent monitoring will help you avoid and handle many of these issues better by alerting to low resource situations – if you have nothing currently, Nagios is a good start.

Mozilla Collusion

This week Mozilla released an add-on called Collusion, an experimental extension which shows and graphs how you are being tracked online.

It’s pretty common knowledge how much you get tracked online these days – if you just watch your status bar when loading many popular sites you’ll always see a few brief hits to services such as Google Analytics, but there’s also a lot of tracking done by social networking services and advertisers.

The results are pretty amazing – I took these after turning it on for about 1 day of browsing, and every day I check in, the graph is even bigger and more amazing.

The web actually starting to look like a web....

As expected, Google is one of the largest trackers around, this will be thanks to the popularity of their Google Analytics service, not to mention all the advertising technology they’ve acquired and built over the years including their acquisition of DoubleClick.

I for one, welcome our new Google overlords and would like to remind them that as a trusted internet celebrity I can be useful for rounding up other sites to work in their code mines.

But even more interesting is the results for social networks. I ran this test whilst logged out of my Twitter account, logged out of LinkedIn and I don’t even have Facebook:

Mark Zuckerberg knows what porn you look at.

Combine 69+ tweets a day & this information and I think Twitter would have a massive trove of data about me on their servers.

LinkedIn isn't quite as linked as Facebook or Twitter, but probably has a similar ratio if you consider the userbase size differences.

When you look at this information, you can see why Google+ makes sense for the company to invest in. Google has all the data about your browsing history, but the social networks are one up – they have all your browsing information with the addition of all your daily posts, musings, etc.

With this data advertising can get very, very targeted and it makes sense for Google to want to get in on this to maintain the edge in their business.

It’s yet another reason I’m happy to be off Twitter now – so much less of my information available for advertisers to use. It’s not that I’m necessarily against targeted advertising, I’d rather see ads for computer parts than for baby clothes, but I’m not that much of a fan of my privacy being so exposed, with organisations like Google having a full list of everything I do and visit and being able to profile me so easily.

What will be interesting will be testing how well the tracking holds up once IPv6 becomes popular. On one hand, IPv6 can expose users more if they’re connecting with a MAC-based address, but on the other hand, IPv6 address randomisation when assigning systems IP addresses could improve privacy.

Mozilla Sync Server RPMs

A few weeks ago I wrote about the awesomeness that is Mozilla’s Firefox Sync, a built-in feature of Firefox versions 4 & later which allows for synchronization of bookmarks, history, tabs and password information between multiple systems. (historically known as Weave)

I’ve been running this for a few weeks now on my servers using fully packaged components and it’s been working well, excluding a few minor hiccups.

It’s taken a bit longer than I would have liked, but I now have stable RPM packages for RHEL/CentOS 5 and 6 for both i386 and x86_64 available publicly.

I always package all software I use on my servers (and even my desktops most of the time) as it makes re-building, upgrading and supporting systems far easier in the long run. By having everything in packages and repos, I can rebuild a server entirely simply by knowing what list of packages were originally installed and their configuration files.

Packaging one’s software is also great when upgrading distributions, as you can get a list of all non-vendor programs and libraries installed and then use the .src.rpm files to build new packages for the new OS release.
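For example, capturing the installed package list and rebuilding a source RPM for the new release is usually as simple as (package name here is a placeholder):

    rpm -qa --qf '%{NAME}\n' | sort > installed-packages.txt
    rpmbuild --rebuild some-package-1.0-1.src.rpm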

 

Packaging Headaches

Mozilla Sync Server was much more difficult to package than I would have liked, mostly due to the lack of documentation clarity and the number of dependencies.

The primary source of pain was that I run CentOS 5 for a lot of my production systems, which ships with Python 2.4, whereas to run Mozilla Sync Server, you need Python 2.6 or later.

This meant that I had to build RPMs for a large number (upwards of 20 IIRC) of Python packages to provide python26 versions of existing system packages. Whilst EPEL had a few of the core ones (such as python26 itself), many of the modules I needed either weren’t packaged, or only had EPEL packages for Python 2.4.

The other major headache was due to unclear information and in some cases, incorrect documentation from Mozilla.

Mozilla uses the project source name of server-full in the setup documentation, however this isn’t actually the entire “full” application – rather, it provides the WSGI executable and some libraries; you also need server-core, server-reg and server-storage plus a number of Python modules to build a complete solution.

Sadly this isn’t entirely clear to anyone reading the setup instructions – the only setup information relates to checking out server-full and running a build script which will go through and download all the dependencies (in theory; it often broke for me) and build a working system, complete with paster web server.

Whilst this would be a handy resource for anyone doing development, it’s pretty useless for someone wanting to package a proper system for deployment since you need to break all the dependencies into separate packages.

(Note that whilst Mozilla refer to having RPM packages for the software components, these have been written for their own inhouse deployment and are not totally suitable for stock systems, not to mention even when you have SPEC files for some of the Mozilla components, you still lack the SPEC files for dependencies.)

To top it off, some information is just flat out wrong and can only be found out by first subscribing to the developer mailing list – in order to gain a login to browse the list archives – so that you can find such gems as “LDAP doesn’t work and don’t try as it’s being re-written”.

Toss in finding a few bugs that got fixed right around the time I was working on packaging these apps and you can understand if I’m not filled with love for the developers right this moment.

Of course, this is a particularly common open source problem – the team clearly released in a way that made sense to them, and of course everyone would know the difference between server-core/full/reg/storage, etc right?? ;-) I know I’m sometimes guilty of the same thing.

Having said that, the documentation does appear to be getting better and the community is starting to contribute more good documentation resources. I also found a number of people on the mailing list quite helpful and the Mozilla Sync team were really fast and responsive when I opened a bug report, even when it’s a “stupid jethro didn’t hg pull the latest release before testing” issue.

 

Getting My Packages

All the new packages can be found in the Amberdms public package repositories, the instructions on setting up the CentOS 5 or CentOS 6 repos can be found here.

 

RHEL/CentOS 5 Repo Instructions

If you are running RHEL/CentOS 5, you only need to enable amberdms-os, since all the packages will install in parallel to the distribution packages. Nothing in this repo should ever clash with packages released by RedHat, but it may clash with, or be newer than, dag or EPEL packages.

 

RHEL/CentOS 6 Repo Instructions

If you are running RHEL/CentOS 6, you will need to enable both amberdms-os and amberdms-updates, as some of the required Python packages are shipped by RHEL, but are too outdated to be used for Mozilla Sync Server.

Note that amberdms-updates may contain newer versions of other packages, so take care when enabling it, as I will have other unrelated RPMs in there. If you only want my newer Python packages for Mozilla Sync, set includepkgs=python-* for amberdms-updates, as shown below.
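That is, inside the repo definition file – stanza shown in outline only, with the other settings as per the repo setup instructions:

    [amberdms-updates]
    ...
    includepkgs=python-*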

Also whilst I have tested these packages for Mozilla Sync Server’s requirements, I can’t be sure of their suitability with existing Python applications on your server, so take care when installing these as there’s always a chance they could break something.

 

RHEL/CentOS 5 & 6 Installation Instructions

Prerequisites:

  1. Configured Amberdms Repositories as per above instructions.
  2. Working & configured Apache/httpd server. The packaged programs will work with other web servers, but you’ll have to write your own configuration files for them.

Installation Steps:

  1. Install packages with:
    yum install mozilla-sync-server
  2. Adjust Apache configuration to allow access from desired networks (standard apache IP rules).
    /etc/httpd/conf.d/mozilla-sync-server.conf
  3. Adjust Mozilla Sync Server configuration. If you want to run with the standard SQLite DB (good for initial testing), all you must adjust is line 44, to set the fallback_node value to the correct reachable URL for Firefox clients.
    vi /etc/mozilla-sync-server/mozilla-sync-server.conf
  4. Restart Apache – due to the way mozilla-sync-server uses WSGI, if you make a change to the configuration, there might still be a running process using the existing config. Doing a restart of Apache will always fix this.
    /etc/init.d/httpd restart
  5. Test that you can reach the sync server location and see if anything breaks. These tests will fail if something is wrong, such as missing modules or an inability to access the database (see the curl sketch after this list).
    http://host.example.com/mozilla-sync/
    ^ should return 404 if working - anything else indicates an error
    
    http://host.example.com/mozilla-sync/user/1.0/a/
    ^ should return 200 with the page output of only 0
  6. There is also a heartbeat page that can be useful when doing automated checks of the service health, although I found it possible to sometimes break the server in ways that would stop sync for Firefox, but still show OK for heartbeat.
    http://host.example.com/mozilla-sync/__heartbeat__
  7. If you experience any issues with the test URLs, check /var/log/httpd/*error_log*. You may also experience problems if you’re using https:// with self-signed certificates that aren’t installed in the browser as trusted, so import your certs properly so they’re trusted.
  8. Mozilla Sync Server is now ready for you to start using with Firefox clients. My recommendation is to use a clean profile you can delete and re-create for testing purposes and only add sync with your actual profile once you’ve confirmed the server is working.
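For a quick scripted version of the checks in step 5, something like the following should do (using the example hostname from the steps above):

    curl -s -o /dev/null -w '%{http_code}\n' http://host.example.com/mozilla-sync/
    # expect: 404

    curl -s http://host.example.com/mozilla-sync/user/1.0/a/
    # expect: 0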

 

Using MySQL instead of SQLite:

I tend to standardise on using MySQL where possible for all my web service applications since I have better and more robust monitoring and backup tools for MySQL databases.

If you want to setup Mozilla Sync Server to use MySQL, it’s best to get it working with SQLite first and then try with MySQL to ensure you don’t have any issues with the basic setup before doing more complex bits.

  1. Obviously the first step should be to setup MySQL server. If you haven’t done this yet, the following commands will set it up and take you through a secure setup process to password protect the root DB accounts:
    yum install -y mysql-server
    /etc/init.d/mysqld start
    chkconfig --level 345 mysqld on
    /usr/bin/mysql_secure_installation
  2. Once the MySQL server is running, you’ll need to create a database and user for Mozilla Sync Server to use – this can be done with:
    mysql -u root -p
    # or without -p if no MySQL root password
    CREATE DATABASE mozilla_sync;
    GRANT ALL PRIVILEGES ON mozilla_sync.* TO mozilla_sync@localhost IDENTIFIED BY 'examplepassword';
    flush privileges;
    \q
  3. Copy the [storage] and [auth] sections from /etc/mozilla-sync-server/sample-configs/mysql.conf to replace the same sections in /etc/mozilla-sync-server/mozilla-sync-server.conf. The syntax for the sqluri line is:
    sqluri = mysql://mozilla_sync:examplepassword@localhost:3306/mozilla_sync
  4. Restart Apache (very important – failing to do so will mean your configuration changes are not applied):
    /etc/init.d/httpd restart
  5. Complete! Test from a Firefox client and check that the table structure has been created with a SHOW TABLES; MySQL query to confirm successful configuration (one-liner below).
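That final check can be done as a one-liner, which will prompt for the password created earlier:

    mysql -u mozilla_sync -p mozilla_sync -e 'SHOW TABLES;'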

 

Other Databases

I haven’t done any packaging or testing for it, but Mozilla Sync Server also supports memcached as a storage backend – there is a sample configuration file supplied with the RPMs I’ve built, but you may need to also build some python26 modules to support it.

 

Other Platforms?

If you want to package for another platform, the best/most accurate resource on configuring the sync server currently is one by Fabian Wenk about running it on FreeBSD.

I haven’t seen any guides to packaging the application, the TL;DR version is that you’ll essentially need server-full, server-core, server-reg and server-storage, plus all the other python-module dependencies – take a look at the RPM specfiles to get a good idea.

I’ll hopefully do some Debian packages in the near future too – will have to work on improving my deb packaging fu.

 

Warnings, issues, small print, etc.

These packages are still quite beta – they’ve only been tested by me so far and there are possibly some things in them that are wrong.

I want to go through and clean up some of the Python module RPMs at some stage, as I don’t think the SPEC files I have are as portable as they should be – commits back are always welcome. ;-)

If you find these packages useful, please let me know in comments or emails, always good to get an idea how people find this stuff and whether it’s worth the late nighters. ;-)

And if you have any problems, feel free to email me or comment on this page and I’ll help out the best I can – I suspect I’ll have to write a Mozilla Sync Server troubleshooting guide at some stage sooner or later.

IBM x3500 M3 Server

I recently got to play with a nice shiny new IBM x3500 M3 server ordered for a customer to replace a previous IBM x3400 M2 that had become a bit too acquainted with a sprinkler system….

These machines offer a good mix of features that makes them suitable for small and medium businesses, with the option for both SAS and SATA drives, dual CPU sockets and up to 192GB RAM in a (large) tower format.

Whilst not for everyone, I love the IBM xseries industrial design.

The only issue is that they sometimes miss certain handy features that competitors like Dell are shipping in their machines – one such feature being eSATA, which I find really handy for small business customers doing backups onto external hard disks.

With the x3500 M3, the server ships with UEFI instead of a legacy BIOS. Sadly it doesn’t seem to speed up the server boot time, but hopefully as they start to build a better design around UEFI this will improve in future releases.

I still have high hopes for what they could accomplish with UEFI, but so far it seems to be mostly a system for booting a BIOS-like mode so I’m not sure what has actually been accomplished other than to add more layers worthy of Inception.

As standard these machines ship with a single power supply, for redundancy you will probably want to order the Redundant Cooling & Power kit to get a second supply, along with several more fans you don’t really want or need.

(Tip: On older models, if you dislodged any fans by accident, the server would think there had been a fan failure and run all the other fans at maximum speed, which is incredibly loud. In normal operation, it should be reasonably quiet with the fan speeds dynamically adjusting.)

Enough fans for a small hurricane.

IBM is moving towards 2.5″ drives being the size of choice, so take care when ordering disks to suit. In the case of the model we purchased, it shipped with 8x 2.5″ SATA/SAS bays as well as a big general bay area with mounts for older existing 3.5″ disks.

I presume this large bay is where additional 2.5″ bays could also be installed if you have particularly large storage requirements.

I do love the tiny new 2.5" drives, pity they can't reduce the size of the rest of the server to suit....

Most likely you’ll be ordering the machine with additional memory to install – take note that these servers (like many of IBM’s) are particularly explicit about which slots the memory modules must be installed into.

And if you’re ordering a lot of RAM take a careful read of the product manual – what I see with the memory installation instructions hints that certain DIMM slots are only usable with a second CPU.

Memory installation instructions are on the side panel/lid.

The best part of the x3500 M3 is that it ships with an IBM Integrated Management Module as a standard feature. This allows full management of the server, including viewing the screen all the way from power on, through UEFI/BIOS and to the OS, remotely via a web browser, eliminating any need for a network connected KVM.

This is particularly great for us, since a customer who is ordering a tower server typically only has a couple machines at the most and isn’t going to want to invest extra money for remote access – having it as a standard feature makes our lives a bit easier without costing extra.

Kernel paniced your box? No worries, a reboot is just a click away!

I was also happy to find that instead of some nasty flash plugin or windows-only application, the IMM browser interface works fine on my Linux machine and even the Java-based KVM functionality works fine under Linux and OpenJDK.

Don't mess with those BIOS settings in that tiny server room, do it from the pub! (or maybe don't, alcohol and BIOS settings sounds like a recipe for disaster....)

The one problem I did have with the IMM is that they made the process of the first login a bit harder than needed, with some obscure default admin user/password details – but then allowed the user to continue using these insecure credentials for ongoing maintenance of the server.

Naturally you’ll want to change the passwords of the IMM, because having randoms log in and reboot your server isn’t exactly desirable… You should also set up and force HTTPS, to ensure there aren’t any insecure connections established sending keystrokes without encryption.

 

I think the IBM x3500 M3 series servers certainly have room to improve – they’re physically overly large, UEFI still boots slowly, the H/W RAID configuration interface leaves a lot to be desired and the lack of a built-in eSATA port is very annoying.

But when it comes to the manageability and expandability of the platform, they hold their own and for businesses with a single primary server I think they’re a great option without needing massive investment in management infrastructure.

Rangitoto Island Adventures

Due to excessive homesickness for Wellington’s hills lately, I decided that it would be nice to visit the next best thing and go climb the local volcano – Rangitoto Island.

Approaching Rangitoto from Auckland CBD via ferry

Rangitoto makes up part of the Auckland Volcanic Field, having erupted less than 600 years ago, and is clearly visible from Takapuna Beach, Mission Bay and various other locations.

This field is now considered dormant, but based on the size of Rangitoto, if any of the volcanoes in this field ever became active again, I’d be getting out of here as fast as I possibly could. (Although all Aucklanders would perish stuck in traffic trying to get out, so I’d probably use the precious moments left to hug my Linux server and tell it how much it means to me instead.)

Rangitoto is a particularly interesting trip, not just because it’s a big volcano slapped alongside NZ’s largest city, but also for its impressive view, many walking trails, interesting human history (Maori, WW2, 20th century) and the fact that you can get extra walks and value by also visiting the neighboring island of Motutapu, which is connected and walkable.

I did only a day trip, but I’m seriously considering a several-day trip out to walk more of the island trails and to camp overnight in the designated camping grounds on Motutapu.

To get there, Fullers run a regular ferry service from Britomart & Devonport out to Rangitoto with several trips a day for $27 return, or $20 if you book online.

There’s a great map and guide which Fullers provides as a download or at the ticket office – if you’re planning a visit I recommend you grab it. Just don’t trust the timetable 100% without checking the exact trip times for the particular day you’re visiting, as if you get it wrong, there’s no overnight accommodation and I think a private water taxi trip back to the CBD would be a bit pricey….

Sitting on the ferry, waiting to depart. Little dubious about the weather.

Cruising out of Auckland CBD, note the harbour bridge in the distance and the North Shore.

Taking the ferry from Britomart rather than Devonport offers some additional views of Auckland’s waterfront, including the cargo port, which I sadly didn’t manage to get pictures of.

As a tip, even if you live on the North Shore, it’s often easier to bus into Auckland CBD and go from Britomart than it is from Devonport, which remarkably always manages to have ridiculous amounts of suburban traffic congestion – I departed from Britomart and returned via Devonport, with the latter taking a good 30mins+ more to get home due to nose-to-tail traffic all the way to Takapuna!

Auckland CBD and cargo port.

Devonport Naval Base with HMNZS Canterbury in port.

Getting a good view of the Devonport Naval Base is pretty neat and offers something that you won’t see around Wellington’s harbor quite so much – when I went past, the HMNZS Canterbury was in port, sadly not out sinking whalers.

Pulling into Devonport Wharf

The William C Daldy Steam Tug at Devonport.

The trip to Rangitoto from Britomart takes around 25-45 mins depending on stopover time at Devonport to load/unload passengers.

Coming into the wharf at Rangitoto

Looking out from Rangitoto towards Auckland.

Rocks, Bush, Sea with a tint of human impact - you'll get a lot of this here.

Once on Rangitoto, the most noticeable trait is the rocks. The entire island is basically one giant pile of jagged rocks (after all, it was formed by a volcano) with plants growing wherever they manage to take root. Often there are weird patches of bare rock with a single plant that has managed to grow in the middle.

The rocks themselves vary from being quite porous, to denser formations formed by lava, and darker rock where lava hit the water. If there are any geologists reading this blog, I’m sure they can comment far more accurately than I ever could about the different formations of rock.

A Geologist's Dreamland?

Porous volcanic rocks litter the island, interestingly I didn't come across any pumice though.

Love the red soil, it's like being in aussie! ;-)

Dense lava flows - this track leads to the lava caves formed by flowing lava leaving a crust/shell which becomes a cave.

After disembarking the ferry I took the most direct path, which pretty much climbs steadily up the mountain until reaching the top lookout. It started off pretty smoothly, but quickly became steeper and had me cursing my fitness, the temperature and the fact that I had to keep pushing, as I didn’t want elderly ladies to beat me up the hill.

It is possible to do Rangitoto by road-train tour (read: trailers pulled by a tractor) which seemed popular with a number of tourists, families and the elderly, but if you’re young/slightly fit, you’ll miss the whole point and many of the better paths on the island by taking it.

The other popular way to get around seems to be jogging – I’m not into running myself, but even I was starting to enjoy leaping from rock to rock towards the end of my day, and it’s certainly a nicer spot for a run than some random Auckland city road.

Once at the top, the view is pretty amazing and makes all the pain getting up the hill worthwhile.

Crater at the top of the island, just in case you forgot you were on top of a giant exploding mountain.

Bow before Jethro the volcano conqueror, puny Aucklanders! (Looking out at CBD and the North Shore/Devonport)

Looking North-West out over the neighboring Motutapu Island

With a view like that, I had to give the new Android/ICS/4.0 panorama feature a go, but even this doesn’t do it justice. (look ma! I’m like a real professional-photo-taking-person!) ;-)

Panorama of Auckland looking south from the top of Rangitoto.

Looking over the North Shore region and a good view of the island below.

There’s a few things to look at up on the summit itself – it’s the home of an old WW2 observation post, and there are a couple of other ruins around as well if you do the crater loop walk track.

Does anyone actually know what these are? Summit markers?

I pity the poor suckers who had to lug this cement all the way up to build these bunkers....

The paths around the island vary a lot. There’s the typical standard dirt walking track, but you’ll sometimes have nice solid wooden walkways or wide rocky roads. Yet at other times, your path will be a barely discernible pile of difficult-to-cross rocks.

What’s also extremely variable is how much the paths vary from wide open places to tight bush tracks – you can quickly go from one extreme to another.

I dare say my good sir, this path looks quite civilized, let us wander along and discuss our plans for high tea.

Wide open spaces - DOC workers use some of these roads with utes, if you're lucky a cute one rolls down the window and smiles at you instead of running you down. :-)

I call this path "The Ankle Breaker"

If you head down from the summit towards Islington Bay, you will have the opportunity to take the optional lava caves path (not that great unless you want to actually go into some caves) and also reach the causeway linking Rangitoto Island with the older and now meadow-covered Motutapu Island.

Causeway linking Rangitoto with Motutapu. Despite all their faults, the Americans did some pretty handy road building whilst in NZ during WW2.

The change in scenery between Rangitoto and Motutapu is startling, explained by the fact that Motutapu was here long before Rangitoto appeared and has no geological links to it otherwise.

Looking out between the two islands.

Sadly I didn’t have time to get over to Motutapu Island, I arrived on the 09:15 ferry and departed on the 15:45 (last one of the day is 17:00) and I pretty much spent the entire time on the move.

Motutapu island has other WW2 sites, beaches and another main bay with a camping site that I would have liked to check out, but it would have been a 3 hour return trip to get there and back and I didn’t fancy gambling with the last ferry of the day home.

One thing to note about Rangitoto (and Motutapu for that matter) is that the timings on the Fuller’s map for the walkways are not to be ignored – I’m a damn fast walker, but at best I was only able to beat the stated times by about 15%. If it says 2 hours, it’s going to take 2 hours, don’t try to rush them.

Islington Bay by the Motutapu causeway is worth a visit, although the serenity is a bit ruined when you have a boatload of partying drinkers playing music in the bay – it’s a popular and accessible area for anyone with a boat.

There’s also the nearby Yankee Bay which has more ramps and could be a bit easier if you’re bringing a small dinghy ashore.

Baches and boat ramps at Islington Bay. There are restored baches around the island.

Calmer, quieter waters.

Ruins of buildings long gone.

After visiting the bay, I took the coastal walk back to Rangitoto Wharf, which is about 2 hours and far more rocky than I realized.

I evidently wasn’t the only one caught out – some poor dude had decided to take the walk carrying a kayak, an airport-style luggage bag on wheels and several camping bags of supplies via this path, rather than the much easier road that would have been 30mins shorter and far, far easier to shift everything on. He would have earned his sleep after arriving at camp that night!

Rocky coastal path - and a random power pole?!?!

The coastal path is mostly bush walking with the occasional open space and scenic sea view. I mostly took it since I wanted to see the mine depot, where sea mines were stored and deployed to protect Auckland Harbour during WW2, but sadly the trail to it was closed, so I was unable to visit or even get close enough for a view of it. :-(

After the whole trip, I was pretty exhausted. Sadly I didn’t get a GPS log of my walking as I needed to conserve phone battery, but it would have been a good number of kilometers!

Your sexy, rugged, and always modest adventurer strutting the Intel Linux propaganda to all the outdoors fitness fans. :-P

It was good having 30mins or so after the walk to just sit and relax waiting for the ferry.

There is cellphone and functional data coverage from parts of Rangitoto – essentially any parts with line-of-sight to Auckland city – if you are addicted to Twitter, Facebook or any of those other hip social media 2.0 things you kids today love. :-P

My ride home after a long day <3

Disembarked at Devonport just as the rain starts.

If I managed to lose this during my walk, would they just leave me stranded on the island?

Overall it was a great trip with some amazing sights and walks and I’d certainly do it again at some point. There are still many parts of the island I have yet to explore – bays with lighthouses, wrecks, quarries and of course everything on Motutapu to see and do.

I was fortunate in that I had an overcast day that, whilst threatening at points, didn’t quite manage to rain, leaving me dry yet not too hot. I would avoid going on a blazing sunny day – when you get walking up hill or on the bush tracks it gets hot fast, and the lava rocks just love to reflect that heat back at you….

Take plenty of sunscreen (I’ve learned this the hard way), sunglasses, food and water. There is no fresh water on the island – I took and consumed around 2.4 liters of water during the 6 hours I was on the island (that’s 4 typical water bottles) and wouldn’t recommend any less for an adult.

You also want some kind of jacket, as it can get cold when exposed and if it gets windy – most noticeably waiting for the ferry on the wharf, where several girls in very skimpy clothing shivered quite visibly. And if it rains, there’s not always much shelter, so be prepared to get wet.

As always, NZ conditions can change quickly and with the length and remoteness of the trails on this island compared to inner city walks, you don’t want to be caught short.

Magical Bank Gnomes

I just couldn’t help myself…. I think it’s the whole commanding “do not reply to this message” that I just can’t ignore.

Maybe sometime in the future an administrator will be cleaning out the SMS spool folder and will find it in the pile of messages from idiotic customers who had to reply to the automated feed. :-)