Author Archives: Jethro Carr

Munin Performance

Munin is a popular open source network resource monitoring tool which polls the hosts on your network for statistics on various services, resources and other attributes.

A typical deployment will see Munin being used to monitor CPU usage, memory usage, traffic across network interfaces, I/O statistics and more – it’s very handy for seeing long term performance trends and for checking the impact that upgrades or adjustments to the environment have made.

Whilst it has some overlap with Nagios, Munin isn’t really a replacement, more an addition – I use Nagios to do critical service and resource monitoring and use Munin to graph things in more detail – something that Nagios doesn’t natively do.

A typical Munin graph - Munin provides daily, weekly, monthly and yearly graphs (RRD powered)

Rather than running as a daemon, the Munin master is invoked by a cronjob every 5 minutes, which calls a sequence of scripts to poll the configured servers and generate new graphs – a typical cron entry is shown after the list below.

  1. munin-update to poll configured hosts for new statistics and store the information in RRD databases.
  2. munin-limits to highlight perceived issues in the web interface and optionally write them to a file for Nagios integration.
  3. munin-graph to generate all the graphs for all the services and hosts.
  4. munin-html to generate the HTML files for the web interface (which is purely static).
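On most distributions these steps are wrapped up by the munin-cron wrapper, which the package drops into cron for you – the exact path and user vary by distribution, so treat this entry as illustrative rather than definitive:

# /etc/cron.d/munin – poll, graph and render every 5 minutes as the munin user
*/5 * * * * munin /usr/bin/munin-cron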

The problem with this model is that it doesn’t scale particularly well – once you start getting a substantial number of servers, the step-by-step approach can run out of resources and time to complete within the 5 minute cron period.

For example, the following are the results for the 3 key scripts that run on my (virtualised) Munin VM monitoring 18 hosts:

sh-3.2$ time /usr/share/munin/munin-update
real    3m22.187s
user    0m5.098s
sys     0m0.712s

sh-3.2$ time /usr/share/munin/munin-graph
real    2m5.349s
user    1m27.713s
sys     0m9.388s

sh-3.2$ time /usr/share/munin/munin-html
real    0m36.931s
user    0m11.541s
sys     0m0.679s

It’s a total of around 6 minutes of runtime – long enough that one run will still be going when the next cron invocation starts.

So why so long?

Firstly, munin-update: its time is mostly spent polling the munin-node daemon running on all the monitored systems, plus a small amount of I/O time writing the new information to the on-disk RRD files.

The developers appear to have realised the scaling issue with munin-update and added the ability to run it in a forked mode – however this broke horribly for me in a highly virtualised environment, since polling 12+ servers all running on the one physical host would cause a sudden load spike and lead to a service poll timeout, with no values being returned at all. :-(

This occurs because by default Munin allows a maximum of 5 seconds for each service query to complete, queries all the hosts and services in rapid succession, and ignores any that fail to respond fast enough. When querying a large number of guests on one physical host, that host would be too loaded to respond quickly enough.

I ended up boosting the timeouts on some servers to 60 seconds (particularly the KVM hosts themselves, as there would sometimes be 60+ LVM volumes that Munin wanted statistics for), but it still wasn’t a good solution and the load spikes continued.

There are some tweaks that can be used, such as adjusting the maximum number of forked processes, but it ended up being more reliable and easier to support to just run a single thread and make sure it completed as fast as possible – and taking 3 mins to poll all 18 servers and save to the RRD databases is pretty reasonable, particularly for a staggered polling session.
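For reference, these knobs live in munin.conf on the master – the directive names and exact behaviour vary between Munin versions, and the host entry below is a made-up example, so treat this as a rough sketch rather than exact syntax and check your version’s documentation:

# /etc/munin/munin.conf – illustrative only
timeout 60           # raise the per-query limit from the 5 second default
max_processes 1      # effectively single-threaded polling

[kvmhost.example.com]
    address 192.168.1.10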

 

After getting munin-update to complete in a reasonable timeframe, I took a look into munin-html and munin-graph – both these processes involve reading the RRD databases off the disk and then writing HTML and RRDTool Graphs (PNG files) to disk for the web interface.

Both processes have the same issue – they chew a solid amount of CPU whilst processing data and then get stuck waiting for the disk I/O to catch up when writing the graphs.

The I/O on this server isn’t the fastest at the best of times, considering it’s an AES-256 encrypted RAID 6 volume, and writing around 200MB of changed data on each run was a bit too much to do efficiently.

Munin offers some options, including on-demand graph generation using CGIs, however I found this just made the web interface unbearably slow to use – although from chats with the developer, it sounds like version 2.0 will resolve many of these issues.

I needed to fix the performance with the current batch generation model. Just watching the processes in top quickly shows the issue with the scripts, particularly with munin-graph which runs 4 concurrent processes, all of them waiting for I/O. (Linux process state crash course: S is sleeping (idle), R is running, D is uninterruptible sleep – almost always waiting on I/O.)

Clearly this isn’t ideal – I can’t do much about the underlying performance, other than considering putting the monitoring VM onto a different I/O device without encryption, however I’d then lose all the advantages of having everything on one big LVM pool.

I do, however, have plenty of CPU and RAM (quad-core Phenom, 16GB RAM), so I decided to boost the VM from 256MB to 1024MB of RAM and set up a tmpfs filesystem, which is an in-memory filesystem.

Munin has two main sets of on-disk data – the RRD databases and the HTML & graph output:

# du -hs /var/www/html/munin/
227M    /var/www/html/munin/

# du -hs /var/lib/munin/
427M    /var/lib/munin/

I decided that putting the RRD databases in /var/lib/munin/ into tmpfs would be a waste of RAM – remember that munin-update is running single-threaded and waiting for results from network polls, meaning that I/O writes are going to be spread out and not particularly intensive.

The other problem with putting the RRD databases into tmpfs is that a server crash or power-down would lose all the data, which would then require some regular process to copy it to a safe place – not ideal.

However the HTML & graphs are generated fresh each time, so a loss of their data isn’t an issue. I set up a tmpfs filesystem for it in /etc/fstab with plenty of space:

tmpfs  /var/www/html/munin   tmpfs   rw,mode=755,uid=munin,gid=munin,size=300M   0 0
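After adding the fstab entry, the filesystem can be mounted in place and checked – Munin will repopulate the (now empty) directory on its next cron run:

# mount /var/www/html/munin
# df -h /var/www/html/munin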

And ran some performance tests:

sh-3.2$ time /usr/share/munin/munin-graph 
real    1m37.054s
user    2m49.268s
sys     0m11.307s

sh-3.2$ time /usr/share/munin/munin-html 
real    0m11.843s
user    0m10.902s
sys     0m0.288s

That’s a decrease from 161 seconds (2.68 mins) to 108 seconds (1.8 mins). It’s a reasonable improvement, but the real difference is the massive reduction in load on the server.

For a start, we can see from watching the processes with top that the processor gets worked a bit more to complete the process, since there’s not as much waiting for I/O:

With the change, munin-graph spends almost all its time doing CPU processing rather than creating I/O load – although there’s the occasional period of I/O as above, which I suspect comes from the time spent reading the RRD databases off the slower disk.

Increased bursts of CPU activity are fine – it actually works out to less CPU load overall, since there’s no need for the CPU to be doing disk encryption, and hammering one core for a short period of time is fine: there are plenty of other cores and Linux handles scheduling for resources pretty well.

We can really see the difference with Munin’s own graphs for the monitoring VM after making the change:

In addition, the host server’s load average has dropped significantly, and the web interface on the server is insanely fast to load – no more waiting for my browser to finish pulling all the graphs down for a page, instead it loads in a flash. Munin itself gives you an idea of the difference:

If performance continues to be a problem, there are some other options, such as moving the RRD databases into memory, patching Munin to do virtualisation-friendly threading for munin-update, or looking at better ways to fix CGI on-demand graphing – but the tmpfs change is a good start.
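If the RRD databases did ever move into tmpfs, the data-loss concern mentioned earlier could be reduced by periodically syncing them back to persistent storage and restoring them at boot – a rough sketch only, with assumed paths:

# illustrative cron entry – copy the in-memory RRDs back to disk every hour
0 * * * * root rsync -a /var/lib/munin/ /var/lib/munin-persistent/

# and at boot (eg from an init script), restore the last copy
rsync -a /var/lib/munin-persistent/ /var/lib/munin/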

find-debuginfo.sh invalid predicate

I do a lot of packaging for RHEL/CentOS 5 hosts; often this packaging is backporting of newer software versions. Typically I’ll pull Fedora’s latest package and make various adjustments to it for RHEL 5’s older environment – things like package name changes, downgrading from systemd to init scripts and correcting any missing build dependencies.

Today I came across this rather unhelpful error message:

+ /usr/lib/rpm/find-debuginfo.sh /usr/src/redhat/BUILD/my-package-1.2.3
find: invalid predicate `'

This error is due to the newer Fedora spec files often not explicitly setting the value of BuildRoot, which then leaves the package to install into the default location – a default which isn’t always defined on RHEL 5 hosts.

The correct fix is to define the build root in the spec file with:

BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)

This will set both %{buildroot} and $RPM_BUILD_ROOT, so whichever syntax you’re using, the files will be installed into the right place.

However, this error is a symptom of a bigger issue – without defining BuildRoot, the package will still compile and complete make install, but instead of the installed files going into /var/tmp/packagename…etc, the files will be installed directly into the actual / filesystem, which is generally ReallyBad(tm).

Now if you were building the package as a non-privileged user, this would have failed at the install phase and you would not have gotten as far as the invalid predicate error.

But if you were naughty and building as the root user, the package would have installed into / without complaint and clobbered any existing files on the build host. The first sign of something being wrong is the invalid predicate error, when the find-debuginfo script gets handed no files.

This is the key reason why you are highly recommended to build all packages as a non-privileged user, so that if the build incorrectly tries to install anything into /, the install will be denied and the user will quickly realize things aren’t installing into the right place.
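Setting up rpmbuild for a non-privileged user on RHEL 5 only takes a minute – the directory layout below is just the usual convention, adjust to taste:

$ mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
$ echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
$ rpmbuild -ba ~/rpmbuild/SPECS/my-package.spec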

Building as root can be even worse than just “whoops, I overwrote the installed version of mypackage whilst building a new one” or “blagh annoying invalid predicate error” – consider the following specfile line:

rm -rf $RPM_BUILD_ROOT/%{_includedir}

On a properly defined package, this would execute:

rm -rf /var/tmp/packagename/usr/include/

But on a package lacking a BuildRoot definition it becomes:

rm -rf /usr/include/

Yikes! Not exactly what you want – of course, running as a non-root user would save you, since that rm command would be refused and you’d quickly figure out the issue.

I will leave it as an exercise for the reader to determine why I commented about this specific example… ;-)

IMHO, rpmbuild should be patched to outright refuse to build packages as the root user so this mistake can’t happen – it seems silly to allow a bad packaging habit when the damage can be so severe.

acpid trickiness

Ran into an issue last night with one of my KVM VMs not registering a shutdown command from the host server.

This typically happens because the guest isn’t listening for (or is configured to ignore) ACPI power “button” presses, so the guest doesn’t get told that it should shut down.
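For reference, if the guests are managed with libvirt, the host-side shutdown is just an ACPI power button event sent to the guest – the domain name here is only an example:

# virsh shutdown centos5-guest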

In the case of my CentOS (RHEL) 5 VM, the acpid daemon wasn’t installed/running so the ACPI events were being ignored and the VM would just stay running. :-(

To install, start and configure to run at boot:

# yum install -y acpid
# /etc/init.d/acpid start
# chkconfig --level 345 acpid on

If acpid wasn’t originally running, it appears that the HAL daemon can grab control of the /proc/acpi/event file, and you may end up with the following error upon starting acpid:

Starting acpi daemon: acpid: can't open /proc/acpi/event: Device or resource busy

The reason can quickly be established with a ps aux:

[root@basestar ~]# ps aux | grep acpi
root        17  0.0  0.0      0     0 ?        S<   03:16   0:00 [kacpid]
68        2121  0.0  0.3   2108   812 ?        S    03:18   0:00 hald-addon-acpi: listening on acpi kernel interface /proc/acpi/event
root      3916  0.0  0.2   5136   704 pts/0    S+   03:24   0:00 grep acpi

Turns out HAL grabs the proc file for itself if acpid isn’t running, but if acpid is running, it will talk to acpid to get its information. This would self-correct on a reboot, but we can just do:

# /etc/init.d/haldaemon stop
# /etc/init.d/acpid start
# /etc/init.d/haldaemon start

And sorted:

[root@basestar ~]# ps aux | grep acpi
root        17  0.0  0.0      0     0 ?        S<   03:16   0:00 [kacpid]
root      3985  0.0  0.2   1760   544 ?        Ss   03:24   0:00 /usr/sbin/acpid
68        4014  0.0  0.3   2108   808 ?        S    03:24   0:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.socket
root     16500  0.0  0.2   5136   704 pts/0    S+   13:24   0:00 grep acpi

 

A tale of two route controllers

Ever since I built a Linux 3.2.0 kernel for my Debian Stable laptop to take advantage of some of the newer kernel features, I have been experiencing occasional short periods of disconnect/reconnect on the Wi-Fi network.

This wasn’t happening heaps (maybe a couple of times a day), but it was starting to get annoying, so I decided to sort it out properly and do a kernel driver and microcode update for my Intel Centrino Wireless-N 1000 card.

The firmware/microcode update was easy enough, simply a case of downloading the latest code from Intel and installing into /lib/firmware/ – the kernel driver does the rest, finding it and loading it into the Wi-Fi card at boot time.

The next step was building a new kernel for my machine. I went through and tuned the module selection very carefully, tossing out all the hardware my laptop will never use, as I was getting sick of wasting lots of disk space on the billion+ device modules in Linux these days.

After finding that my initial kernel lacked support for my video card (turns out the Lenovo X201i laptops still use AGP-based i915 cards – I was assuming PCIe), I got a working kernel up and running.

Except that my Wi-Fi stability problem was worse than ever: instead of losing connectivity every few hours, it was now doing so every few minutes. :-(

The logs weren’t particularly helpful – NetworkManager likes to give reason numbers but I couldn’t easily find a documented explanation of these (but maybe I’m looking in the wrong place).

19:44:36 NetworkManager[1650]: <info> (wlan0): device state change: 8 -> 9 (reason 5)
19:44:36 NetworkManager[1650]: <warn> Activation (wlan0) failed for access point (b201)
19:44:36 NetworkManager[1650]: <warn> Activation (wlan0) failed.
19:44:36 NetworkManager[1650]: <info> (wlan0): device state change: 9 -> 3 (reason 0)
19:44:36 NetworkManager[1650]: <info> (wlan0): deactivating device (reason: 0).
19:44:36 NetworkManager[1650]: <info> (wlan0): canceled DHCP transaction, DHCP client pid 3354
19:44:36 kernel: [  391.070772] wlan0: deauthenticating from 00:0c:42:67:8b:bc by local choice (reason=3)
19:44:36 kernel: [  391.185461] wlan0: moving STA 00:0c:42:67:8b:bc to state 2
19:44:36 kernel: [  391.185466] wlan0: moving STA 00:0c:42:67:8b:bc to state 1
19:44:36 kernel: [  391.185470] wlan0: moving STA 00:0c:42:67:8b:bc to state 0
19:44:36 wpa_supplicant[1682]: CTRL-EVENT-DISCONNECTED - Disconnect event - remove keys
19:44:36 NetworkManager[1650]: <error> [1337240676.376011] [nm-system.c:1229] check_one_route(): (wlan0): \
         error -34 returned from rtnl_route_del(): Netlink Error (errno = Numerical result out of range)
19:44:36 kernel: [  391.233344] cfg80211: Calling CRDA to update world regulatory domain
19:44:36 avahi-daemon[1633]: Withdrawing address record for 192.168.1.11 on wlan0.
19:44:36 avahi-daemon[1633]: Leaving mDNS multicast group on interface wlan0.IPv4 with address 192.168.1.11.
19:44:36 avahi-daemon[1633]: Interface wlan0.IPv4 no longer relevant for mDNS.
19:44:36 avahi-daemon[1633]: Withdrawing address record for 2407:1000:1003:99:226:c7ff:fe66:b822 on wlan0.
19:44:36 avahi-daemon[1633]: Leaving mDNS multicast group on interface wlan0.IPv6 with address 2407:1000:1003:99:226:c7ff:fe66:b822.
19:44:36 NetworkManager[1650]: <info> (wlan0): writing resolv.conf to /sbin/resolvconf
19:44:36 avahi-daemon[1633]: Joining mDNS multicast group on interface wlan0.IPv6 with address fe80::226:c7ff:fe66:b822.
19:44:36 avahi-daemon[1633]: Registering new address record for fe80::226:c7ff:fe66:b822 on wlan0.*.

So I proceeded to debug:

  1. Cursed and wished my 300m spool of Cat6 ethernet wasn’t in Wellington.
  2. Rolled back the microcode update – my initial thought was that the new code was making the card unstable and the result was the card dropping the connection and NetworkManager doing the clean up.
  3. Did a full power down to make sure that the microcode wasn’t remaining active on the card across reboots (had this problem with a dodgy GPU once).
  4. Verdict: Microcode upgrade was OK, must be something else.
  5. Upgraded NetworkManager from 0.8.1 to 0.8.4 from Debian Backports – 0.8.1 isn’t too recent, was tempted to try 0.9 series but would have required a lot more backporting work.
  6. Verdict: Appears not to be a NetworkManager issue in the 0.8 series – maybe something fixed in 0.9 or later?
  7. Upgraded wpasupplicant from 0.6.10 to 1.0 by manual backport from unstable – the activation error made me consider it might have been a bug with newer kernels & wpasupplicant’s AP negotiation.
  8. Verdict: No change to the issue.
  9. Built a Linux 3.3 kernel with the older, less-crashy 3.2 iwlwifi driver to see if it was driver specific, or otherwise kernel-related.
  10. Verdict: Same issue continued to occur; rolling back the driver version in fact made no change – something about the 3.3 kernel itself was the problem.
  11. Got suspicious about NetworkManager – either it or the kernel had to be at fault; one possibility was some weird API breakage given the age gap between the software versions being used. The kernel is *usually* pretty solid, and something like Wi-Fi drivers dropping every couple of minutes would be a pretty serious bug to get through, so I looked through NetworkManager’s logs to see if I could get anything more useful.
  12. Spotted a kernel error “ICMPv6 RA: ndisc_router_discovery() failed to add default route.”. This error tended to occur shortly before any Wi-Fi disconnection occurred, but not immediately so.
  13. Found an entry in Red Hat’s bugzilla.
  14. And then the upstream bug fix from 19th April.

Turns out that the Linux 3.3 kernel and NetworkManager fight over which one is going to control the default route for each router-advertised link – the kernel adds one, NetworkManager removes it, and then the kernel gets upset and drops all router advertisements.

In hindsight, I should have spotted it sooner, but I had initially discarded the RA message as being related, since the disconnection often didn’t happen till a minute or two after the log entry occurred – eg:

19:51:40 kernel: [  814.274903] ICMPv6 RA: ndisc_router_discovery() failed to add default route.
19:52:47 NetworkManager[1650]: <info> (wlan0): device state change: 8 -> 9 (reason 5)

What’s interesting about this bug, is that at first reading it explains a loss of IPv6 connectivity perfectly – however it doesn’t explain why IPv4 or the Wi-Fi connection itself was impacted.

The reason this happened is that NetworkManager was set to have IPv6 as a requirement for that connection to be established – in the event of IPv6 not working, NetworkManager would consider the interface to be down, even if IPv4 was up.

There is a good reason for this, that the developers detailed on their (excellently written) blog, explaining that by having NetworkManager check for IPv6, it allows applications to be written smarter to better understand their level of connectivity.

For users of the NetworkManager 0.9 series, there’s a patch already committed which you can grab here and I would expect the next NetworkManager update will have this fix.

If you’re on the NetworkManager 0.8 series, this patch won’t apply cleanly – I might make some time to go and backport it, but you can work around it for now by using the Ignore method, so that NetworkManager does nothing and leaves it up to the Linux kernel in the background to negotiate IPv6 addressing.

Breaking vs Working Network Manager Settings
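If you’re editing the connection by hand rather than through the GUI, the equivalent in a NetworkManager keyfile is roughly the following – the file path and connection name are just examples:

# /etc/NetworkManager/system-connections/home-wifi
[ipv6]
method=ignore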

Of course if you’re not connecting to any IPv6 capable networks, you don’t have anything to worry about (other than the fact you’re still stuck in the 20th century).

 

Initially I was a bit annoyed at NetworkManager for being so silly as to drop the whole interface when just one of the two networking stacks was broken. However, after thinking about it for a bit, it does make some sense why it chose that behavior – most interface issues can be fixed by reconnecting: maybe the AP got rebooted, maybe the laptop just moved between two of them, etc – a reconnect can solve many of these.

But a smarter approach would be to determine whether network issues are layer 2 or layer 3 – if it’s just a layer 3 issue, then there’s little need to drop the Wi-Fi connection itself; instead attempt to re-establish IPv4 or IPv6 connectivity where appropriate, and if unable to do so, use notifications to tell the user that “IPv6 connectivity is experiencing a problem, some hosts and services may be unreachable”.

It’s actually something that Windows does semi-OK – it figures out roughly how borked a user’s connection is and then does a balloon popup stating that there’s limited connectivity, an IP conflict, or some other sometimes-helpful message.

This may be better in newer versions of NetworkManager, I’ll have to have a play with a more recent release and see.

Early Morning Auckland

My good friend @LGnome was transiting via Auckland and had a day to spend up here to see the sights. Naturally I delivered with one near side-swipe, two cars running a red light right in front of us and congested roads.

I also had to get up early (06:00) to get to the airport, before heading into the CBD to get some decent breakfast and coffee, taking a few early morning pics along the way – it’s amazing how much nicer Auckland is earlier in the morning when the roads are dead.

Because Shaky Isles doesn’t seem to open before 08:00 on a Sunday, we went for a wander around Auckland for a bit first and I got a few decent pics with my trusty professional grade photographer setup.

Good morning Mr Sun!

I do like Wynyard Quarter's mix of restaurants and industry – you get some pretty big ships in there at times.

"Gateway To The Cloud" (punny since the Sky Tower is one of NZ's main network exchanges)

No early morning is complete without coffee from Shaky Isles. :-D

Up in Mt Victoria, not a lot of traffic (car or boat) early Sunday morning.

Pimping my ride with high pitch painful sounds

I got my car back from the repair shop on Friday following its run-in with the less pleasant residents of Auckland, with all the ignition and dash repaired.

Unfortunately the whole incident cost me at least $500 in excess payments, not to mention future impacts to my insurance premiums, so I’m not exactly a happy camper, even though I had full insurance.

Because I really don’t want to have to pay another $500 excess when the next muppet tries to break into it, I decided to spend the money to get an alarm installed to deter anyone trying to break in again – going to all the effort of silencing an alarm on a 1997 Toyota Starlet really won’t be worth it, sending them on to another, easier target.

(I did consider some of those fake stickers and a blinky LED, but a real alarm does mean that if you hit the car, you’ll quickly get a chirp confirming there is an alarm present. Plus I get one of those chirpy remote controls to unlock the doors! :-D)

I do really hate car alarms, but it’s worth it to have something that will send anyone messing with my car running before it wakes up half the apartment complex.

I wanted to get a decent alarm installed properly and ended up getting referred to Mike & Lance at www.carstereoinstall.co.nz, who do onsite installation visits – really handy, and totally worth it after seeing all the effort needed to do the installation.

Car electronics spaghetti! Considering this is a pretty basic 1997 car, I'd hate to think what the newer ones are like...

There’s a bit of metal drilling, cable running, soldering, disassembling parts of the car’s interior and trying to figure out which cables control which features of the car – all up it took two guys about 2 hours to complete.

It cost about $325 for the alarm and labor, plus an extra $40 as they had to run wires and install switches for the boot, which is pretty good when you consider it’s a 4 man-hour job – it would have taken all day if I’d done it at noob pace.

Would recommend these guys if you’re in Auckland. As an extra bonus, Mike turned out to be an ex-IT telco guy so we had some interesting chats – NZ is such a small world at times :-/

Up Mt Kaukau

When I was in Wellington last month I caught up with my good mate Tom (of #geekflat fame) and we decided to go for a wander up Mt Kaukau with Tom’s friend Nicola.

I spent most of my years in Wellington focusing on the CBD and southwards, so Johnsonville, Khandallah and their surrounding walks are quite new to me.

We took the route up from Johnsonville, going up to the peak and then back down the Khandallah side, before walking back through the suburbs, near the rail line, to Johnsonville.

The Wellington City Council has a good map of the Northern Walkway available for download showing the route; I also quickly whipped up a rough Google map of the start & exit points I took along with the route diagram. I should really record more accurate GPS tracks with my phone, but that stuff loves chewing up the battery quickly so it’s not always possible.

Starting our climb up…

It's a @macropiper! By a tunnel! (Turns out this tunnel is for the old water reservoir pipe).

TV transmission tower in the distance - it's visible clearly down on street level in Johnsonville and looks a long way away from there - not really too hard getting there though.

It's Welly! So pretty!

Uh-oh, what has Tom found?

Not a kitteh!

Will these landmark TV transmission towers still be relevant in 25 years time after everything has been replaced with IP over fibre?

I love this city!

Wellington suburbs lapping at the foothills.

Harbour view, love the trail of the turning cargo ship.

Panorama view over the harbour, CBD, suburbs and out towards Makara on the far right. Not very visible is the large wind farm out that way. Pictures don't really do the view from up here justice.

Anyone know what this weird tree is?

Johnsonville rail line

It was a pretty good walk all up, not too long or taxing, but with a rewarding view and an excuse to wander through the suburbs for the first time.

We came across a few promising looking cafes hidden in weird places in the suburbs whilst on the return walk, if I have more time in Wellington again soon I wouldn’t mind checking a few of them out, particularly one which was busy pulling home made pies out of the oven….

If you take a look at the council map for the Northern Walkway, it’s actually possible to walk all the way from Johnsonville to the Botanic Gardens, staying mostly in parks with a few detours through streets. This route is also part of the Te Araroa walk, so good practice for me for when I’m ready to do it. :-)

Takapuna to Devonport

Working from home for the past 7+ months has left me with strong urges to get out and about on the weekends, lest I go crazy from being cooped up inside – whilst my inner geek’s urges to sit in front of my laptop and code are strong, getting outside for a walk, seeing new places and new people always puts me in a better mindset for when I get home to do a large coding session in the evening afterwards. ;-)

For the last two weekends I’ve done the Takapuna to Devonport (Green Route) walk, a pathway I discovered purely by chance whilst walking to Devonport along the main road, thanks to an entrance onto a park right at the start of the WW2 memorial tree-lined road, half-way into my journey.

It takes you through a number of parks that I didn’t even know existed, over the marshlands and through some of the older streets towards Devonport with their characteristic turn-of-the-century houses (Devonport was established as a suburb around 1840 and is one of Auckland’s older suburbs).

There’s a handy map you can download from the council here, and the whole route is walk & cycle safe. It’s certainly the better route to take – the road route between Takapuna and Devonport should be avoided at all costs, considering it’s always congested and overloaded with traffic, as there is only one road from Devonport all the way up to Takapuna in order to get onto the motorway.

Having made the mistake of trying to drive to Devonport once before, I’d avoid it at all costs – you’d get from Devonport to Takapuna faster by taking the ferry to Britomart and a bus from there, IMHO. Nose-to-tail traffic the whole way on a Sunday evening isn’t that fun, not to mention the nightmare of finding car parking in Devonport itself.

Traffic backed up from the Esmond Rd - Lake Rd junction. It's like this for a good suburb or two, even on weekends. :-/

The sane way for non-car loving Aucklanders to get around.

The route signage is pretty good, although I found that whilst Devonport-to-Takapuna was almost perfect in directional signage, the Takapuna-to-Devonport approach has a few bits that are a little confusing if you haven’t done it the other way first.

There’s also a complete nightmare in terms of cycle vs pedestrian marking, something that the North Shore City Council loves doing, such as alternating conventions of left vs right side for cyclists – something I’ll cover in a future post. :-/

The route doesn’t seem particularly busy; most of the activity I saw was people in the various parks the route crosses through, rather than others completing the same route as me – I expect the length deters them a bit (it took me around 1.5hrs).

Starting from Takapuna/Esmond Road, the route goes firstly through the newer suburbs of Takapuna, with a weird suburban/industrial mix of some lovely power pylons running along the street.

Ah, the serenity! :-D

TBH, Takapuna’s suburbs bore me senseless – they’re a giant collection of 1970s–2012 housing projects, with a very American-dream type feel at times. Thankfully one soon escapes to the parks and walkways along the marshy coast.

Marshy land, Auckland Harbour bridge in the distance.

One of several boardwalks so you won't get your feet/wheels muddy - unless you want to. :-)

Long bridge is long! (Kind of reminds me of Crash Bandicoot's Road to Nowhere.) If the ground is dry, you could brave cycling alongside it through the marsh – a few tracks suggest this is somewhat popular.

The route slowly starts getting more parks and greenery, with small intermissions of going back along suburb streets, before rejoining more natural routes.

Got a skateboard? And a hoodie? This is the place for you to hang in this otherwise quite empty grassy field called a park.

/home/devonport_residents/.Trash/ (that's a recycling bin joke for you windows users!)

Once you come out of the park, you end up walking through a few blocks of Devonport’s residential area, before coming out onto the main street and along to the shopping and cafe area.

An old church, where Aucklanders worship their god "Automobile".

I quite like Devonport – it has a good number of cafes, bars, the waterfront, classic architecture (not bland corporate crap like Takapuna) and generally has charm.

If I was going to live in Auckland long term, I’d seriously consider Devonport as a good place to have a house; I’d even consider not bothering with a car, depending on the availability of a good, close supermarket.

Of course this assumes working in the CBD or from home, so you can just take the ferry into the CBD, rather than needing to mess around with commuting up to the motorway and into the city every day. If a car-based commute is vital, you might want to do Devonport a favor and go live in a less classy suburb with closer motorway access.

Knitted handrails! This place has style!

Vertical water accelerator.

I stopped for a coffee at one of the several cafes around the main street with an outside area and was pleasantly surprised for a change – I didn’t even see a Starbucks there!

The local population appears to include a lot of members of the baby boomer generation, plus both resident and visiting families attracted to the parks and waterfront.

As I was there, I decided to make the short climb up Mt Victoria (*curses settlers who named about 50 million places in NZ Mt Victoria*) and get a good look out over the area. In typical Auckland fashion, it is entirely possible to drive right up to the top, or take a segway tour, but despite the name it’s really just a medium-sized hill, nothing compared to Wellington stuff.

Looking out towards Okahu & Mission Bay. Start to get an idea why Auckland is the "City of Sails".

Our old friend Rangitoto island again. Incidentally, Mt Victoria itself is also a volcano, just not anywhere nearly as large.

Looking out over houses towards North Head.

Auckland CBD

Panorama out towards Rangitoto

Panorama showing Auckland CBD on left, Devonport centre and Takapuna in the horizon on the right.

I didn’t know anything about it other than it was a big hill, so damn it, I was going to climb and conquer it. It turns out it was part of Auckland’s early military history, with a large disappearing gun (a BL 8 inch Mk VII naval gun) installed in 1899, well before WW2 – it seems NZ has a number of good examples of these interesting pre-WW1 weapons.

The magical disappearing cannon!

Fuck being the poor suckers who had to lug this all the way up the hill. :-/

Mushroom vents hint to a large underground complex - sadly closed to the public.

One thing I missed is the other large hill in the area – North Head – which offers a much larger selection of 1800s to WW2 relics, including tunnels and additional guns which are open to the public.

Devonport has had a long military history and is home to New Zealand’s main naval base, dating back to 1841 – there are usually a couple of ships berthed to look at, or sometimes coming and going, offering some neat photo opportunities.

I tend to find that Auckland really hides its interesting stuff – I lived in Takapuna for months before I discovered the existence of many of these interesting walkways and sights; in many cases they just aren’t advertised and from a distance you don’t get an idea of how interesting some of these places can be. (Mt Victoria and North Head look just like plain hills with some sheds on them from sea level.)

That’s why I love exploring on foot – find so many gems, look them up online, find another 5 related ones to go and check out. :-) And don’t be afraid to take random interesting-looking paths to see where they lead; it’s how I find many places – including many of Wellington’s paths and walkways.

 

After the trip up Mt Victoria, I wandered back down and along the waterfront – turns out it’s a fantastic place to get close up shots of any large ships passing by.

Rena-sized cargo ship – gives an idea of how massively large these things are when seen up close. See the little speedboat to the right for an idea of the size difference. :-D

I ended up heading to the ferry terminal to get the ferry over to Britomart to catch up with friends – it took less than 15 mins to board and cross over the harbour for $6 (frequent traveller discounts available).

This is Fuller Ferry, requesting Devonport wharf command center to lower defence grid for safe docking.

Cruising in to the Britomart ferry terminal, past the Rugby World Cup "Cloud" event center.

Finally I wrapped up the day with a delicious coffee and snack at my much loved Shaky Isles before they closed (closing time is 17:00 on weekends, FYI).

Om nom nom (totally not addicted to chocolate)

If you don’t live in Takapuna and want to reproduce this walk, I’d recommend taking the Northern Express (NEX) bus to Akoranga Station, or the normal Takapuna buses to the shopping center, doing the walk to Devonport and then ferry back into Britomart.

It’s an easy day trip and could be as short as 3-4 hrs or as long as an entire day depending what sights and coffee you decide to partake in whilst at Devonport.

The other approach is to do Takapuna – Devonport & return, something that might appeal particularly if wanting to do it by bike rather than on foot; there’s a bit more parking around Takapuna, particularly the Fred Thomas Drive area near Akoranga Station, to drive to with your bikes.

Shared data at last!

New Zealand is always pretty pricey for telecommunications, particularly mobile services including data, something I’ve mentioned before when detailing my move from Vodafone to 2degrees mobile.

Whilst data has slowly been getting cheaper, there’s been an emerging problem of people owning multiple 3G capable devices, such as a mobile phone, tablet and a laptop and wanting to get them all online, but not wanting to pay an expensive data plan for each device.

I personally ended up just using tethering on my phone to connect my laptop, even though my laptop has a built-in 3G modem, simply because my maybe once-a-week laptop usage wasn’t enough to justify the cost of a separate plan, particularly when only needing a few hundred MB at most.

However 2degrees have now just announced new shared data plans, where you pay an extra $5 a month, plus $1 per device to be able to share the data plan from another account (for up to 5 devices).

This is pretty compelling – I can go buy a bunch of prepaid SIMs for my laptop, spare phones and USB 3G stick and be able to use them all with my shared data at a fraction of the cost compared to doing so on Telecom or Vodafone, who are going to need to up their game if they want to retain multi-device users.

2degrees certainly aren’t sitting still – in the past week they’ve announced both this shared data service and their Touch2Pay arrangement with Snapper, to allow the use of smartphones with NFC for payment on the Snapper network (used for Wellington and Auckland buses). The future is looking bright for them.

Fixing Blogging

I’m finding an increasing number of friends and people using services like Tumblr or Google Plus as blogging services, or at least as a place to make posts that are more detailed and in-depth than typical micro-blogging (aka Twitter/Facebook).

The problem with both these services, is that they deny interaction from external users who aren’t registered with their service.

With traditional blogging platforms such as WordPress, Blogger, or other custom developed blogs, any visitor to the blog could read it and post comments – the interfaces would vary, the ease of posting would vary and the method of validating a post would vary, but 99% of the time you could still post comments and engage with the author.

This has not been the case with social networks to date – platforms like Twitter or Facebook require a user to be logged in, in order to communicate with others – however this tends to work OK, since they’re mostly used for person-to-person messages and broadcasting, rather than detailed posts you would send to users outside of those networks (after all, 140-character tweets aren’t exactly where you’ll debate things of key meaning).

The real issue starts with half-blog, half-microblog services such as Tumblr and Google Plus, which users have started to use for anything from cat pictures to detailed Linux kernel posts, turning these tools into de-facto blogging platforms, but without the freedom for outsiders to post comments and engage in conversation.

 

Tumblr is one of the worst networks, as it’s very much designed as a glorified replacement for chain email forwards – you post some text or some pictures and all your friends “reblog” your page if they like it and users all pat themselves on their back at how witty and original they all are.

But to make a comment, one must reblog the post, add a comment and have it end up in the pages-long list of reblog and like statements at the bottom of the post. And if the original poster wants to comment on that, they’d have to re-blog your reblog. :-/

Yo dawg, we heard you like to reblog your reblogging.

The issue is that more people are starting to use it for more than just funny cat pictures and treat it as a replacement to blogging, which makes for a terrible time engaging with anyone. I have friends who use the service to post updates about their lives, but I can’t engage back – makes me feel like some kind of outcast stalker peering through the windows at them.

And even if I was on Tumblr, I’d actually want to be able to comment on things without reblogging them – nobody else cares if Jane had a baby, but I’d like to say “Congrats Jane, you look a lot less fat now the fork()ed process is out” to let my friend know I care.

Considering most Tumblr users are going to use Facebook or Twitter as well, they might as well use the image and short statement posting features of those networks and instead use an actual blog for actual content. Really the fault is due to PEBKAC – users using a bad service in the wrong way.

 

Google Plus is a bit better than Tumblr, in the respect that it actually has expected functionality like posts you can comment on, however it lacks the ability for outsiders to post comments and engage with the author – Google has been pretty persistent with trying to get people to sign up for an account, so it’s to be expected somewhat.

I’ve seen a lot of uptake with Google Plus by developers and geeks, seemingly because they don’t want the commitment of actually using a blog for detailed posts, but want somewhere to post lengthy bits of text.

Linus Torvalds is one particular user whom I might want to follow on Google Plus, but there’s not even RSS if you wanted to get updates on new posts! (To get RSS, you’d have to use external third-party services.)

Tumblr at least has RSS so I can still use it in my reader like everything else, even if I can’t reply to the author….

Follow Linus! Teenage fanboy Jethro squeee!

And of course with no ability for outsiders to post comments, I can’t post comments requesting Linus’s hand in marriage after he merges a kernel bug fix for my laptop. :-(

 

So with all these issues, why are users adopting these services? After all, there are thousands of free blogging services, several well known and very good ones, all better technical options.

I think it’s a combination of issues:

  • Users got overwhelmed by RSS – we followed everything we loved, then got scared by the 10,000 unread posts in our readers – and responded by simply not opening the reader, in fear of the queue waiting for us. The social media style approaches used by Google+, Tumblr and of course Twitter and Facebook focus less on following every single post by users, but rather on what’s happening here and now – users don’t feel bad if they miss reading 1,000 posts overnight, they just go on to the next.
  • Users love copying. The MPAA & RIAA love this fact about humans, we love to copy and share stuff with others. Blogging culture tends to frown on this, but Tumblr’s reblogging style of use makes it more acceptable and maintains a credit trail.
  • Less commitment – if I started posting pictures of funny cats or one-paragraph posts on this blog, it wouldn’t be doing it justice or up to the level of quality readers expect. However on social network based services this is OK; there’s no expectation of a certain level of presentation and effort in a post. A funny cat picture followed by a post raging about why GNU Hurd will always be better than BSD is acceptable – on a blog, you’d drop the funny cat and be expected to write a well detailed post explaining your reasoning. Another label would be that it’s “more casual” than conventional blogs.
  • Easier interactions with your readers (at least with Google+) – there are no standards in blogging for handling notifications to users about changes to your blog or replies to comments. Even WordPress, one of the most popular platforms, doesn’t provide native email notifications for comments.
  • I noticed a major improvement in the level of interaction between myself and my readers after adding Subscribe to Comments Reloaded plugin to this site, using email notifications to users about replies to my blog post. And considering how slack many people are with checking their email, I do wonder how much better it would be if I added support for notification to new posts and comment replies via Twitter or Facebook.
  • Conventional blogs tend to take a bit more effort to post comments, some go overboard with captcha input fields that take 10 attempts or painful comment validation. I’ve tried to keep mine simple with basic fields and dealing with spam using Akismet rather than captcha (which has worked very well for me).

In my opinion the biggest issue is the communication, notification and interaction problem noted above. I don’t believe we can fix the cultural side of users, such as the crap they post or the inability to actually make the effort to read their RSS, but we can go some way towards improving the technology to reduce/eliminate some of the pain points, to encourage use of the services.

There have been some attempts to address these issues already:

  • Linkback techniques such as Pingback address the issue of finding out who’s linking to your blog (although I turned this off as I found it really spammy and I get that information out of awstats anyway).
  • RSS handles getting updates of new posts on a polling basis and smarter RSS readers offer better filtering/grouping/etc.
  • Email notifications for blog comments and updates.

But it’s not good enough yet – what I’d actually like to see would be:

  • Improvement of linkback techniques to spam pages less, potentially with the addition of some AI logic to determine whether the linkback was just “check out this cool post!” or some actual useful content that readers of your post would like to read (such as a rebuttal).
  • Smarter RSS readers that act more like social network feeds, to give users who want more of a “live stream” feel what they want.
  • Live commenting technology – not all users have push email, so email notifications kind of suck for many users. A better solution would be to use the existing XMPP standard to send notifications to the user’s XMPP server (anyone using Gmail already has an XMPP service with them and numerous geeks run their own – like me ;-), so the user gets a chat message pop up. If the message format was standardized, it would be possible to have the IM client recognize it was a blog comment reply and hand off to the installed RSS reader for a better UX – or fall back to posting text with a link to the reply for compatibility with any standard XMPP client. (A minimal sketch of the sending side follows this list.)
  • (I did see that there is an outdated plugin for XMPP on WordPress, as well as some commercial live-commenting packages that hook into social networks, but I really want a proper open source solution that does everything in one plugin, so there’s a more seamless UX – rather than having 20 checkboxes for which method the user would like notifications via.)
  • Whilst mentioning XMPP, we could even consider replacing RSS with XMPP-based push notifications – blog servers sending out a push message when they get an update, rather than readers polling services. The advantage is near-instant updates of new posts and potentially less server load, by not having thousands of wasted polls when there hasn’t been any update to fetch.
  • Comment reply via notification support. If you send someone an XMPP IM, email, tweet, virtual sheep or whatever to alert them to a comment or blog post, they should be able to reply via that native medium and have the blog server interpret, validate and integrate that reply into the page.
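To illustrate just how little is needed on the sending side, here’s a minimal sketch using the SleekXMPP Python library – the account details, recipient and message format are all made up for the example, and a real plugin would obviously need to hook into the blog’s comment events:

# Minimal sketch: push a comment-reply notification to a reader over XMPP.
# All account details below are illustrative only.
import sleekxmpp

class CommentNotifier(sleekxmpp.ClientXMPP):
    def __init__(self, jid, password, recipient, body):
        sleekxmpp.ClientXMPP.__init__(self, jid, password)
        self.recipient = recipient
        self.body = body
        self.add_event_handler("session_start", self.session_start)

    def session_start(self, event):
        # Once logged in, announce presence, deliver the one message, disconnect.
        self.send_presence()
        self.get_roster()
        self.send_message(mto=self.recipient, mbody=self.body)
        self.disconnect(wait=True)

if __name__ == "__main__":
    xmpp = CommentNotifier("blog@example.com", "secret",
                           "reader@example.com",
                           "New reply to your comment: http://example.com/post#comment-42")
    if xmpp.connect():
        xmpp.process(block=True)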

My hope is that with these upgrades, blogging platforms will extend themselves to be better placed for holding up against social networking sites, making it easier to have detailed conversations and long running threads with readers and authors.

Moving to a new-generation communication platform built around the existing blogging platforms would be as much of an improvement for real time social responsiveness as shifting from email to Twitter, and hopefully the uptake in real time communications will bring more users back to decentralised, open and varied platforms.

I’m tempted to give this a go by building a WordPress plugin to provide unified notifications using XMPP / Email / Social Media, but it’ll depend on time (lol who has that??) and I haven’t done much with WordPress’s codebase before. If you know of something existing, I would certainly be interested to read about it and I’ll be taking a look at options to build upon.