Tag Archives: open source

All posts relating to Open Source software, mostly but not exclusively UNIX focused.

SMStoXMPP

Having moved to AU means that I now have two cell phones – one with my AU SIM card and another with my NZ SIM card which I keep around in order to receive the odd message from friends/contacts back home and far too many calls from telemarketers.

I didn’t want to have to carry around a second mobile, and the cost of having a phone on roaming in AU makes it undesirably expensive to keep in touch with anyone via SMS messaging, so I went looking for a solution that would let me get the SMS messages from my NZ phone to my laptop and primary phone in a more accessible form.

I considered purchasing an SMS gateway device, but they tend to be quite expensive and I’d still have to get some system in place for getting messages from the device to me in an accessible form.

Instead I realised that I could use one of the many older Android cellphones that I have lying around as a gateway device with the right software. The ability to run software makes them completely flexible and with WiFi and 3G data options, it would be entirely possible to leave one in NZ and take advantage of the cheaper connectivity costs to send SMS back to people from within the country.

I was able to use an off-the-shelf application “SMS Gateway” to turn the phone into an SMS gateway, with the option of sending/receiving SMS messages via HTTP or SMTP/POP3.

However emails aren’t the best way to send and reply to SMS messages, particularly if your mail client decides to dump in a whole bunch of MIME data. I decided on a more refined approach and ended up writing a program called “SMStoXMPP”.

Like the name suggests, SMStoXMPP is a lightweight PHP-based SMS-to-XMPP (Jabber) bi-directional gateway application which receives messages from an SMS gateway device/application and relays them to the target user via XMPP instant messages. The user can then reply via XMPP and have the message delivered via the gateway to the original sender.

For me this solves a major issue and means I can leave my NZ cell phone at my flat or even potentially back in NZ and get SMS on my laptop or phone via XMPP no matter where I am or what SIM card I’m on.

To make conversations even easier, SMStoXMPP does lookups of the phone numbers against any CardDAV address book (such as Google Contacts) and displays your chosen name for the contact. It also provides search functions to make it easier to find someone to chat to.

Chatting with various contacts via SMStoXMPP with Pidgin as a client.

I’ve released version 1.0.0 today, along with documentation covering installation, gateway configuration, and how to write your own gateways if you wish to add support for other applications.

Generally it’s pretty stable and works well – there are a few enhancements I want to make to the code and a few bits that are a bit messy, but the major requirements of not leaking memory and being reliably able to send and receive messages have been met. :-)

Whilst I’ve only written support for the one Android phone based gateway, I’m working on getting a USB GSM modem to work, which would also be a good solution for anyone with a home server.

It would also be trivial to write in support for one of the many online HTTP SMS gateways that exist if you wanted a way to send messages to people and didn’t care about using your existing phone number.

 

Don’t abandon XMPP, your loyal communications friend

Whilst email has always enjoyed a very decentralised approach where users can be expected to be on all manner of different systems and providers, Instant Messaging has not generally enjoyed the same level of success and freedom.

Historically many of my friends used proprietary networks such as MSN Messenger, Yahoo Messenger and Skype. These were never particularly good IM networks; rather, what made them popular at the time was the massive size of their user bases, forcing more and more people to join in order to chat with their friends.

This quickly led to a situation where users would have several different chat clients installed, each with its own unique user interface and functionality, in order to communicate with one another.

Being an open standards and open source fan, this has never sat comfortably with me – thankfully in the last 5-10 years, a new open standard called XMPP (also known as Jabber) has risen up and seen widespread adoption.

XMPP brought the same federated decentralised nature that we are used to in email to instant messaging, making it possible for users on different networks to communicate, including users running their own private servers.

Just like with email, discovery of servers is done entirely via DNS and there is no one centralised company, organisation or person with control over the system – each user’s server is able to run independently and talk directly to the destination user’s server.
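This DNS-based discovery is easy to poke at yourself – standard SRV records tell clients and other servers where to connect. A quick look with dig, using jabber.org purely as an example:

$ dig +short SRV _xmpp-client._tcp.jabber.org   # where clients connect (port 5222)
$ dig +short SRV _xmpp-server._tcp.jabber.org   # server-to-server federation (port 5269)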

With XMPP the need to run multiple different chat programs or connect to multiple providers was also eliminated.  For the first time I was able to chat using my own XMPP server (ejabberd) to friends using their own servers, as well as friends who just wanted something “easy” using hosted services like Google Talk which support(ed) XMPP, all from a single client.

Since Google added XMPP to Google Talk, the XMPP user base has grown even larger, thanks to the strong popularity of Gmail creating so many Google Talk users at the same time. With so many of my friends using it, it has been easy to add them to my contacts and interact with them on their preferred platform, without violating my freedom or losing control over my server ecosystem.

Sadly this is going to change. Having gained enough critical mass, Google has now decided to violate their “Don’t be evil” company motto and is moving to lock users into their own proprietary ecosystem, by replacing their well established Google Talk product with a new “Hangouts” product which drops XMPP support.

There’s a very good blog write up here on what Google has done and how it’s going to negatively impact users and how Google’s technical reasons are poor excuses, which I would encourage everyone to read.

The scariest issue is that a user upgrading to Hangouts gets silently disconnected from their non-Google XMPP-using friends. If you use Google Talk currently and upgrade to Hangouts, you WILL lose the ability to chat with XMPP users, who will just appear as offline and unreachable.

It’s sad that Google has taken this step and I hope long term that they decide as a company that turning away from free protocols was a mistake and make a step back in the right direction.

Meanwhile, there are a few key bits to be aware of:

  1. My recommendation currently is do not upgrade to Hangouts under any circumstance – you may be surprised to find who drops off your chat list, particularly if you have a geeky set of friends on their own domains and servers.
  2. Whilst you can hang onto Google Talk for now, I suspect long term Google will force everyone onto Hangouts. I recommend considering new options long term for when that occurs.
  3. It’s really easy to get started with setting up an XMPP server – take a look at the powerful ejabberd or something more lightweight like Prosody (see the sketch after this list). Or you could use a hosted service such as jabber.org for a free XMPP account run by a third party.
  4. You can use a range of IM clients for XMPP accounts, consider looking at Pidgin (GNU/Linux & Windows), Adium (MacOS) and Xabber (Android/Linux).
  5. If you don’t already, it’s a very good idea to have your email and IM behind your own domain like “jethrocarr.com”. You can point it at a provider like Google, or your own server and it gives you full control over your online identity for as long as you wish to have it.
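To give an idea of how little is involved, here’s a minimal sketch of getting Prosody going on a Debian/Ubuntu box – the domain is illustrative, and you’d want TLS certificates and the DNS SRV records on top of this:

# Install the Prosody XMPP server
apt-get install prosody

# Set your domain in /etc/prosody/prosody.cfg.lua:
#   VirtualHost "example.com"

# Create your user account and restart the server
prosodyctl adduser jethro@example.com
service prosody restart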

I won’t be going down the path of using Hangouts, so if you upgrade, sorry, but I won’t chat to you. Please get an XMPP account from one of the many providers online, or set up your own server – something that is generally a worthwhile exercise and learning experience.

If someone brings out a reliable gateway for XMPP to Hangouts, I may install it, but there’s no guarantee that this will be possible – people have been hoping for a gateway for Skype for years without much luck, so it’s not a safe assumption to have.

Be wary of some providers (Facebook and Outlook.com) which claim XMPP support, but really only support XMPP to chat to *their* users and lack XMPP federation with external servers and networks, which defeats the whole point of a decentralised open network.

If you have an XMPP account and wish to chat with me, add me using my contact details here. Note that I tend to only accept XMPP connections from people I know – if you’re unknown to me but want to get in touch, email is best at first.

Firefox Mobile for Android CAs

I’ve been using Firefox Mobile on Android for a while (thanks to the fact that it means I can use Firefox Sync between my laptop and mobile to share data). Overall it’s pretty good – the last few releases have fixed up a lot of the past stability issues and UI problems, and it’s in a pretty decent state now.

One of the unfortunate problems I had with it until recently was that the application refused to import custom certificate authorities. Whilst Android has its own CA store, add-on browsers (inc. Firefox Mobile) can have their own CA stores, and the manageability of these can vary a lot.

In the case of Firefox Mobile, the ability to manage certificates was not ported across from the desktop version, meaning that none of my web applications would validate against my custom CA.

However, as a passable solution, it’s now possible to import a CA by downloading a PEM version of the CA certificate in the browser. Just upload a copy of the PEM formatted certificate to a webserver and download the file with the browser to install it.
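If your CA certificate is only available in binary DER format, OpenSSL can produce the PEM copy for you to serve up – a quick sketch, with the file names and paths being purely illustrative:

# Convert a DER formatted CA certificate into PEM format
openssl x509 -inform der -in myca.der -outform pem -out myca.pem

# Publish it somewhere the phone's browser can fetch it from
cp myca.pem /var/www/html/myca.pem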

Installing CAs into Firefox Mobile (PEM formatted file).

Now the biggest problem left is sites and applications with poorly written user agent detection that assume the only mobile devices in existence are those with the iPhone or stock Android user agent. :-( *glares at Atlassian in particular*

KVM instances dying at boot

I recently encountered a crashing KVM instance, where my VM would die at boot once the bootloader tried to unpack initrd.

A check of the log in /var/log/libvirt/qemu/vmname.log showed the following unhelpful line:

Guest moved used index from 6 to 22938
2013-04-21 06:10:36.029+0000: shutting down

The actual cause of this weird error and crash is the host OS running out of disk space on the host server’s filesystems. In my particular case, my filesystem was at 96% full, so whilst the root user could still write to disk, non-root processes including Libvirt/KVM were refused writes (ext-based filesystems reserve around 5% of blocks for root by default, which explains why root could write when nothing else could).

I’m not totally sure why the error happens – all my VM disks are based on LVM volumes rather than the host root filesystem, so I suspect the host OS disk is being used for a temporary file, such as when unpacking the initrd, and this need for a small amount of disk leads to the failure.

If you’re having this problem, check your disk space and add some Nagios alerting to avoid a repeat issue!
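For example, something like the below will show the state of your disks and catch the problem in future – the plugin path and thresholds are illustrative, adjust them to suit your own Nagios setup:

# Check how full the host filesystems actually are
df -h

# Standard Nagios check_disk plugin: warn at 90% full, critical at 95% full
/usr/lib64/nagios/plugins/check_disk -w 10% -c 5% -p /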

Android 4.2.2 Issues

Having just flown from Sydney AU to Christchurch NZ, my Galaxy Nexus suddenly decided to finally offer me the Android 4.2.2 upgrade.

Since I got the phone in 2012, it’s been running Android 4.1 – I had expected to receive Android 4.2 in November 2012 when it was released by Google, since the Galaxy Nexus is one of Google’s special developer phones which are loved and blessed with official updates and source code.

However the phone steadily refused to update, and whilst I was tempted to build it from source again, seeing as 4.2 lacks any particular features I wanted (see release changes), there was little incentive to do so. Then, after 4.2.2 was magically revealed to me following the change of countries, I was nagged to death to update and ended up doing so… sadly I wish I hadn’t…

 

Google have messed with the camera application yet again, completely changing the UI – the menu now appears wherever you touch the screen, which does make it easier to select options quickly in some respects, but they’ve removed the feature I use the most – the ability to jump to the gallery and view the picture you just took – so it’s not really an improvement.

Secondly, the Android clock and alarm clock interface has been changed yet again – in some respects it’s an improvement, as they’ve added some new features like a stopwatch, but at the same time it really does feel like they change the UI every release (and not always in good ways). It would be nice to get some consistency, especially between minor OS revisions.

However these issues pale in comparison to the crimes that Google has committed against the lock screen… Lock screens are fundamentally simple; after all, they only have one job – to lock my phone (somewhat) securely and prevent any random from using my device. As such, they tend to be pretty consistent and don’t change much between releases.

Sadly Google has decided that the best use of their engineering time is to add more features to the lock screen, turning it into some horrible borg screen with widgets, fancy clocks, camera shortcuts and all sorts of other crap.

Go home lockscreen, you’re drunk. So, so, drunk.

Crime 1 – Widgets

The lock screen now features widgets, which allow one to stick programs outside of the lockscreen for easy access (defeating much of the point of having a lock screen to begin with) and offering very limited real benefit.

Generally widgets serve very limited value – I use about 3 widgets in total: toggles for turning hardware features on/off, NZ weather and AU weather. Anything else is generally better done within an actual application.

Widgets really do seem to be the feature that every cool desktop “must have” and at the same time, have to be one of the least useful features that any system can have.

 

Crime 2 – Horribly deforming the pattern unlock screen

With the addition of the widgets, the UI has been shuffled around and resized. Previously I could unlock by starting my swipe pattern from the edge of the device’s physical screen and drawing my pattern – very easy to do and quick to pick up with muscle memory.

However doing this same unlock action following the Android 4.2 upgrade will lead to me accidentally selecting the edge of the unlock “widget” and, instead of unlocking, I end up selecting a popup widget box (as per my screenshot) and then have to mess around and watch what I’m doing.

This has to be the single most annoying feature I’ve seen in a long time, purely because it impacts me every single time I pick up the phone, and as a creature of habit, it’s highly frustrating.

And to top this off, Android now vibrates and makes a tone for each unlock point selected. I have yet to figure out how to turn this highly irritating option off – I suspect it’s tied into the keyboard vibration/tone settings, which I do want…

 

Crime 3 – Bold Clocks

We’ve had digital clocks for over 57 years, during which time I don’t believe anyone has ever woken up and said “wow, I sure wish the hours were bolder than the minutes”.

Yet somehow this was deemed a good idea, and my nicely balanced 4-digit 24-hour clock is now unbalanced, with the jarring, harsh realisation that the clock is going to keep looking like a <b> tag experience gone wrong.

I’m not a graphical designer, but this change is really messing with my OCD and driving me nuts… I’d be interested to see what graphic designers and UX designers think of it.

 

So in general, I’m annoyed. Fucked off actually. It’s annoying enough that if I was working at Google, I’d be banging on the project manager’s door asking for an explanation of this release.

Generally I like Android – it’s more open than the competing iOS and Windows Mobile platforms (although it has its faults) and the fact it’s Linux based is pretty awesome… but with this release I really have to ask… what the fuck is Google doing currently?

Google has some of the smartest minds on the planet working for them, and the best they can come up with for a new OS release is fucking lock screen widgets? How about something useful like:

  • Getting Google Wallet to work in more locations around the world. What’s the point of this fancy NFC-enabled hardware if I can’t do anything with it?
  • Improve phone security with better storage encryption and better unlock methods (NFC rings anyone?).
  • Improve backups and phone replacement/migration processes – backups should be easy to do and include all data and applications, something like a Time Machine style system.
  • Free messaging between Android devices using an iMessage style transparent system?
  • Fixing the MTP clusterfuck – how about getting some good OS drivers released?
  • Fix the bloody Android release process! I’m using an official Google branded phone and it takes 5 months to get the new OS release??

The changes made in the 4.2 series are shockingly bad. I’m at the stage where I’m tempted to hack the code and revert the lockscreen back to the 4.1 version just to get my workflow back – really it comes down to whether the pain this system causes ends up outweighing the cost/hassle of patching and maintaining a branch of the source.

NamedManager 1.5.1

I’ve pushed a new release of NamedManager, version 1.5.1. This is a minor bug fix release providing:

  1. Bug fix for handling of TXT records, where extra slashes would be entered into the record due to an input validator bug.
  2. The Bind configuration writer now runs the Bind-supplied validators for configuration and DNS zone files, and refuses to reload Bind unless they pass.

The first change is naturally important if you’re using TXT records, as it fixes a serious issue with their handling (no security problems, but corrupted zonefiles would result at times).

Even if you’re not using TXT records, the second change is worth upgrading to as it makes the Bind configuration generator much more robust and prevents any potential future bugs from ever feeding Bind a bad zonefile.

Pre-1.5.1, we relied on Bind’s reload process to validate the files; however this suffers an issue where the error might not be reported back to the user, who would only discover the issue the next time Bind restarts. This change prevents a new zonefile from being loaded into place until the validator passes it, so the worst case is that your DNS just refuses to accept changes, whilst logging loudly in the web interface back to you. :-)

If you upgrade, take advantage of this feature by adding the following to /etc/namedmanager/config-bind.php, or wherever you have installed your Bind component configuration file:

$config["bind"]["verify_zone"]    = "/usr/sbin/named-checkzone";
$config["bind"]["verify_config"]  = "/usr/sbin/named-checkconf";
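These are the same validators that ship with Bind, so you can also run them by hand against the generated files to see exactly what NamedManager is checking – the zone name and file paths below are purely examples:

# Validate the overall Bind configuration
/usr/sbin/named-checkconf /etc/named.conf

# Validate a single zone file against its zone name
/usr/sbin/named-checkzone example.com /var/named/example.com.zone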

NamedManager 1.5.1 can be found at the project page or in my packaged repositories.

Updated Repositories

I’ve gone and updated my GNU/Linux repositories with a new home page – some of you may have been using these under my previous Amberdms branding, but it’s more appropriate that they be published under my own name these days and have their own special subdomain.

I want to unify the branding of a bit more of the stuff I have out there on the internet and also make sure I’m exposing it in a way that makes it easy for people to find and use, so I’m going through a process of improving site templates, linking between places and improving documentation/wording from the perspective of an outside user.

CSS3 shininess! And it even mostly works in IE.

Been playing with new HTML5/CSS3 functionality for this site, have to say, it’s pretty awesome.

You can check out the new page at repos.jethrocarr.com, I’ve tried to make it as easy as possible to add my repositories to your servers – I’ll be refining this a little more in coming weeks, such as adding a decent package search function to the site to make it easier to grab some of the goodies hidden away in distribution directories.

I’m currently providing packages for RHEL & clones, Debian and Ubuntu. Whilst my RHEL repos are quite sizable now, the Debian & Ubuntu repositories are much sparser, so I’m going to make an effort to bring them to a level where they at least have all my public software (see projects.jethrocarr.com) available as well-tested packages for the current Debian Stable and Ubuntu LTS releases.

There’s some older stuff archived on the server if you go hunting as well, such as Fedora and ancient RHEL version packages, but I’m keeping them in the background for archival purposes only.

And yes, all packages are signed with my Amberdms/Jethro Carr GPG signing key. You should never use repositories without GPG signed packages – unsigned repositories are an ideal vector for installing malicious content via a man-in-the-middle attack.
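If you’re adding a repository by hand rather than via the provided release packages, make sure GPG checking stays enabled – a minimal sketch of what that looks like on RHEL & clones, with the URLs being illustrative rather than the real ones (grab those from repos.jethrocarr.com):

# Define the repository with GPG verification switched on
cat > /etc/yum.repos.d/jethro-custom.repo << 'EOF'
[jethro-custom]
name=Jethro Carr custom repository
baseurl=https://repos.jethrocarr.com/example/el6/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://repos.jethrocarr.com/example/RPM-GPG-KEY
EOF

# Import and trust the signing key so package signatures can be verified
rpm --import https://repos.jethrocarr.com/example/RPM-GPG-KEY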

ip6tables: ipv6-icmp vs icmp

I run a fully dual-stacked IPv6+IPv4 network on my servers, VPNs and home network – part of this is that I get to discover interesting new first-adopter pains of living in the future (like NetworkManager/kernel bugs, Munin being stupid, CIFS being failtastic and providers still stuck in the IPv4-only 1980s).

My laptop was experiencing frustrating issues where it was unable to load content from some IPv6 enabled website providers. In my specific case, I was having lots of issues with page loads from WordPress and Gravatar timing out when connecting to them via IPv6, but no issues when using IPv4.

I noticed that I was still able to ping6 the domains in question and telnet to port 80 successfully, which eliminated basic connectivity issues as the cause. Issues like this, where connectivity tests succeed but actual connections fail, can be a symptom of MTU discovery issues – a particularly annoying networking glitch to experience.

If you’re behind a WAN link such as ADSL, you’re particularly likely to be affected since ADSL and PPP overheads drop the size of the packets which can be used – in my case, I can only send a maximum of 1460 byte packets, whereas the ethernet default that my laptop will use is 1500 bytes.

In a properly functioning network, your computer will try and send 1500 byte packets to the internet, but the router which has the 1460 byte uplink to your ISP will refuse the packet and advise your computer that this packet is too large and that it needs to break it into smaller ones and try again. This happens transparently and is a standard feature of networking.

In a fucked up (read: improperly functioning) network, your computer will try to send the 1500 byte packet to the internet, but no notification advising the correct MTU size is returned or received. In this case your computer keeps trying to re-send the packet until a timeout occurs – from your computer’s perspective, the remote host is unreachable.

This MTU notification is performed by the ICMP protocol, which is more commonly but incorrectly known as “ping” [whilst ping is one of the functions performed by ICMP, there are many others it’s responsible for, including MTU discovery and connection refused messages].

It’s not uncommon for MTU discovery to be broken – I’ve seen too many system and network administrators block ICMP entirely in their firewalls “for security”, not realising that there’s a lot in ICMP that’s needed for the proper operation of a network. What makes the problem particularly bad is that it’s inconsistent and won’t necessarily impact all users, which leads to those administrators disregarding it as not being an issue with their infrastructure, and even blaming the user.

Sometimes the breakage might not even be in a network you or the remote endpoint control – if there’s a router somewhere between you and the website you’re trying to access which has a smaller MTU size and blocks ICMP, you may never receive an MTU notification and you lose the ability to connect to the remote site.

At other times, the issue might be more embarrassing – is your computer itself refusing the helpful MTU notifications being supplied to it by the routers/systems it’s attempting to talk with?
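A handy way to test for this condition by hand is to force the “don’t fragment” behaviour on ping and shrink the payload until packets fit – a rough sketch, where the sizes assume the 40 byte IPv6 header plus 8 byte ICMPv6 header and the hostname is just an example:

# 1452 byte payload + 48 bytes of headers = 1500 byte packet (fails on a 1460 MTU path)
ping6 -M do -s 1452 wordpress.com

# 1412 byte payload + 48 bytes of headers = 1460 byte packet (should succeed)
ping6 -M do -s 1412 wordpress.com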

I’m pretty comfortable with iptables and ip6tables, Linux’s IPv4 and IPv6 firewall implementations, and use them for locking down servers and laptops, as well as conducting all sorts of funky hacks that would horrify even the most bitter, drugged-up sysadmin.

However even I still make mistakes from time to time – and in my case, I had made a big mistake with the ICMP firewalling configuration that made me the architect of my own misfortune.

On my laptop, my IPv4 firewall looks something like below:

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A INPUT -j REJECT --reject-with icmp-host-prohibited
  • We want to trust anything from ourselves (duh) with -i lo -j ACCEPT.
  • We allow any established/related packets being sent in response to whatever connections have been established by the laptop, such as returned traffic for an HTTP connection – failure to define that will lead to a very unhappy internet experience.
  • We trust all ICMP traffic – if you want to be pedantic you can block select traffic, or rate-limit what you receive to avoid flood attacks (see the sketch after this list), but a flood attack on Ethernet against my laptop isn’t going to be particularly effective for anyone.
  • Finally refuse any unknown incoming traffic and send an ICMP response so the sender knows it’s being refused, rather than just dropped.
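If you did want to be pedantic about flood protection, a rate-limited variant might look something like the below – the numbers are arbitrary examples rather than a recommendation:

# Accept ICMP at up to 10 packets/second (bursting to 20), then drop the excess
iptables -A INPUT -p icmp -m limit --limit 10/second --limit-burst 20 -j ACCEPT
iptables -A INPUT -p icmp -j DROP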

My IPv6 firewall looked very similar:

ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
ip6tables -A INPUT -p icmp -j ACCEPT
ip6tables -A INPUT -j REJECT --reject-with icmp6-adm-prohibited

It’s effectively the same as the IPv4 one, with some differences reflecting the different natures of IPv4 and IPv6, such as the ICMP reject options. But there’s one horrible, horrible error with this ruleset…

ip6tables -A INPUT -p icmp -j ACCEPT
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT

Both of these are valid, accepted ip6tables commands. However only -p ipv6-icmp correctly accepts IPv6 ICMP traffic. Whilst ip6tables happily accepts -p icmp, that rule matches protocol number 1 (IPv4’s ICMP), which never appears in IPv6 packets – ICMPv6 is protocol 58 – so it never matches anything and is in effect a dud statement.

By having this dud statement in my firewall, from the OS perspective my firewall looked more like:

ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
ip6tables -A INPUT -j REJECT --reject-with icmp6-adm-prohibited

And all of a sudden there’s the horrible realisation that the firewall will reject ALL inbound ICMPv6, leaving my laptop unable to receive many important messages such as MTU and rejected connection notifications.

By correcting my ICMP rule to use -p ipv6-icmp, I instantly fixed my MTU issues, since my laptop was no longer ignoring the MTU notifications. :-)
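If you want to audit your own rulesets for the same mistake, the per-rule packet counters give the dud away, since a rule that can never match sits at zero forever:

# -v shows packet/byte counters per rule; a "-p icmp" line stuck at 0 packets
# in an otherwise busy IPv6 firewall is a red flag
ip6tables -L INPUT -v -n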

My initial thought was that this would be a horrible bug in ip6tables – surely it should raise a warning or error if an administrator tries to use icmp instead of ipv6-icmp? The man page states:

 -p, --protocol [!] protocol
    The protocol of the rule or of the packet to check. The specified
    protocol can be one of tcp, udp, ipv6-icmp|icmpv6, or all, or it
    can be a numeric value, representing one of these protocols or a
    different one.

So why is it accepting -p icmp then? Clearly that’s a mistake – it’s not in the list of accepted protocols… but further reading of the man page also states that:

A protocol name from /etc/protocols is also allowed.

Hmmmmmmm…..

$ cat /etc/protocols  | grep icmp
icmp       1    ICMP         # internet control message protocol
ipv6-icmp 58    IPv6-ICMP    # ICMP for IPv6

Since /etc/protocols defines both icmp and ipv6-icmp as protocols known to the Linux OS, ip6tables accepts the protocol argument of icmp without complaint, even though the kernel will never be able to do anything useful with it.

In some respects it’s still a bug – ip6tables shouldn’t be letting users select protocols that it knows are wrong – but at the same time it’s not a bug, since icmp is a valid protocol that the kernel understands; it will simply never be encountered on IPv6.

It’s a total newbie mistake on my part, and what makes it more embarrassing is that I managed to avoid making this mistake on my server firewall configurations, yet ended up doing it on my own laptop. It’s a very easy mistake to make though, hence this blog post, in the hope that someone else doesn’t get caught by it in future.

linux.conf.au: day 5

Final day of linux.conf.au – I’m about a week behind schedule in posting, but that’s about how long it takes to catch up on life following a week at LCA. ;-)

uuuurgggh need more sleep

I like that guy’s idea!

Friday’s conference keynote was delivered by Tim Berners-Lee, who is widely known as “the inventor of the world wide web”, but is more accurately described as the developer of HTML, the markup language behind all websites. Certainly TBL was an influential player in the internet’s creation and evolution, but the networking and IP layer of the internet was already being developed by others and is arguably more important than HTML itself – calling anyone the inventor of the internet is wrong for such a collaborative effort.

His talk was enjoyable, although very much a case of preaching to the choir – there wasn’t a lot that would really surprise any linux.conf.au attendee. What *was* more interesting than his talk content is the aftermath…

TBL was in Australia and New Zealand for just over a week, during which he gave several talks at different venues, including linux.conf.au, as part of the “TBL Down Under Tour“. It turns out that the one week tour cost the organisers/sponsors around $200,000 in charges for TBL to speak at these events, a figure I personally consider outrageous for someone to charge non-profits for speaking.

I can understand high demand speakers charging to ensure that they have comfortable travel arrangements and even to compensate for lost earnings, but even at an expensive consultant’s charge rate of $1,500 per day, that’s no more than $30,000 for a 1 week trip.

I could understand charging a little more for an expensive commercial conference, such as a $2k-per-ticket-per-day corporate affair, but I would rather have a passionate technologist who comes for the chance to impart ideas and knowledge at a geeky conference than someone there to make a profit, any day – the $20-40k that Linux Australia contributed would have paid several airfares for some well deserving hackers to come to AU to present.

So whilst I applaud the organisers, and particularly Pia Waugh, for the effort spent making this happen, I have to state that I don’t think it was worth it, and seeing the amount TBL charged a non-profit entity for this visit actually really sours my opinion of the man.

I just hope that seeing a well known figure talking about open data and internet freedom at some of the more public events leads to more positive work in that space in NZ and AU and goes towards making up for this cost.

Outside the conference hall.

Friday had its share of interesting talks:

  • Stewart Smith spoke a bit about SQL databases, with a focus on MySQL & the varieties being used in cloud and hosted environments. Read his latest blog post for some amusing hacks to execute on databases.
  • I ended up frequenting a few Linux graphical environment related talks, including David Airlie talking about improvements coming up in the X.org server, as well as Daniel Stone explaining the Wayland project and architecture.
  • Whilst I missed Keith Packard’s talk due to a scheduling clash, he was there heckling during both of the above talks. (Top tip – when presenting at LCAs, if one of the main developers of the software being discussed is in the audience, expect LOTS of heckles). ;-)
  • Francois Marier presented on Persona (developed by Mozilla), a single sign-on system for the internet with a federated, decentralised design. Whilst I do have some issues with parts of its design, overall it’s pretty awesome and it fixes a lot of problems that plagued other attempts like OpenID. I expect I’ll cover Persona more in a future blog post, since I want to set up a Persona server myself and test it out more, and I’ll detail more about the good and the bad of this proposed solution.

Sadly it turns out Friday is the last day of the conference, so I had to finish it up with the obligatory beer and chat with friends, before we all headed off for another year. ;-)

They’re taking the hobbits to Isengard! Or maybe just back to the dorms via the stream.

A dodgy looking character with a wire running into a large duffle bag… hopefully not a road-side bomber.

The fuel that powers IT

Incoming!

linux.conf.au: day 4

Another successful day of Linux geeking has passed, and this week is going surprisingly quickly…

Some of the day’s highlights:

  • James Bottomley spoke on the current state of Linux UEFI support and demonstrated the tools and processes to install and manage keys and hashes for the installed software. It would have been interesting to have Matthew Garrett at LCA this year to present his somewhat different solution for comparison.
  • Avi Miller from Oracle gave an interesting presentation on a new Linux feature called “Transcendent Memory“, which is a solution to the memory ballooning problems of virtualised environments. Essentially it works by giving the kernel the option to request more memory from another host – which could be the VM host, or even another host entirely connected via 10GigE or Infiniband – with the kernel requesting and releasing memory as required. To make it even more exciting, the memory doesn’t have to be just RAM; SSDs are also usable, meaning you could add a couple of memory hosts to your Xen (and soon KVM) environments, stack them with RAM and SSD, and provide that to all your other guests as memory ballooning space. It’s a very cool concept and one I intend to review further in future.
  • To wrap up the day, Michael Schwern presented on the 2038 bug – the problem where 32-bit computers are unable to keep time past January 2038 and reset to 1901, due to the limits of a 32-bit time counter (see wikipedia, and the quick demonstration below). Time is something that always appears very simple, yet is extremely complex to do right once you consider timezones and other weirdness like leap years/seconds.
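If you want to see the rollover for yourself, GNU date will happily render the exact boundary – a signed 32-bit counter of seconds since 1970 tops out at 2^31-1 and then wraps around to -2^31:

$ date -u -d @2147483647    # the largest time a signed 32-bit time_t can hold
Tue Jan 19 03:14:07 UTC 2038
$ date -u -d @-2147483648   # ...and where the counter lands after it overflows
Fri Dec 13 20:45:52 UTC 1901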
The end of time is here! Always trust announcements by a guy wearing cardboard and robes.

The conference presentations finished up with a surprise talk from Simon Hackett and Robert Llewellyn (of Red Dwarf fame), which was somewhat entertaining, but not highly relevant for me – personally I’d rather have heard more from Simon Hackett on the history and future expectations of the ISP industry in Australia than have the pair debate their electric cars.

Thursday was the evening of the Penguin Dinner, the (usually) formal dinner held at each LCA. This year, rather than the usual sit-down 3-course dinner, the conference decided to do a BBQ-style event up at the Observatory on Mount Stromlo.

The Penguin Dinner is always a little pricey at $80, but for a night out with good food, drinks and time with friends, it’s usually a fun and enjoyable event. Sadly this year had a few issues that kind of spoilt it, at least for me personally, with some major failings in the food and transport which led to me spending only 2 hours up the mountain and feeling quite hungry.

At the same time, LCA is a volunteer organised conference and I must thank them for making the effort, even if it was quite a failure this year – I don’t necessarily know all the behind-the-scenes factors, although the conflicting/poor communications really didn’t put me in the best mood that night.

Next year a professional events coordinator is being hired to help with the event, so hopefully their experience handling logistics and catering will avoid a repeat of the issue.

On the plus side, for the limited time I spent up the mountain, I got some neat photographs (I *really* need to borrow Lisa’s DSLR rather than using my cellphone for this stuff) and spent some good time discussing life with friends lying on the grass looking at the stars after the sun went down.

Part of the old burnt-out observatory

Sun setting along the ridge.

What is it with geeks and blue LEDs? ;-)

The other perk from the penguin dinner was the AWESOME shirts given to everyone at the conference as a surprise. Lisa took this photo when I got back to Sydney since she loves it [1] so much.

Paaaartay!

[1] She hates it.