Tag Archives: geek

Anything IT related (which is most things I say) :-)

Standards people, use them!

I’ve been driving around in my mighty 1997 Toyota Starlet for about 18 months now and have finally gotten tired of having a radio as my only source of audio.

I can get away with using radio when in the CBD, with good alternative stations like Active that don’t have too many ads, but when doing roadtrips there are often large sections with no coverage or only poor quality commercial stations.

So I decided to buy a new stereo and settled on a Sony CDXGT500U – primarily due to it meeting my two requirements in the cheapest form factor: both an AUX 3.5mm input jack AND a USB socket for taking MP3s (sadly no Ogg or FLAC though).

Being a sucker for DIY I decided to have a go at installing it myself – I didn’t need anything too flash like new speakers or cable runs, just wanted the inputs really. Fortunately the installation of the stereo was pretty easy, but I ran into the good old problem of proprietary connectors/standards used by the different vendors.

 

  1. There’s no single standard for the mounting of devices in the car dash – in the case of this stereo, the mounting brackets supplied aren’t required and instead it bolts directly into the Japanese-style mounts.
  2. Sony doesn’t use a standard for their stereos.
  3. Neither does Toyota use a standard for their cars.

To make it work (without going to the pain of soldering/custom wire wrapping) I had to buy *two* different adapters – one for Sony->ISO and another for Toyota->ISO, which cost a good $15 each from retailers.

We all love lots of daisy chained adapters!

 

On the plus side, I now have a new stereo installed, dragging my car out of the 80s and into 2011. It’s also the most expensive thing in the car now, although being a Starlet, it’s hardly a theft magnet.

This Starlet be totally pimped yo!

 

#geekflat cleanout

With the departure of John and the continual annoyance of all my junk piled around the flat, I’m going through a bit of a clean-up and listing everything on Trademe.

There’s a mix of general flat stuff as well as far too much computing equipment, including rackmount equipment such as cable management bars, servers, UPSes and more.

Lots of ex-Amberdms kit, ideal for Linux deployments and personal development labs and a nice collection of networking kit.

There’s still more to list over the next couple of weeks, so add my *two* trademe accounts to your favorites to get notifications. (There are separate accounts for Amberdms vs Personal listings).

 

Sony & Identity Theft

By now most people have heard of the Sony Playstation Network getting hacked and around 75 million accounts worth of information being obtained.

Ignoring the whole fact that someone owned Sony so badly and that they’re not even sure if credit card details got exposed, I want to examine the information that is being stored with Sony.

There are three key bits of information obtained from the breach:

  1. Login credentials of PSN users.
  2. User identity information, consisting of phone number, email address and age.
  3. Possibly credit card information.

The last item is the most important – obviously any credit card breach is bad (also, PCI-DSS compliance, WTF Sony?), but Sony isn’t sure whether the card DB has been exposed at this stage and is making a general just-in-case recommendation.

Login credentials may be an issue depending on how smart you are – if you’re one of those people who uses the same login on every site, this is a clear example of why you shouldn’t, and you can now enjoy changing the login details on every single site you use… (how many more provider compromises will it take until you learn this is bad??)

So assuming you didn’t use credit cards and used unique credentials, this limits the exposure to user identity information – this is causing huge outcry in the media, with some great quotes from various countries’ police stating how this is going to lead to widespread identity theft.

Which raises the following points:

  • Why are banks and other key systems requiring identification so poorly set up that a name, age and address are all you need?
  • All these details are already available online to anyone with a bit of sense – it’s hard to keep this stuff private in the days of social networking.
  • What are the penalties for companies not conducting proper validation and security checks on people signing up for things like loans?

Sure it’s bad that the information got compromised, but let’s consider that most of the identity information is already public.

Birthdates are easy to get with the widespread popularity of social networking, same for addresses which can be found from domain records, social networking, websites and more, along with contact details.

If this information is enough to then take out a loan or a bank account, then I think those providers have some pretty heavy explaining to do – far too many have sloppy validation checks which don’t reflect the realities of the 21st century.

Just last week, I had to “validate” my home address to obtain a driver’s license. All that’s required to prove my identity is some photo ID and a service bill with an address on it.

Faking a bill is hardly complex – most laser printers will produce something good enough to pass any regular inspection; it’s a step that will only catch out the most clueless of exploiters.

Wake up companies, seriously….

I know that some providers do take precautions, even when this may lead to some customer inconvenience/annoyance.

  • National Bank (NZ) would refuse to tell me anything about my account, unless I rang them from a number that matched their records for my account.
  • Visiting banks in person often requires photo ID, which can be faked, but takes a bit more effort.
  • My approach in business has always been to ensure a customer was emailing/calling from a known account, otherwise we would call back to confirm requests on their recorded number.

Although some of these approaches are becoming less trustworthy…

  • Email accounts are commonly broken into – because of this, if we get unusual requests or password reset requests, we often call back the client to confirm.
  • With the adoption of VoIP technologies, it’s becoming easier to assume someone’s phone number and send/receive phone calls on their behalf.

Sadly there isn’t really a true fix – there’s no form of identification that can truly validate people’s identity, and secret words or passwords are usually weakened by the fact that humans suck at choosing them and reuse them often.

I think the best fix is simply making sure service providers validate information – such as ensuring customers have their last invoice and account number before making changes – and that financial institutions and credit agencies follow strict security procedures such as photo identification.

Android VPN Rage

Having obtained a shiny new Nexus S to replace my aging HTC Magic, I’ve been spending the last few days setting it up as I want it – favorite apps, settings, email, etc.

The setup is a little more complex for me, since I run most of my services behind a secure internal VPN – this includes email, SIP and other services.

 

On my HTC Magic, I ran OpenVPN, which was included in CyanogenMod – this was ideal, since I run OpenVPN everywhere else on my laptops and servers and it’s a very reliable, robust VPN solution.
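(For the curious, the sort of OpenVPN setup I run is nothing exotic – a minimal sketch of a routed TUN server config, with the key paths and subnet below being placeholders, not my real values:)

```
# /etc/openvpn/server.conf -- minimal routed TUN server (sketch only)
port 1194
proto udp
dev tun
ca   /etc/openvpn/keys/ca.crt
cert /etc/openvpn/keys/server.crt
key  /etc/openvpn/keys/server.key
dh   /etc/openvpn/keys/dh2048.pem
server 10.8.0.0 255.255.255.0     # example VPN address pool
push "redirect-gateway def1"      # make the tunnel the default route
keepalive 10 120
persist-key
persist-tun
```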

With the Nexus S, I want to stick to stock firmware, but this means I only have the options of a PPTP or IPsec/L2TP VPN solution, both of which I consider to be very unpleasant solutions.

I ended up setting up IPsec (Openswan) + L2TP (xl2tpd + pppd) and got this working with my Android phone to provide VPN connectivity. For simplicity, I configured the tunnel to act as a default route for all traffic.
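Very roughly, the server side ends up looking something like the below – a sketch only, with IP ranges as placeholders; you still need a matching /etc/ipsec.secrets PSK entry and a PPP options file:

```
# /etc/ipsec.conf -- Openswan transport-mode PSK connection for L2TP clients (sketch)
conn l2tp-psk
    authby=secret
    type=transport
    left=%defaultroute
    leftprotoport=17/1701
    right=%any
    rightprotoport=17/%any
    pfs=no
    auto=add

# /etc/xl2tpd/xl2tpd.conf -- hand authenticated clients a PPP session (sketch)
[lns default]
ip range = 10.8.13.10-10.8.13.50
local ip = 10.8.13.1
require authentication = yes
pppoptfile = /etc/ppp/options.xl2tpd
```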

 

Some instant deal breakers I’ve discovered:

  1. Android won’t remember the VPN user password – I can fix this for myself by potentially moving to certificates, but this is a deal breaker for my work VPN with its lovely 32-char password as mandated by the infrastructure team.
  2. Android disconnects from the VPN when changing networks – e.g. from 3G to wifi – and won’t automatically reconnect.
  3. I’m unable to get the VPN to stand up on my internal RFC 1918 wifi range – for some reason the VPN establishes and then drops, yet works fine over 3G to the same server.

 

I love Android and I suspect many other platforms won’t be much better, but this really is a bit shit – I can only see a few options:

  1. Get OpenVPN modules onto my phone and setup OpenVPN tunnels for the stock firmware – for this, I will need to root the device, compile the Nexus kernel with tun module support, copy onto the phone and then install one of the UIs for managing the VPN.
  2. Switch to CyanogenMod to gain these features, at the cost of the stability of the stock releases from Google/Samsung.
  3. Re-compile the source released by Samsung and apply the patches I want for OpenVPN support in the GUI from CyanogenMod.
  4. Re-compile the source released by Samsung and apply patches to the VPN controls in Android to fix VPN handling properly. Although this still doesn’t fix the fact that IPsec is a bit shit in general.

 

All of these are somewhat time intensive activities as well as being way beyond the level of a normal user, or even most technical users for that matter.

I’m wondering if option 3 is going to be the best from a learning curve and control perspective, but I might end up doing 1 or 2 just to get the thing up and running so I can start using it properly.

It’s very frustrating, since there’s some cool stuff I can now do on Android 2.3, like native SIP support that I just need to get the VPN online for first. :-(

I hate Tuesday

Today has been a trial of frustrations and annoyances…. I love IT completely, but sometimes even I have a bad day.

In summary, my day:

  • Personal server has crashed 2x today with no error messages or displayed panics. This is a pretty big deal, since it’s a modern box, runs about 25 of my development virtual machines and is encrypted, making it a PITA to boot back up – not to mention I use it daily for development and informational purposes.
  • That server also runs the #geekflat network which meant calls from flatmates begging for precious internets.
  • After waiting weeks (months?) for a NIC to be added to a customer server, I discovered that the engineer was trying to install a PCIe card into a totally incompatible PCI slot.
  • A complex script for processing files at a customer site has been broken after the file format changed unexpectedly and needs to be fixed.
  • I found a bug in my perfect code that gave me some very frustrating headaches and which I’ll have to fix.
  • A customer I did support work for a year ago has a number of general desktop issues, claims these are my fault and is demanding I fix them, seeing as I was the one who installed the anti-virus. O_o
  • I got called several times with questions that people could have answered themselves.

All in all, a very frustrating and annoying day. I just hope that tomorrow is better. :-/

The biggest headache is really the problems with the server instability – I rely on that server for a lot of services, and having a fault that reports no specific error or message is immensely frustrating.

At this stage, I’m beginning to suspect hardware – it could be a PSU/CPU/RAM/MB fault causing the inconsistent stability, but these sorts of issues are extremely difficult to trace down, and there is nothing in the logs at this stage to indicate the cause.

I might consider switching UPS and maybe PSUs if I need to and see if that resolves the issue – although it’s very difficult to tell since the last time this server had any stability problems was in Jan…

Arhghgh!

Day 23 – Post a review of an application that you use

This late post is part of my 30 days of geek challenge.

I figured it would be a bit too narcissistic to review my own software and a bit boring to review some of my everyday applications, so instead I’m going to do a post about a rather geeky application – KVM virtualisation.

 

About Virtualisation

For those unfamiliar with virtualisation (hi Lisa <3), it’s a technology that allows one physical computer to run multiple virtual computers – with computers getting more and more powerful compared to relatively stable workloads, virtualisation allows us to make much better use of system resources.

I’ve been using virtualisation on Linux since RHEL 5 first shipped with Xen support – this allowed me to transform a single server into multiple speedy machines and I haven’t looked back since – being able to condense 84U of rackmount servers down into a big black tower in my bedroom is a pretty awesome ability. :-)

 

Background – Xen, KVM

I’ve been using Xen in production for a couple of years now; whilst it’s been pretty good, there have also been a number of quite serious bugs at times – combined with the lack of upstream kernel support, this has given Xen a bit of a bad taste.

Recently I built a new KVM server at home running RHEL 6 to replace my data center, which was costing me too much in power and space. I chose to dump Xen and switch to KVM, which is included in the upstream Linux kernel and has a much smaller, simpler code base, since KVM relies on the hardware virtualisation capabilities of the CPU rather than software emulation or paravirtualisation.

In short, KVM is pretty speedy since it’s not emulating much, instead handing the CPU the hard work. You can then combine it with paravirtualisation for things like network and storage to boost performance even further.
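In libvirt terms, the paravirtualisation bit just means asking for virtio devices in the guest definition – a sketch of the relevant domain XML fragments (device paths and bridge names here are examples, and the guest OS needs virtio drivers, which any recent Linux kernel has):

```xml
<!-- libvirt guest XML fragments: virtio disk + virtio NIC (sketch) -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/vg_kvm/lv_devbox01'/>
  <target dev='vda' bus='virtio'/>
</disk>
<interface type='bridge'>
  <source bridge='virbr0'/>
  <model type='virtio'/>
</interface>
```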

 

My Platform

I ended up building my KVM server on RHEL 6 Beta 2 (before it was released) and am currently running around 25 virtual machines on it with stable experiences.

Neither the server nor the guests have needed restarts after running for a couple of months without interruption, and on the whole, KVM seems a lot more stable and bug-free than Xen on RHEL 5 ever was for me. **

(** I say Xen on RHEL 5, since I believe that Xen has advanced a lot since XenSource was snapshotted for RHEL 5, so it may be unfair to compare RHEL 5 Xen against KVM, a more accurate test would be current Xen releases against KVM).

 

VM Suspend to Disk

VM suspend to disk is somewhat impressive – I had to take the host down to install a secondary NIC (curse you, lack of PCI hotswap!) and KVM suspended all the virtual machines to disk and resumed them on reboot.

This saves you from needing to reboot all your virtual systems, although there are some limitations:

  • If your I/O system isn’t great, it may actually take longer to write the RAM of each VM to disk than it would to simply reboot the VMs. Make sure you’re using the fastest disks possible for this.
  • If you have a lot of RAM (e.g. 16GB like me) and forget to make the filesystem on the host OS big enough to hold the suspend images, you’re going to have a bad time…
  • You can’t apply kernel updates to all your VMs in one go by simply rebooting the host OS – you need to restart each VM that requires the update.
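On RHEL 6 this behaviour comes from the libvirt-guests init script, which can be told what to do with running guests when the host shuts down – roughly like the sketch below (check your version’s documentation for the exact option names):

```
# /etc/sysconfig/libvirt-guests (sketch)
ON_BOOT=start          # resume saved guests when the host comes back up
ON_SHUTDOWN=suspend    # managedsave each guest's RAM to disk at host shutdown
SHUTDOWN_TIMEOUT=300   # seconds to wait on a guest before giving up
```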

In my tests it performed nicely, out of 25 running VMs, only one experienced an issue, which was a crashed NTP process, quickly identified by Nagios and restarted manually.

 

I/O Performance

I/O performance is always interesting with virtualised systems. Some products, typically desktop end user focused virtualisation solutions, will just store the virtual servers as files on the local filesystem.

This isn’t quite so ideal for a server, where performance and low overhead are key – by storing a filesystem on top of another filesystem, you add much more overhead at the block layer, which translates into decreased performance – not so much around raw read/write, but around seek performance (in my tests anyway).

Secondly, if you are running a fully emulated guest, KVM has to emulate virtual IDE disks, which really impacts performance, since doing I/O consumes much more CPU. If your guest OS supports it, paravirtualised drivers will make a huge improvement to performance.

I’m running KVM guests inside Linux logical volumes, on top of an encrypted block device (which does impact performance a lot); however I did manage to obtain some interesting statistics showing the performance of paravirtualisation vs IDE emulation.

View KVM IDE Emulation vs Paravirtualisation Results

They show noticeable improvement in the paravirtualised disk, especially around seek times… of interest, at the time of the tests, the other server workloads were idle, so the CPU was mostly free for I/O.

I suspect if I were to run the tests again on a CPU occupied server, paravirtualisation’s advantages would become even more apparent, since IDE emulation will be very susceptible to CPU load.

 

The above tests were run on a host server running RHEL 6 kernel 2.6.32-71.14.1.el6.x86_64 on top of an encrypted RAID 6 LVM volume, with 16GB RAM, a Phenom II quad core and SATA disks.

In both tests, the guest was a KVM virtual machine running CentOS 5.5 with kernel 2.6.18-194.32.1.el5.x86_64 and 256MB RAM – so not much memory for disk caching – to a 30GB ext3 partition that was cleanly formatted between tests.

Bonnie++ 1.03e was used with CLI options of -n 512 and -s 1024.

Note that I don’t have perfect guest-to-host I/O comparison results, but similar tests run against a RAID 5 array on the same server suggest there may be around a 10% performance impact with KVM paravirtualisation, which is pretty hard to notice.


Problems

I’ve had some issues with stability, which I believe I traced to one of the earlier beta kernels with RHEL 6; since upgrading to 2.6.32-71.14.1.el6.x86_64 the server has been solid, even with large virtual network transfers.

In the past when I/O was struggling (mostly before I had upgraded to paravirtualised disk) I experienced some strange networking issues, as per the post here and identified KVM limitations around the I/O resource allocation space.

Other than the above, I haven’t experienced many other issues with the host, and further testing and configuration is ongoing – I should be blogging a lot of Xen-to-KVM migration notes in the near future and will be testing CentOS 6 more thoroughly once released, maybe some other distributions as well.

Day 22 – Release some software under an open source license that you haven’t released before.

This late post is part of my 30 days of geek challenge.

I’ve released a bit of software before under open source licenses – originally mostly scripts and various utilities, before moving on to starting my own open source company (Amberdms Ltd) which resulted in various large applications, such as the Amberdms Billing System and centralised authentication components like LDAPAuthManager.

The other day I released my o4send application, which is a utility for sending bluetooth messages to any phones supporting OPP and today I pushed a new release of LDAPAuthManager (version 1.2.0) out to the project tracker.

 

I haven’t talked about LDAPAuthManager much before – it’s a useful web-based application that I developed for several customers that makes LDAP user and group management easy for anyone to use without needing to understand the pain that is LDAP.

It’s been extended to provide optional RADIUS attribute support, for setting additional values on a per-user or per-group basis, making LDAPAuthManager part of a wider centralised authentication solution.

 

For other open source goodness, all my current open source components developed by Amberdms can be found on our Indefero project tracker at www.amberdms.com/projects/.

There’s a lot that I have yet to release – releasing means I need to validate the documentation, package, test and then upload so I can be sure that everyone gets the desired experience with the source, so it can be tricky to find the time sometimes :-/

Introducing o4send

A while ago, Amberdms was contracted to develop an application for sending messages to bluetooth-enabled mobile phones for the NZ World Expo.

Essentially the idea was that people would visit the expo and receive a file on their mobiles with some awesome content about New Zealand. The cool thing about this was that you didn’t need to be paired – any phone with bluetooth active would get the message.

Apparently this worked quite nicely, although I’m not convinced that OPP will be much use for the future, with the two major smartphone platforms (Android and iPhone/iOS) not providing support for it – we found that it worked best with Nokia Symbian phones.

To make this work, I wrote a perl script and coupled it with a CSV or MySQL database backend to track the connections and file distributions – I bundled this into a little application called “o4send”, whose source I’ve now released publicly.
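o4send itself does the work in perl, but the basic trick can be sketched with the stock bluez/obexftp command line tools – the device address, channel number and filename below are placeholders, and the OPP channel varies by phone:

```
# Find discoverable phones, then push a file via OBEX Object Push (OPP).
hcitool scan                          # lists addresses of visible devices
sdptool browse 00:11:22:33:44:55      # look for the OBEX Object Push channel
obexftp --nopath --noconn --uuid none \
        --bluetooth 00:11:22:33:44:55 --channel 9 \
        --put promo.jpg
```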

You can check out the source and download the application at the Amberdms project tracker at: https://www.amberdms.com/projects/p/oss-o4send/

Take care with this application – it can talk to a lot of mobile phones and I’m not sure of the legality of sending unsolicited messages to bluetooth devices – but I figured this source might be useful to somebody one day for a project – or at the very least, a “hey, that’s cool” moment.

30 days of geek takes off?

Readers who have been around for a little while may recall my 30 days of geek blogging challenge, which I sadly ran out of time to finish, leaving the last few questions incomplete.

Recently @CyrisXD has taken up the idea and has been promoting it to get a whole bunch of other geeks blogging and talking about it, which is pretty awesome. He has a list of people doing the challenge, starting up on the 1st of April on his website at eguru.co.nz and there seems to be a lot of buzz around it.

It’s pretty awesome to see it take off and it would be a shame not to complete it myself, so I’m going to start making a post a day to finish the 30 days of geek challenge. :-)

As a side note, I’m also making some effort to go back and tag all the articles on this blog better – I have a few categories, but there’s lots more content that tends to get hidden and hopefully tagging it will make it more accessible to casual readers, so I’ll be doing this over the next week or so.

DHCP, I/O and other virtualisation fun with KVM

I recently shifted from having two huge server racks down to having a single speedy home server running KVM virtual machines, with the intent of packaging all my servers – experimental, development, staging, etc, into a single reliable system which will reduce power and maintenance costs.

As part of this change, I went from having dedicated DHCP & DNS servers to having everything located onto the KVM host.

The design I’ve used has the host OS running with minimal services – the host just runs KVM, OpenVPN, DHCP and a caching DNS nameserver – all other services run as guest VMs, with a virtual network for the guests and host to communicate over.

Guests run as DHCP clients – this makes it easy to assign or adjust addressing if needed, with the guests getting their information from the host OS.
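The host side of that is just a plain DHCP scope on the virtual network – something along these lines if you’re using ISC dhcpd (addresses below are examples, not my real ranges):

```
# dhcpd.conf -- scope for the KVM guest network (sketch)
subnet 10.8.12.0 netmask 255.255.255.0 {
    range 10.8.12.100 10.8.12.200;
    option routers 10.8.12.1;               # the KVM host
    option domain-name-servers 10.8.12.1;   # the host's caching nameserver
}

# Pin important guests to fixed addresses by MAC:
host virtguest {
    hardware ethernet 52:54:00:aa:bb:cc;
    fixed-address 10.8.12.5;
}
```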

However this does mean you can’t get away with hammering the host too badly – for example, running an I/O and network intensive backup can cause some interesting problems when you also need the host for services, such as DHCP.

Take a look at the following log messages from a mostly idle VM – these were taken whilst another VM on the server was running a bonnie++ process to test performance:

Mar  6 10:18:06 virtguest dhclient: 5 bad udp checksums in 5 packets
Mar  6 10:18:27 virtguest dhclient: DHCPREQUEST on eth0 to 10.8.12.1 port 67
Mar  6 10:18:45 virtguest dhclient: DHCPREQUEST on eth0 to 255.255.255.255 port 67
Mar  6 10:19:00 virtguest dhclient: DHCPREQUEST on eth0 to 255.255.255.255 port 67
Mar  6 10:19:07 virtguest dhclient: DHCPREQUEST on eth0 to 255.255.255.255 port 67
Mar  6 10:19:15 virtguest dhclient: DHCPREQUEST on eth0 to 255.255.255.255 port 67
Mar  6 10:19:15 virtguest dhclient: 5 bad udp checksums in 5 packets

That’s some messed up stuff – what you’re seeing is the guest VM trying to renew its DHCP lease with the host server, but the host is so sluggish from running the I/O-intensive virtual machine that it is actually corrupting or dropping the UDP packets, preventing the guest VM from renewing its address.
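A cheap way to spot guests that are suffering, before they fall off the network entirely, is to tally these checksum complaints per host out of syslog – a small sketch, assuming the default syslog layout shown in the excerpt above:

```shell
#!/bin/sh
# Tally dhclient "bad udp checksums" complaints per host, from a syslog
# file (or stdin). With the default syslog layout, field 4 is the hostname
# and field 6 is the packet count.
count_bad_udp() {
    awk '/dhclient.*bad udp checksums/ { count[$4] += $6 }
         END { for (h in count) printf "%s: %d bad-checksum packets\n", h, count[h] }' "$@"
}
```

e.g. `count_bad_udp /var/log/messages` run from cron makes an easy Nagios check.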

This of course raises the most important question: what happens if the guest can’t renew its IP address?


In this case, the Linux/CentOS 5 guest VM actually completely lost its IP address after a long period of DHCPREQUEST attempts, fell off the network entirely and caused my phone to go nuts with Nagios alerts.

Now of course in any sane production environment, nobody would be running bonnie++ processes on a VM on an active server – however there are some pretty key points here:

  • The isolation is a lie: Guests are only *somewhat* isolated from one another – one guest can still mess with another and effectively denial-of-service attack the other VMs by utilising all the available resources.
  • Guests can be jerks: Organisations running KVM (or some other systems) with untrusted guest VMs should carefully consider how they are going to monitor and protect the service from users running crazily resource intensive processes. (after all, there will be someone who wants to bonnie++ test their new VM simply for the lols).
  • cgroups to the rescue? Linux cgroups does have an I/O controller (blkio-cgroup), although whilst this controls read/write flow, it won’t restrict seeks, which can also badly impact spinning-rust based servers.
  • WTF, DHCP? The approach of the guest simply dropping its DHCP address after losing contact with the DHCP server is a pretty bad design limitation – if the DHCP server is unreachable, it should keep the original address (of course, if the “physical” ethernet link dropped, that would be a different situation, and dropping the address would make sense).
  • Also: I wonder which OSes/distributions exhibit the above behaviour?
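As a partial guest-side workaround, ISC dhclient can be told not to give up quickly, and to fall back to a known-good static lease if the server stays unreachable – a sketch only, with the address details below being examples:

```
# dhclient.conf (sketch) -- keep retrying, and fall back to a static
# lease if the DHCP server can't be reached at all.
timeout 60;
retry 30;
lease {
    interface "eth0";
    fixed-address 10.8.12.5;
    option subnet-mask 255.255.255.0;
    option routers 10.8.12.1;
}
```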

I’m currently running a number of bonnie++ tests on my KVM server and will have a blog post in the near future detailing the findings in more detail. I’m also planning to look into cgroups and other resource control and limiting functions, and will report back on how these fare when guest VMs run heavy processes.
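As a taste of the cgroups angle, the blkio controller can cap a guest’s raw throughput (though, as noted above, not its seeks) – a rough sketch, kernel permitting, with the mount point, device numbers, limit and PID all placeholder values:

```
# Cap a misbehaving guest's reads on /dev/sda (major:minor 8:0) to ~10MB/s.
mkdir -p /cgroup/blkio/guests/devbox01
echo "8:0 10485760" > /cgroup/blkio/guests/devbox01/blkio.throttle.read_bps_device
# Attach the guest's qemu-kvm process to the group:
echo 12345 > /cgroup/blkio/guests/devbox01/tasks
```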

Overall it made my weekend of geekery that bit more exciting. :-D