Tag Archives: opinions

I’m a highly opinionated person and always up for a good debate over something. These are my personal opinions and don’t necessarily reflect those of my company, my clients or other business involvements.

Hard drives can be bad influences on your RAID

RAID is designed to handle the loss of hard disks due to hardware failure and can ensure continual service during such a time. But hard drives are wonderful creatures and instead of dying quickly, they can often prolong their death with bad sectors, slow performance or other nasty issues.

In a RAID array, if a disk fails in a clearly defined fashion, the RAID array will mark it as failed and move on with its life. But if the disk is still functioning at reduced performance, write operations on the array will be slowed down to the speed of the slowest disk, as the write doesn’t return as complete until all disks have completed their operation.

Gradual decreases in I/O performance can be tricky to spot, until performance reaches a level so truly terrible that it can’t go unnoticed, impacting services in a clear and obvious fashion.

Thankfully tools like Munin make it much easier to see degrading performance over time. I was recently having I/O performance issues thanks to a failing disk, and using Munin I was quickly able to see which disk was responsible, as well as the level of impact it was having on my system’s performance.

Got to love that I/O wait time!

Wasting almost 2 cores of CPU due to slow I/O holding up processes.

The CPU usage graph is actually very useful for checking out storage related problems, since it records the time the CPU spends idle while waiting for storage to catch up and provide the data required for operations.

This alone isn’t indicative of a fault – you could get similar results if you are loading your system with too many I/O intensive tasks and your storage just isn’t fast enough for your needs (are hard disks ever fast enough?), plus disk encryption always imposes some noticeable amount of I/O wait – but it’s a good first place to look.
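If you don’t have Munin trending this history already, a quick snapshot from the shell can still reveal both the I/O wait and a lagging disk. A minimal sketch using iostat from the sysstat package (device names will vary on your system):

```
# Extended per-device statistics, refreshed every 5 seconds:
iostat -x 5

# Columns worth watching:
#   await - average time (ms) to service I/O requests, including
#           queue time; one disk with await far above its siblings
#           in the same array is a prime suspect.
#   %util - percentage of time the device was busy; a disk pinned
#           near 100% while its peers idle is struggling.
```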

All the disks!

The disk latency graph is also extremely valuable and quickly shows the disk responsible. My particular example isn’t ideal, since Munin has decided to pick up all my LVM volumes and include them on the graphs, which makes it very unreadable.

Looking at the stats it’s easy to see that /dev/sdd is suffering, with an average latency of 288ms and a max peak of 7.06 *seconds*. Marking this disk as failed in the array instantly restored performance and I was then able to replace the disk and rebuild the array, restoring expected performance.
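For anyone wondering what “marking the disk as failed” looks like in practice with Linux software RAID, here’s a rough sketch using mdadm – the array name (/dev/md0) and member device (/dev/sdd1) are examples, so substitute your own layout:

```
# Mark the misbehaving member as failed and remove it from the array:
mdadm --manage /dev/md0 --fail /dev/sdd1
mdadm --manage /dev/md0 --remove /dev/sdd1

# After physically swapping in a replacement disk, add it back
# and the array will begin rebuilding:
mdadm --manage /dev/md0 --add /dev/sdd1

# Keep an eye on rebuild progress:
cat /proc/mdstat
```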

Note that this RAID array is built with consumer grade SATA disks, which are particularly bad for this kind of issue – an enterprise grade SATA disk would have been more likely to fail faster and more definitively, as they are designed primarily for RAID environments where the health of the array is more important than one disk doing everything possible to keep itself going.

In my case I’m using software RAID which makes it easy to see the statistics of each disk, since the controller is acting in a JBOD mode and exposing the disks directly to the OS. Using consumer disks like these could be much more “interesting” with a hardware RAID controller that wouldn’t expose the same amount of information… if using a hardware RAID controller, I’d advise to shell out the cash and use enterprise grade disks designed for RAID arrays or you could have a much more difficult life.

Android, the leading proprietary mobile operating system

The Linux kernel has had a long history in the mobile space, with the successes and benefits of the OS in the embedded world transferring across to the smart phone and tablet market once devices evolved to a level requiring (and supporting) powerful multitasking operating systems.

But whilst there had been other Linux-based mobiles before, it wasn’t until Android was first released to the world by Google that Linux began to obtain true mass-market consumer acceptance. With over 1 billion devices activated by late 2013, Android is certainly the single most successful mobile Linux distribution ever and possibly even the single largest mobile OS on the basis of number of devices sold.

Whilst Open Source and Free Software [by Free Software I mean software that is Libre, ie Free as in Freedom, rather than Free as in Beer] had historically succeeded strongly in the server space, they always suffered limited mass market appeal on the desktop. With the sudden emergence and success of Android, proponents of both the Open Source and Free Software camps could enjoy a moment of victory and success. Sure we may not have won the desktop wars, and sure it wasn’t GNU/Linux in the traditional sense, but damnit, we had a Linux kernel in every other consumer device, something worth celebrating!

 

Whilst Android still features the Linux kernel, it differs from a conventional GNU/Linux system, as it doesn’t feature the GNU user space and applications. When building Android, Google took the Open Source Linux kernel but threw out most of the existing user space, instead building a new Apache-licensed user space designed for consumers and interaction via touch interfaces.

For Google themselves, Android was a way to prevent vendors like Microsoft or Apple from gaining a new monopoly in the mobile world, where they could squeeze Google out and strangle their business in the newly emerging market. A world where Microsoft or Apple could dictate what browser or search engine a user could use would not be in Google’s best financial interests, and it was vital to take steps to prevent that from being possible.

For device vendors, the proposition was that Android would reduce the R&D costs of competing with incumbent market players, make their devices more attractive and allow some collaboration with their peers via a common application platform, which would attract developers and enable a strong ecosystem, in turn making Android phones more attractive for consumers.

For Google and device vendors, this was a win-win relationship and it quickly began to pay off.

 

Yet even as soon as we started consuming the delicious Android dessert (with maybe a slightly dubious Google advertising crust we could leave on the side), we found the taste souring with every mouthful. For whilst Google and device vendors bought into the idea of Android the operating system, they never bought into the idea of the Free Software movement, which had led to the software and community that made this success possible in the first place.

To begin with, unlike the GNU/Linux distributions pre-dating Android which generally fostered collaboration and joint effort around a shared philosophy of working together to make a better system, Android was developed in a closed-room model, with Google and select partners developing new features in private before throwing out completed releases to coincide with new devices. It’s an approach that’s perfectly compliant with Open Source licensing, but not necessarily conducive to building a strong community.

Even the open source nature of the OS was quickly tainted, with device vendors taking Android and instead of evolving the source code as part of a community effort, they added in their own proprietary front ends and variations, shipped devices with locked boot loaders preventing OS customisation and shoved binary drivers and firmware into their device kernels.

This wasn’t the activity of just a few bad vendors either. Even Google’s own popular “Google Nexus” series, targeted at developers of both applications and the operating system, requires proprietary blobs to get hardware such as cellular radios, WiFi, cameras and GPUs to function. [Depending on whom you ask, this is a violation of the Linux kernel’s GPLv2 license, but there is disagreement amongst kernel developers, plus fears that a ban on proprietary kernel drivers would just lead to vendors moving the proprietary blobs to user space – a legally valid but still ethically dubious approach.]

Google’s main maintainer for AOSP recently departed Google over frustrations getting Qualcomm to release drivers for the 2013 revision of the popular Nexus 7 tablet, which illustrates the hurdles that developers face when getting even just the binaries from vendors.

Despite all these road blocks, a strong developer community has still managed to form around hacking on the Android source code, with particular credit to Cyanogenmod, a well polished and very popular enhanced distribution of Android; Replicant, which seeks to build a purely free Android OS, replacing binary blobs along the way; and FDroid, a popular alternative to the “Google Play” application store offering only Free Software licensed applications for download.

It’s still not perfect and there’s a lot of work left to do – projects like Cyanogenmod and Replicant still tend to need many proprietary modules if you want to make full use of the features of your device. The community is working on fixing these shortcomings, but it’s always much more frustrating having to play catch up to vendors, rather than working collaboratively with them.

But whilst this community effort can resolve the issue of proprietary drivers and applications and lead us to a proper Free Software Android, there is a much more tricky issue coming up which could cause far greater headaches.
In order to resolve the issue of Android version fragmentation amongst vendors causing challenges for application developers, Google has been introducing new APIs inside a package called “Google Play Services”, which is a proprietary library distributed only via the Google Play application store.

Any application that is reliant on this new library (not to mention existing proprietary components such as Google Cloud Messaging, used for push notifications) will be unable to run on pure Free Software devices that are stripped of non-free components. And whilst at the immediate moment the features offered by this API are mostly around using specific Google cloud-based APIs and features which are non-free by their very nature, there’s nothing preventing more and more features being included in this API in future, reducing the scope of applications that will run on a Free Software Android.

If Google Play Services proves to be a successful way for Google to enforce consistency and conformity on their platform to tackle the fragmentation issues they face, it’s not inconceivable that they’ll push more and more library functions into proprietary layers distributed via the Play Store like this.

 

But if Google chooses to change Android in this way, I feel that it will be inappropriate to continue calling Android an Open Source or Free Software operating system. Instead it will be better described as a proprietary operating system with an open core – in similar fashion to Apple’s MacOS.

Such an evolution could lead to two distinct forks of Android being created:

  1. Proprietary/Android, the version identified by the public, offered by Google and their associated vendors, a polished experience but with increasingly reduced user and developer freedoms.
  2. Free/Android, the community variations with their own application ecosystem, which diverges away from Proprietary/Android as more and more applications refuse to run on it due to Free/Android lacking libraries like Google Play Services.

Some readers will ponder why having some proprietary components is such a concern – who really wants to hack around with drivers or application compatibility APIs? Generally they’re not the most exciting part of computers [subjectively speaking of course] and on some level I can understand this mindset.

But proprietary software chunks are more than just being an annoyance to developers who want to tinker. Proprietary software makes your device opaque, obscuring what the software is doing, how it works and how it can be (ab)used.

The Google Play application has the capability to install content on your phone, a feature often used to install applications to a device from the browser. But can you be sure that this never happens without your awareness? There’s already due cause to distrust the close association between companies like Google and the NSA, and without the ability to see inside the software’s source code, you can’t be sure of its capabilities.

Building applications around proprietary APIs like Google Play Services removes the freedom of a user to replace calls to proprietary systems with free ones. It may be preferable to use a Free Software mapping API rather than Google’s privacy-lacking Maps offering, for example, but without the source code, it’s not possible to make this change.

Even something as innocent as a driver or the firmware of hardware such as the GSM modem could be turned into a weapon by a powerful adversary, by taking advantage of backdoors in the firmware to deliver malware to spy on an individual – whether for the “right” reasons or not depends on your moral views and who is doing the spying at the time.

Admittedly a pessimistic view, but I’ve laid out my personal justifications for taking this approach before and believe we need to look at how this technology could potentially (hopefully never) be used against individuals for immoral reasons.

 

I think Android illustrates the differences between Open Source and Free Software extremely well. Whilst Android is licensed under an Open Source license, it doesn’t share the philosophy of Free Software.

Its source code is open because that provided Google with a commercial advantage, not because Google believes that user freedom is important. Google and their partners have no qualms about making future applications and/or features proprietary, even to the detriment of developers and users by restricting their freedoms to understand and modify the software in their device.

Richard M. Stallman (RMS), the founder of the Free Software movement, has written about the differences between Free Software and Open Source, telling how whilst these two ideologies have overlapping goals, at times they also differ. In some ways the terminology of Open Source can be dangerous, as it lets us lose sight of the real reasons why software needs to be Free, for Freedom’s sake above all.

 

Interestingly, despite how strongly I feel about Free Software, I’ve found it somewhat easy personally to ignore concerns about proprietary software on mobiles for a prolonged period of time. In many ways, I see my mobile as just a tool and not a serious “real” computer like my GNU/Linux laptop where I conduct most of my digital activities. It’s possibly a result of my historical experiences with the devices, starting off using mobiles when they were only phones and having had them slowly gain more capabilities around me, but always being seen as “phones” rather than “pocket computers”.

I’m certainly a digital native, a child of the internet generation separated from my parent’s generation by being the first to really grow up with widely available internet connectivity and computers. But to me, computers are still laptops and servers, despite having a good understanding of the mobile space and using mobile devices every day to possibly excessive amounts.

Yet for the current and next generation growing up, mobile phones and tablets are *the* computer that will define their learning experiences and interaction with the world – they may very well end up never owning a conventional computer, for the old guard of Windows, Linux and the PC is gone, replaced with iOS, Android and handhelds.

It’s clear that mobile operating systems are the platform of the future; it’s time we consider them equals with our conventional operating systems and impose the same strict demands for privacy and freedom that we have grown to expect onto the mobile space. I know that personally I don’t trust my Android mobile even one tenth as much as I trust my GNU/Linux laptop, and this is unacceptable when my phone already has access to my files, my emails, my innermost private communications with others and who knows what else.

 

So the question is, how do we get from the Kinda-Proprietary/Android we have now, to the Free/Android that we need?

I know there are some who will take a purist approach of running only pure Free Software Android and ignoring any applications or features that don’t run on it as-is. Unfortunately taking this approach will inevitably lead to long term discrepancies between the mass market Android OS and the Free Software purists pulling the OS feature set in different directions.

A true purist risks becoming a second class citizen – we are already at the stage where not being able to run popular applications can seriously restrict your ability to take part in our world – consider the difficulties of not being able to load applications needed to use public transport, do banking (online or NFC banking) or to communicate with friends due to all these applications requiring a freedom impacting proprietary layer.

It will be difficult to encourage users and application developers to use a Free Software Android build if they discover that their existing collection of applications relying on various proprietary APIs and library features no longer works. We need to be somewhat pragmatic and make it easier for them to take up Free Software and still run proprietary applications on top of a free base, until such time as free alternatives arise for their applications.

I think the solution is a collection of three different (but all vital) efforts:

  1. Firstly, support the development of community Android distributions, such as Cyanogenmod and Replicant, something which has been successful so far. It’s clear that Google isn’t interested in working as equals with the community, so having a strong independent community is important for grass-roots innovation.
  2. Secondly, support the replacement of binary blobs in the core Android OS, such as the work that the Replicant project has started with writing Free Software drivers for hardware.
  3. Thirdly (and not at all least), make it easy to provide the same functionality in Free/Android as Proprietary/Android by re-implementing closed source applications and libraries such as the Google Play application store, Google Cloud Messaging (push notifications) and the Google Play Services library/API.

Whether we like it or not, Google’s version of Android will be the platform that the majority of developers target long term. It doesn’t suit all developers, but it has suited most Free (as in beer and/or Freedom) and paid application developers for Android well enough for a long period already that I don’t see it being easy to de-rail that momentum.

If we can re-implement Google’s proprietary layers to a level sufficient for maintaining compatibility with the majority of these applications, it opens up some interesting possibilities. A Free/Android mobile with a Free/PlayServices API layer developed using the documented API calls published by Google is entirely possible and would allow users to run a Free/Android mobile and still maintain support for the majority of public applications being released for the Android platform, even if they use more and more proprietary API features.

Such a compatibility layer will enable users to run applications on their own terms – a user might decide to only run Free as in Freedom software, or they could decide that running proprietary software is OK sometimes – and that’s an acceptable choice, but the user is the one that should be making it, not Google or their device vendor.

Potentially we could take this idea a step further and re-implement features like contact and setting synchronisation against a Free Software server that technically capable users can choose to setup on their own servers, giving them the benefits of cloud-type technologies without loss of freedoms and privacy that takes place if using the Google proprietary features.

 

I’m not alone in these concerns – neither RMS nor the Free Software Foundation (FSF) has been idle on this issue – RMS has an excellent write up on the freedom of Android here, and on a more mainstream level, the FSF is running campaigns promoting freeing Android phones and encouraging efforts to keep the platform Free as in Freedom.

I’m currently taking steps to move my Android mobile off various proprietary dependencies to Free Software alternatives – it’s going to be slow and gradual and it will take time to determine replacements for various applications and libraries.

I haven’t done much in the way of Android application development, but I’m not afraid to pick up some Java if that’s what it takes to fill in a few gaps to get there – and if it means reverse engineering some features like Google Play Services, I’ll go down that path if need be.

Because Free Software computing is vital for privacy, vital for security and vital for a free society itself. And if the cost is a few weekends hacking at code, it’s a price well worth paying.

Attack vectors for personal computers

The sad thing is, I ran out of space to keep adding arrows.


I’ve put together a (very) simplified overview of various attack vectors for an end user’s personal computer. For a determined attacker with the right resources, all of the above is potentially possible, although whether an attacker would go to this much effort, would depend on the value of the data you have vs the cost of obtaining it.

By far the biggest risk is you, the end user – strong, unique passwords and following practices such as disk encryption and not installing software from questionable vendors are the biggest protections from a malicious attacker and will protect against most of the common attacks, including physical theft and remote attacks via the internet.

Where it gets nasty is when you’re up against a more determined attacker who can get hardware access to install keyloggers, can force a software vendor to push a backdoored software patch to your system via an update channel (ever wondered what the US government could make Microsoft distribute via Windows Update for them?), or has the knowledge of how to pull off an advanced attack such as putting your entire OS inside a hypervisor by attacking UEFI itself.

Of course never forget the biggest weakness – beating a user with a wrench until they give up their password is a lot cheaper than developing a sophisticated exploit if someone just wants access to some existing data.

Why SSL is really ISL

Secure transmission of data online is extremely important to avoid attackers intercepting data or claiming to be a site that they are not. To provide this, a technology called SSL/TLS (and commonly seen in the form of https://) was developed to provide security between an end user and a remote system to guarantee no interception or manipulation of data can occur during communications.

SSL is widely used for a huge number of websites ranging from banks, companies, and even this blog, as well as other protocols such as IMAP/SMTP (for Email) and SSH (for remote server shell logins).

As a technology, we generally believe that the cryptography behind SSL is not currently breakable (excluding attacks against some weaker parts of older versions) and that there’s no way of intercepting traffic by breaking the cryptography (assuming that no organisation has broken it and is staying silent on that fact for now).

However this doesn’t protect users against an attacker going around SSL and attacking other weak points outside the cryptography, such as the validation of certificate identity, or avoiding SSL entirely and faking connections to users – which is a lot easier than having to break strong cryptography.

 

Beating security measures by avoiding them entirely

It’s generally much easier to beat SSL by simply avoiding it entirely. A technology aware user will always check for the https:// URL and for the lock symbol in their browser, however even us geeks sometimes forget to check when in a hurry.

By intercepting a user’s connection, an attacker can communicate with the secure https:// website, but then replay it to the user in an unencrypted form. This works perfectly and unless the user explicitly checks for the https:// and lock symbol, they’ll never notice the difference.

Someone is going to be eating a lot of instant noodles for the next few years...

If you see this, you’re so, so, fucked. :-(

Anyone on the path of your connection could use this attack, from the person running your company network, a flatmate on your home LAN with a hacked router or any telecommunications company that your traffic passes through.

However it’s not a particularly sophisticated attack, and a smart user can detect when it’s taking place and be alerted that a malicious network operator is trying to attack them.
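If you’re ever suspicious, you can also bypass the browser entirely and ask the server directly what certificate it presents – a handy way of spotting tampering somewhere on the path. A rough sketch using openssl (the hostname is an example):

```
# Connect directly and display the certificate chain the server sends:
openssl s_client -connect www.example.com:443 </dev/null

# Or just extract the subject, issuer and fingerprint for comparison:
openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -fingerprint
```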

 

“Trusted” CAs

Whilst SSL is vendor independent, by itself SSL only provides security of transmission between yourself and a remote system – it doesn’t provide any validation or guarantee of identity.

In order to verify whom we are actually talking to, we rely on certificate authorities – these are companies which, for a fee, validate the identity of a user and sign their SSL certificate with the CA’s own certificate. Once signed, as long as your browser trusts the CA, your certificate will be accepted without warning.

Your computer and web browsers have an inbuilt list of hundreds of CAs around the world which you trust to be reliable validators of security – as long as none of these CAs sign a certificate that they shouldn’t, your system remains secure. If any one of these CAs gets broken into or is ordered by a government to sign a certificate, then an attacker could fake any certificate they want and intercept your traffic without you ever knowing.
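To appreciate just how many organisations you’re trusting, take a look at the CA bundle shipped with your own system. A quick sketch for a typical GNU/Linux machine (paths vary by distribution – this assumes the common /etc/ssl/certs layout):

```
# Count the trusted CA certificates installed on the system:
ls /etc/ssl/certs/*.pem | wc -l

# See who these CAs actually are:
for cert in /etc/ssl/certs/*.pem; do
    openssl x509 -in "$cert" -noout -subject
done | sort | less
```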

I sure hope all these companies have the same strong validation processes and morals that I’d have when validating certificates…

You don’t even need to be a government to exploit this – if you can get access to the end user device that’s being used, you can attack it.

A system administrator in a company could easily install a custom CA into all the desktop computers in the company. By doing this, that admin could then generate certificates faking any website they want, with the end user’s computer accepting the certificate without complaint.

The user can check for the lock icon in their browser and feel assured of security, all whilst having their data intercepted in the background by a malicious admin. This is possible due to the admin-level access to the computer, but what about external parties that don’t have access to your computer, such as Internet Service Providers (ISPs) or organisations like government agencies*? (* assuming they don’t have a backdoor into your computer already).

An ISP’s attack options are limited since they can’t fake a signed certificate without help from a CA. There’s also no incentive to do so – stealing customer data will ensure you don’t remain in business for a particularly long time. However an ISP could be forced to do so by a government legal order, and is a prime place for the installation of interception equipment.

A government agency could use their own CA to sign fake certificates for a target they wish to intercept, or force a CA in their jurisdiction to sign a certificate for them with their valid trusted CA if they didn’t want to give anything away by signing certificates under their organisation name. (“https://wikileaks.org signed by US Government” doesn’t look particularly legit to a cautious visitor, but would “https://wikileaks.org signed by Verisign” raise your suspicions?)

If you’re doubting this is possible, keep in mind that your iOS or Android devices already trust several government CAs, and even browsers like Firefox include multiple government CAs.

http://support.apple.com/kb/ht5012

Even the open source kids are pwned

It’s getting even easier for a government to intercept if desired – more and more countries are legislating lawful interception requirements into ISP legislation, requiring ISPs to intercept a customer’s traffic and provide it to a government authority upon request.

This could easily include an ISP being legally required to route traffic for particular users through a sealed device provided by the government, which could be performing any manner of attacks, including serving up intercepted SSL certs to users.

 

So I’m fucked, what do I do now?

Having established that by default, your SSL “secured” browser is probably prime for exploit, what is the fix?

Fortunately the trick of redirecting users to unsecured content is getting harder, with browsers using methods such as Extended Validation certs to make the security more obvious – but unless you actually check that the site you’re about to log in to is secure, all these improvements are meaningless.

Some extensions such as HTTPS Everywhere have limited whitelists of sites that they know should always be SSL secured, but these aren’t complete lists and can’t be relied on 100%. Generally the only true fix for this issue is user education.

The CA issue is much more complex – one can’t browse the web without certificate authorities being trusted by the browser, so you can’t just disable them. There are a couple of approaches you could consider, and it really depends whether you are concerned about company interception or government interception.

If you’re worried about company interception by your employer, the only true safeguard is not using your work computer for anything other than work. I don’t do anything on my work computer other than my job; I have a personal laptop for any of my own stuff. In addition to preventing a malicious sysadmin from installing a CA, using a personal machine also gives me legal protection against an employer reading my personal data on the grounds of it being on a company owned device.

If you’re worried about government interception, the lengths you go to protect against it depend on whether you’re worried about interception in general, or whether you must ensure security to some particular site/system at all costs.

A best-efforts approach would be to use a browser plugin such as Certificate Patrol, which alerts you when a certificate on a site you have visited has changed – sometimes this is legitimate, such as an old certificate expiring and a new one being registered, or it could be malicious interception taking place. It would give some warning of widespread interception, but it’s not an infallible approach.

A more robust approach would be to have two different profiles of your browser installed. The first is for general web browsing and includes all the standard CAs. The second is a locked down one for sites where you must have 100% secure transmission.

In the second browser profile, disable ALL the certificate authorities and then install the CAs or the certificates themselves for the servers you trust. Any other certificates, including those signed by normally trusted CAs, will be marked as untrusted.

In my case I have my own CA cert for all my servers, so I can import this CA into an alternative browser profile and safely verify that I’m truly talking to my servers directly, as no other CA or cert will ever be accepted.
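For Firefox, much of this can be scripted. A rough sketch using certutil from the libnss3-tools package – the profile name and CA file are examples, and Firefox generates a random prefix for the profile directory:

```
# Create a dedicated locked-down profile:
firefox -no-remote -CreateProfile secure

# Locate the new profile directory (random prefix + ".secure"):
profile=$(ls -d ~/.mozilla/firefox/*.secure)

# Import your own CA, trusted for SSL server identification ("C,,"):
certutil -A -n "My Private CA" -t "C,," -i my-ca.pem -d "$profile"

# Launch the hardened profile:
firefox -no-remote -P secure
```

Marking the hundreds of built-in CAs as untrusted still has to be done by hand through the certificate manager in the browser preferences, but this at least gets your own CA into the right profile.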

 

I’m a smart user and did all this, am I safe now?

Great stuff, you’re now somewhat safer, but a determined attacker still has several other approaches they could take to exploit you, by going around SSL:

  1. Malicious software – a hacked version of your browser could fake anything it wants, including accepting an attacker’s certificate but reporting it as the usual trusted one.
  2. A backdoored operating system allows an attacker to install any software desired. For example, the US Government could legally force Apple, Google or Microsoft to distribute some backdoor malware via their automatic upgrade systems.
  3. Keyboard & screen logging software stealing the output and sending it to the attacker.
  4. Some applications/frameworks are just poorly coded and accept any SSL certificate regardless of whether or not it’s valid; these applications are prime targets for exploitation.
  5. The remote site could be attacked, so even though your communications to it are secured, someone else could be intercepting the data on the server end.
  6. Many, many more.

 

The best security is awareness of the risks

There’s never going to be a way to secure your system 100%  – but by understanding some of the attack vectors, you can decide what is a risk to you and make the decision about what you do and don’t need to secure against.

You might not care about any of your regular browsing being intercepted by a government agency, in that case, keep browsing happily.

You might be worried about this since you’re in the region of an oppressive regime, or are whistle-blowing government secrets, in which case you may wish to take steps to ensure your SSL connections aren’t being tampered with.

And maybe you’ve decided that you’re totally screwed and will resort to communicating by meeting your friends in person in a Faraday cage in the middle of nowhere. It all comes down to the balance of risk and usability you wish to have.

PRISM Break

The EFF has put together a handy website for anyone looking to replace some of their current proprietary/cloud controlled systems with their own components.

You can check out their guide at: http://prism-break.org/

Generally it’s pretty good, although I have concerns around a couple of their recommendations:

  • DuckDuckGo is a hosted proprietary service, so whilst they claim to not track or record searches, it’s entirely possible that they could be legally forced to do so for a particular user/IP address and have a gag order on that. Having said this, it sounds like they’re the type of company that would push back against such requests as much as possible.
  • Moving from Gmail to something like Riseup is just replacing one centralised provider with another; it doesn’t add any additional protection against PRISM.

As always, the only truly secure system (excluding security bugs etc) is one you control entirely. If a leak of your data must be avoided at all costs, you need to be running your own server.

Don’t abandon XMPP, your loyal communications friend

Whilst email has always enjoyed a very decentralised approach where users can be expected to be on all manner of different systems and providers, Instant Messaging has not generally enjoyed the same level of success and freedom.

Historically many of my friends used proprietary networks such as MSN Messenger, Yahoo Messenger and Skype. These networks were never particularly good IM networks, rather what made those networks popular at the time was the massive size of their user bases forcing more and more people to join in order to chat with their friends.

This quickly led to a situation where users would have several different chat clients installed, each with their own unique user interface and functionality, in order to communicate with one another.

Being an open standards and open source fan, this has never sat comfortably with me – thankfully in the last 5-10 years, an open standard called XMPP (also known as Jabber) has risen up and seen widespread adoption.


XMPP brought the same federated decentralised nature that we are used to in email to instant messaging, making it possible for users on different networks to communicate, including users running their own private servers.

Just like with email, discovery of servers is done entirely via DNS and there is no one centralised company, organisation or person with control over the system – each user’s server is able to run independently and talk directly to the destination user’s server.
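You can see this DNS-based discovery in action yourself – XMPP publishes SRV records, much like email’s MX records. A quick sketch with dig (the domain is an example):

```
# Where should clients connect for users @example.com?
dig +short _xmpp-client._tcp.example.com SRV

# Where should other XMPP servers deliver federated traffic?
dig +short _xmpp-server._tcp.example.com SRV
```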

With XMPP, the need to run multiple different chat programs or connect to multiple providers was also eliminated. For the first time I was able to chat using my own XMPP server (ejabberd) to friends using their own servers, as well as friends who just wanted something “easy” using hosted services like Google Talk which support(ed) XMPP, all from a single client.

Since Google added XMPP support to Google Talk, the XMPP user base has grown even larger, thanks to the strong popularity of Gmail creating so many Google Talk users at the same time. With so many of my friends using it, it has been easy to add them to my contacts and interact with them on their preferred platform, without violating my freedom or losing control over my server ecosystem.

Sadly this is going to change. Having gained enough critical mass, Google has now decided to violate their “Don’t be evil” company motto and is moving to lock users into their own proprietary ecosystem, by replacing their well established Google Talk product with a new “Hangouts” product which drops XMPP support.

There’s a very good blog write up here on what Google has done, how it’s going to negatively impact users and why Google’s technical reasons are poor excuses, which I would encourage everyone to read.

The scariest issue is the fact that a user upgrading to Hangouts gets silently disconnected from communicating with their non-Google XMPP using friends. If you use Google Talk currently and upgrade to Hangouts, you WILL lose the ability to chat with XMPP users, who will just appear as offline and unreachable.

It’s sad that Google has taken this step and I hope long term that they decide as a company that turning away from free protocols was a mistake and make a step back in the right direction.

Meanwhile, there are a few key bits to be aware of:

  1. My recommendation currently is: do not upgrade to Hangouts under any circumstance – you may be surprised to find who drops off your chat list, particularly if you have a geeky set of friends on their own domains and servers.
  2. Whilst you can hang onto Google Talk for now, I suspect long term Google will force everyone onto Hangouts. I recommend considering new options long term for when that occurs.
  3. It’s really easy to get started with setting up an XMPP server – take a look at the powerful ejabberd or something more lightweight like Prosody (see the sketch after this list). Or you could use a hosted service such as jabber.org for a free XMPP account run by a third party.
  4. You can use a range of IM clients for XMPP accounts, consider looking at Pidgin (GNU/Linux & Windows), Adium (MacOS) and Xabber (Android/Linux).
  5. If you don’t already, it’s a very good idea to have your email and IM behind your own domain like “jethrocarr.com”. You can point it at a provider like Google, or your own server and it gives you full control over your online identity for as long as you wish to have it.
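As mentioned in point 3 above, getting a basic server going really is easy. A rough sketch for Prosody on a Debian-type system – the domain is an example, and for production use you’d also want the DNS SRV records discussed earlier plus a proper TLS certificate:

```
# Install Prosody from the distribution repositories:
apt-get install prosody

# Tell Prosody which domain to serve by adding a VirtualHost
# entry to /etc/prosody/prosody.cfg.lua:
#   VirtualHost "example.com"

# Create your account and restart the daemon:
prosodyctl adduser you@example.com
service prosody restart
```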

I won’t be going down the path of using Hangouts, so if you upgrade, sorry but I won’t chat to you. Please get an XMPP account from one of the many providers online, or set up your own server, something that is generally a worthwhile exercise and learning experience.

If someone brings out a reliable gateway for XMPP to Hangouts, I may install it, but there’s no guarantee that this will be possible – people have been hoping for a gateway for Skype for years without much luck, so it’s not a safe assumption to have.

Be wary of some providers (Facebook and Outlook.com) which claim XMPP support, but really only support XMPP to chat to *their* users and lack XMPP federation with external servers and networks, which defeats the whole point of a decentralised open network.

If you have an XMPP account and wish to chat with me, add me using my contact details here. Note that I tend to only accept XMPP connections from people I know – if you’re unknown to me but want to get in touch, email is best at first.

Holy Relic of the Server Farm

At work we’ve been using New Relic, a popular software-as-a-service monitoring platform to monitor a number of our servers and applications.

Whilst I’m always hesitant of relying on external providers and prefer an open source solution where possible, the advantages provided by New Relic have been hard to ignore, good enough to drag me away from the old trusty realm of Munin.

Like many conventional monitoring tools (eg Munin), New Relic provides good coverage and monitoring of servers, including useful reports on I/O, networking and processes.

Bro, I’m relaxed as bro. (server monitoring with New Relic)

However where New Relic really provides value is with its monitoring of applications, thanks to a number of agents for various platforms including PHP, Ruby, Python, Java and .NET.

These agents hook into your applications and profile their performance in detail, showing details such as breakdown of latency by layer (DB, language, external, etc), slow DB queries and other detailed traces.

For example, I found that my blog was taking around 1,000ms of processing time in PHP when serving up page content. The VM itself had little load, but WordPress is just not a particularly well oiled application.

Before and after installing W3 Total Cache on my blog. Next up is to add Varnish and drop server times even further.

What's my DB up to?

Toss out your crufty DBA, we have a new best friend! (just kidding DBAs, I still love ya)

New Relic will even slip an addition into the client-side content which measures the browser-side performance and experience for users visiting your website or application, allowing you to determine cause of slow page loads.

Generally my issue is too much large content + slow links

There’s plenty more on offer – I haven’t even looked at all the options and features yet myself. The best approach is to sign up for a free account and trial it for a while to see if it suits.

New Relic recently added a Mobile Application agent for iOS and Android developers, so it’s also attractive if you’re writing mobile applications and want to check how they’re performing on real user devices in the wild.

 

Installation of the server agent is simply a case of dropping a daemon onto the host (with numerous distribution packages available). The application agents vary depending on language, but are either a case of loading the agent with the application, or bundling a module into your application.
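To give an idea of the effort involved, here’s roughly what the server agent install looked like on an RHEL-type system at the time of writing – this assumes you’ve already added New Relic’s yum repository and have your license key at hand:

```
# Install the server monitoring daemon:
yum install newrelic-sysmond

# Attach it to your New Relic account:
nrsysmond-config --set license_key=YOUR_LICENSE_KEY

# Start it; the host shows up in the dashboard a few minutes later.
/etc/init.d/newrelic-sysmond start
```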

It scales well performance wise – we’ve installed the agent on some of AU’s largest websites with very little performance impact in most cases, and the New Relic interface remains fast and responsive.

The only warning I’d make is that the agent uses HTTP by default, rather than HTTPS – whilst the security impact is somewhat limited as the data sent isn’t too confidential, I would really prefer the agent used HTTPS-only. (There does appear to be an “enterprise security” mode which forces HTTPS-only agents and adds other security options, so do some research if it’s a concern.)

 

Pricing is expensive, particularly for the professional account package with the most profiling. Having said that, for a web company where performance is vital, New Relic can quickly pay for itself with reduced developer time spent on issues and fast alerting to performance related issues. Both operations and developers have found it valuable at work, and I’ve personally found this a much more useful tool than our Splunk account.

If you’re only interested in server monitoring you will probably find better value in a traditional Munin setup, unless you value the increased simplicity of configuration and maintenance.

 

Note that New Relic is also not a replacement for alert-monitoring such as Nagios – whilst New Relic can generate alerts for performance issues and other thresholds, my advice is to rely on Nagios for service and resource overload/failure and rely on New Relic monitoring for alerting to abnormal or negative performance trends.

I also found that I still find Awstats very useful – whilst New Relic has some nice browser stats and geography stats, Awstats is more useful for the “how much traffic and data has my website/application done this month” type questions.

It’s not for everyone’s requirements and budget, but I do highly recommend having an initial trial of it, whether you’re running a couple of servers or a massive enterprise.

Marriage Equality

New Zealand has passed legislation allowing same-sex marriage this evening! I’m so happy for my many LGBT friends and love the fact that no matter whom any of us love, we can get the same recognition and rights in not just law, but also have society recognise these choices.

I love the fact that my beautiful lady friend can settle down with another beautiful lady friend should she choose to. I love the fact my awesome male friend can find another awesome male to spend life with should he choose. And I love the fact that their relationships do not impact on the meaning of my own conventional relationship with Lisa in any way.

Unleash the Gayroller 2000!

Run conservatives! (credit to theoatmeal.com)

NZ has had civil unions for some time, giving same-sex couples similar legal rights, but this removes the last barriers to equality and gives heterosexual and homosexual relationships the same footing in society.

This may sound like a small thing, but it has huge ramifications – it removes the distinction of same-sex relationships as being different in some negative sense. In the eyes of the law, there is no difference in any marriage no matter what genders the participants are, and this will reflect over time in society as well.

It’s also solved the same-sex adoption issue, with the legislation opening the door for the first gay adoptions to take place.

Generally New Zealand is very liberal – our government is almost entirely secular, and mentioning and promoting religious views in government is generally the realm of fringe one or two MP parties, which has really helped with this legislation as it hasn’t gotten so tangled up with religious arguments.

Whilst there were still a few conservative fucktards disturbed individuals who voted against the bill, the support has been strong across the entire house and across parties.

I’ve personally noted that the younger generations are the most open and comfortable around same-sex relationships, whereas the older generations tend to support their rights in general, but still raise their eyebrows when they see a same-sex couple, a hangover from decades of stigma where they still feel the need to point it out as something unusual.

There’s still some stigma and fights remaining – the law still won’t recognise polygamous marriage and government departments still tend to vary a lot on the gender question (although the new marriage laws allow you to be Partner/Partner rather than Husband/Husband, Wife/Wife or Husband/Wife; and NZ passports now have a X/Other gender option, so it’s getting better.).

There’s also some more understanding needed in society around the fluid nature of sexuality – if I sleep with guys and then settle down with a lady (or vice versa), it’s not a case of going through a phase before settling on straight or gay – there’s a whole range of grey areas in between which are very hard to define, yet I feel that society in general is very quick to label and categorise people into specific boxes when the reality is that we’re just undefinable.

But despite a lot of remaining work, I’m extremely happy with this legislation and happy that I voted for a party that supported it 100% as an undeniable human rights issue. Maybe I even had a momentary twinge of patriotism for NZ. :-)

Launch Paywall!

So far my time working at Fairfax Media AU has been pretty much non-stop from day one – whilst media companies are often thought of as dull and slow moving, the reality is that companies like Fairfax are huge, include a massive range of digital properties and are not afraid to invest in new technologies where there are clear business advantages.

It’s great stuff for me, since I’m working with a really skilled group of people and being given some awesome and challenging projects to keep me occupied.

Since January I’ve been working with the development team to build up the infrastructure needed to run the servers for our new paywall initiative. It’s been a really fun experience working with a skilled and forward thinking group of developers, and being able to build infrastructure that is visible to millions of people at such a key stage in the media business is a very rare opportunity.

Our new paywall launched a few days ago on the Sydney Morning Herald and The Age websites for select international countries (including the US and UK), before we roll it out worldwide later in the year.

Paywalls on SMH and The Age, as seen for select international countries.

Mobile hasn’t been neglected, a well polished interface has been provided there too.

Obviously paywalls are a pretty controversial topic; there’s already heaps of debate online, ranging from acceptance to outright rage at the idea of having to pay for daily news content, and plenty of reflection over the long term sustainability of Fairfax itself.

I won’t go into too much detail – I have somewhat mixed personal views on the idea – but generally I think Fairfax has created a pretty good trade off between making content casually available and sharable. Rather than a complete block, the paywall is porous and allows a select number of article reads a month, along with liberal social media sharing and reading of links shared by others.

Changing a business model is always hard and there will be those who are both happy and unhappy with the change; time will tell how well it works. Thankfully my job is about designing and managing the best infrastructure to run the vision that comes down from management, rather than trying to see into the future of the consumption and payment of media content, which has got to be the hardest job around right now…

The future is digital!

Whatever your format of choice, we have your fix!

It’s been an interesting project working with a mix of technologies including both conventional and cloud based virtualisation methods and using tools such as Puppet for fast deployment and expansions of the environment, along with documenting and ensuring it’s an easy environment to support for the future engineers who operate it.

I’ve only been here 6 months and have already picked up a huge range of new skills, I’m sure that the next 6 months will be even more exciting. :-)

Google Play Region Locking

Sadly Android is giving me further reason for dissatisfaction with the discovery that Google Play refuses to correctly detect the current country I’m in and is blocking content from them.

Being citizens of New Zealand and living in Australia, both Lisa and I have a mixture of applications for services in both countries –  a good example being banking apps, where we tend to want to be able to easily manage our financial affairs in both countries.

The problem is that all my phones are stuck as being in New Zealand, whilst Lisa’s phone has decided to be stuck in AU. This hasn’t impacted me much yet, since the Commbank application isn’t region locked, and being treated by Google as being “inside” NZ, I can install the ANZ Go Money application from within AU.

However because Lisa’s phone is the other way around and stuck in AU, she’s unable to install both the ANZ Go Money and Vodafone NZ My Account application, thanks to the developers of both applications region locking them to NZ only.

What makes the issue even more bizarre is that both Lisa’s and my Google Accounts were originally created in New Zealand, yet they are behaving differently.

Thou shalt not pass!

There’s a few particularly silly mistakes here. Firstly, ANZ shouldn’t be assuming their customers are always in NZ – us Kiwis do like to get around the world and tend to want to continue managing our financial affairs from other countries. The app is clearly one that should be set to a global region.

I suspect they made this mistake to try and avoid confusing AU and NZ ANZ customers with different apps, but surely they could just make it a bit clearer in the header of the app name…

 

Secondly, Google… I’m going to ignore the fact that they even offer region locking for applications (urgghhg DRM evil) and focus on the fact that the masters of information can’t even properly detect what fucking country a user is in.

I mean it must be so hard to do, when there are at least 4 ways of doing it automatically.

  1. Use the GPS in every single Android phone made.
  2. Use the SIM card to determine the country/region.
  3. Do a GeoIP lookup of the user’s IP address (trivial to do – see the sketch after this list).
  4. Use the validated credit card on the user’s wallet account to determine where the user is located. (Banks tend to be pretty good at validating people’s locations and identity!)
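The GeoIP option alone is nearly effortless – a rough sketch using geoiplookup from the geoip-bin package and a public what’s-my-IP service (both the package and service names are examples of what’s commonly available):

```
# Ask the GeoIP database which country owns our current public IP:
geoiplookup "$(curl -s ifconfig.me)"
```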

Sure, none of these are guaranteed to be perfect, but using at least 3 of the above and selecting the majority answer would lead to a pretty accurate system. As it is, I have several phones currently powered up around my apartment in AU, none of which are validating as being in AU at all:

Hey guys, I’m pretty sure my OPTUS CONNECTED PHONE is currently located in AU.

Instead there is an option buried somewhere in Google’s maze of applications and settings interfaces which must be setting the basis for deciding the country our phones are in.

But which one?

  • The internet suggests that Google Wallet is the deciding factor for the user’s country. But which option? Home address is set to NZ, phone number is set to NZ – I have even added a validated NZ credit card for good measure!
  • Google Mail keeps a phone number, timezone and country setting… all of which are set to NZ.
  • Google+ keeps its own profile information, all of which is set to NZ.
  • I’m sure there’s probably even more!

To make things even more annoying, I can’t even tell if making a change helps or not. How do I tell if the phone is *reading* the new settings? I’ve done basic stuff like re-syncing Google, but have also gone and cleared the Google Play service cache and saved settings on the phone, to no avail.

It’s a bit depressing when even Apple, the poster child for lock down and DRM makes region changing as simple as signing into iTunes with a new country account and installing an application that way.

Restricting and limiting content by geographic regions makes no sense in a modern global world and it’s just sad to see Google continuing to push this outdated model instead of making a stand and adopting 21st century approaches.

If anyone knows how to fix Google to see the correct country it would be appreciated… if you know how to get Google to fix this bullshit and let us install the same applications globally, even better.