Tag Archives: geek

Anything IT related (which is most things I say) :-)

acpid trickiness

Ran into an issue last night with one of my KVM VMs not registering a shutdown command from the host server.

This typically happens because the guest isn’t listening to (or is configured to ignore) ACPI power “button” presses, so the guest never gets told that it should shut down.
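
For reference, this ACPI “button press” is exactly what virsh sends by default when you ask it to shut down a KVM guest, which makes it easy to test from the host (the guest name “centos5” here is just an example):

# virsh shutdown centos5
 Domain centos5 is being shutdown

If the guest isn’t listening, the command returns happily but the VM carries on running regardless.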

In the case of my CentOS (RHEL) 5 VM, the acpid daemon wasn’t installed/running so the ACPI events were being ignored and the VM would just stay running. :-(

To install, start and configure to run at boot:

# yum install -y acpid
# /etc/init.d/acpid start
# chkconfig --level 345 acpid on
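
Under the hood, acpid maps events to actions via the files in /etc/acpi/events/ – the file name below is only an example (distributions vary on this), but a minimal power button handler looks along these lines:

# cat /etc/acpi/events/power.conf
 event=button/power.*
 action=/sbin/shutdown -h now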

If acpid wasn’t originally running, it appears that the HAL daemon can grab control of the /proc/acpi/event file, and you may end up with the following error upon starting acpid:

Starting acpi daemon: acpid: can't open /proc/acpi/event: Device or resource busy

The reason can quickly be established with ps aux:

[root@basestar ~]# ps aux | grep acpi
root        17  0.0  0.0      0     0 ?        S<   03:16   0:00 [kacpid]
68        2121  0.0  0.3   2108   812 ?        S    03:18   0:00 hald-addon-acpi: listening on acpi kernel interface /proc/acpi/event
root      3916  0.0  0.2   5136   704 pts/0    S+   03:24   0:00 grep acpi

Turns out HAL grabs the proc file for itself if acpid isn’t running, but if acpid is running, HAL will talk to acpid to get its information instead. This would self-correct on a reboot, but we can just do:

# /etc/init.d/haldaemon stop
# /etc/init.d/acpid start
# /etc/init.d/haldaemon start

And sorted:

[root@basestar ~]# ps aux | grep acpi
root        17  0.0  0.0      0     0 ?        S<   03:16   0:00 [kacpid]
root      3985  0.0  0.2   1760   544 ?        Ss   03:24   0:00 /usr/sbin/acpid
68        4014  0.0  0.3   2108   808 ?        S    03:24   0:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.socket
root     16500  0.0  0.2   5136   704 pts/0    S+   13:24   0:00 grep acpi

 

A tale of two route controllers

Ever since I built a Linux 3.2.0 kernel for my Debian Stable laptop to take advantage of some of the newer kernel features, I have been experiencing occasional short periods of disconnect/reconnect on the Wi-Fi network.

This wasn’t happening heaps (maybe a couple of times a day), but it was starting to get annoying, so I decided to sort it out properly and do a kernel driver and microcode update for my Intel Centrino Wireless-N 1000 card.

The firmware/microcode update was easy enough, simply a case of downloading the latest code from Intel and installing into /lib/firmware/ – the kernel driver does the rest, finding it and loading it into the Wi-Fi card at boot time.

Next step was building a new kernel for my machine. I went through and tuned the module selection very carefully, tossing out all the hardware my laptop will never use, as I was getting sick of wasting disk space on the billion+ device modules in Linux these days.

After finding that my initial kernel lacked support for my video card (turns out the Lenovo X201i laptops still use AGP-based i915 cards – I was assuming PCIe), I got a working kernel up and running.

Except that my Wi-Fi stability problem was worse than ever – instead of losing connectivity every few hours, it was now doing so every few minutes. :-(

The logs weren’t particularly helpful – NetworkManager likes to give reason numbers, but I couldn’t easily find a documented explanation of them (maybe I’m looking in the wrong place).

19:44:36 NetworkManager[1650]: <info> (wlan0): device state change: 8 -> 9 (reason 5)
19:44:36 NetworkManager[1650]: <warn> Activation (wlan0) failed for access point (b201)
19:44:36 NetworkManager[1650]: <warn> Activation (wlan0) failed.
19:44:36 NetworkManager[1650]: <info> (wlan0): device state change: 9 -> 3 (reason 0)
19:44:36 NetworkManager[1650]: <info> (wlan0): deactivating device (reason: 0).
19:44:36 NetworkManager[1650]: <info> (wlan0): canceled DHCP transaction, DHCP client pid 3354
19:44:36 kernel: [  391.070772] wlan0: deauthenticating from 00:0c:42:67:8b:bc by local choice (reason=3)
19:44:36 kernel: [  391.185461] wlan0: moving STA 00:0c:42:67:8b:bc to state 2
19:44:36 kernel: [  391.185466] wlan0: moving STA 00:0c:42:67:8b:bc to state 1
19:44:36 kernel: [  391.185470] wlan0: moving STA 00:0c:42:67:8b:bc to state 0
19:44:36 wpa_supplicant[1682]: CTRL-EVENT-DISCONNECTED - Disconnect event - remove keys
19:44:36 NetworkManager[1650]: <error> [1337240676.376011] [nm-system.c:1229] check_one_route(): (wlan0): \
         error -34 returned from rtnl_route_del(): Netlink Error (errno = Numerical result out of range)
19:44:36 kernel: [  391.233344] cfg80211: Calling CRDA to update world regulatory domain
19:44:36 avahi-daemon[1633]: Withdrawing address record for 192.168.1.11 on wlan0.
19:44:36 avahi-daemon[1633]: Leaving mDNS multicast group on interface wlan0.IPv4 with address 192.168.1.11.
19:44:36 avahi-daemon[1633]: Interface wlan0.IPv4 no longer relevant for mDNS.
19:44:36 avahi-daemon[1633]: Withdrawing address record for 2407:1000:1003:99:226:c7ff:fe66:b822 on wlan0.
19:44:36 avahi-daemon[1633]: Leaving mDNS multicast group on interface wlan0.IPv6 with address 2407:1000:1003:99:226:c7ff:fe66:b822.
19:44:36 NetworkManager[1650]: <info> (wlan0): writing resolv.conf to /sbin/resolvconf
19:44:36 avahi-daemon[1633]: Joining mDNS multicast group on interface wlan0.IPv6 with address fe80::226:c7ff:fe66:b822.
19:44:36 avahi-daemon[1633]: Registering new address record for fe80::226:c7ff:fe66:b822 on wlan0.*.

So I proceeded to debug:

  1. Cursed and wished my 300m spool of Cat6 ethernet wasn’t in Wellington.
  2. Rolled back the microcode update – my initial thought was that the new code was making the card unstable and the result was the card dropping the connection and NetworkManager doing the clean up.
  3. Did a full power down to make sure that the microcode wasn’t remaining active on the card across reboots (had this problem with a dodgy GPU once).
  4. Verdict: Microcode upgrade was OK, must be something else.
  5. Upgraded NetworkManager from 0.8.1 to 0.8.4 from Debian Backports – 0.8.1 isn’t too recent, was tempted to try 0.9 series but would have required a lot more backporting work.
  6. Verdict: Appears not to be a NetworkManager issue in the 0.8 series – maybe something fixed in 0.9 or later?
  7. Upgraded wpasupplicant from 0.6.10 to 1.0 by manual backport from unstable – the activation error made me consider it might have been a bug with newer kernels & wpasupplicant’s AP negotiation.
  8. Verdict: No change to the issue.
  9. Built a Linux 3.3 kernel with the older less-crashy 3.2 iwlwifi driver to see if it was driver specific, or otherwise-kernel related.
  10. Verdict: Same issue continued to occur – rolling back the driver version in fact made no change, so something about the 3.3 kernel itself was the problem.
  11. Got suspicious about NetworkManager – either it or the kernel had to be at fault. One possibility was some weird API breakage due to the age gap between the software versions in use. The kernel is *usually* pretty solid, and something like Wi-Fi drivers dropping every couple of minutes would be a pretty serious bug to get through, so I looked through the logs to see if I could get anything more useful out of NetworkManager.
  12. Spotted a kernel error “ICMPv6 RA: ndisc_router_discovery() failed to add default route.”. This error tended to occur shortly before any Wi-Fi disconnection, but not immediately so.
  13. Found an entry in Red Hat’s bugzilla.
  14. And then the upstream bug fix from 19th April.

Turns out that the Linux 3.3 kernel and NetworkManager fight over which one is going to control the default route for each router-advertised link – the kernel adds one, NetworkManager removes it, and then the kernel gets upset and drops all router advertisements.

In hindsight I should have spotted it sooner, but I had initially discarded the RA message as being related, since the disconnection often didn’t happen till a minute or two after the log entry occurred – eg:

19:51:40 kernel: [  814.274903] ICMPv6 RA: ndisc_router_discovery() failed to add default route.
19:52:47 NetworkManager[1650]: <info> (wlan0): device state change: 8 -> 9 (reason 5)

What’s interesting about this bug is that at first reading it explains a loss of IPv6 connectivity perfectly – however it doesn’t explain why IPv4 or the Wi-Fi connection itself was impacted.

The reason this happened is that NetworkManager was set to have IPv6 as a requirement for that connection to be established – in the event of IPv6 not working, NetworkManager would consider the interface to be down, even if IPv4 was up.

There is a good reason for this, which the developers detailed on their (excellently written) blog: by having NetworkManager check for IPv6, applications can be written smarter, to better understand their level of connectivity.

For users of the NetworkManager 0.9 series, there’s a patch already committed which you can grab here and I would expect the next NetworkManager update will have this fix.

If you’re on the NetworkManager 0.8 series, this patch won’t apply cleanly – I might make some time to go and backport it, but you can work around it for now by using the Ignore method, so that NetworkManager does nothing and leaves it up to the Linux kernel in the background to negotiate IPv6 addressing.
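
For connections stored via NetworkManager’s keyfile plugin, that’s a one-line change in the connection file – a sketch, using a hypothetical file under /etc/NetworkManager/system-connections/ (connections managed through the GUI editor can be switched to Ignore in the IPv6 settings tab instead, as per the screenshot below):

 [ipv6]
 method=ignore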

Breaking vs Working Network Manager Settings

Of course if you’re not connecting to any IPv6 capable networks, you don’t have anything to worry about (other than the fact you’re still stuck in the 20th century).

 

Initially I was a bit annoyed at NetworkManager for being so silly as to drop the whole interface when just one of the two networking stacks was broken. However, after thinking about it for a bit, it does make some sense why it chose that behavior – many interface issues can be fixed by reconnecting: maybe the AP got rebooted, maybe the laptop just moved between two of them, etc – a reconnect can solve many of these.

But a smarter approach would be to determine whether network issues are layer 2 or layer 3 – if it’s just a layer 3 issue, then there’s little need to drop the Wi-Fi connection itself; instead, attempt to re-establish IPv4 or IPv6 connectivity where appropriate, and if unable to do so, use notifications to tell the user that “IPv6 connectivity is experiencing a problem, some hosts and services may be unreachable”.

It’s actually something that Windows does semi-OK – it figures out roughly how borked a user’s connection is and then does a balloon popup stating that there’s limited connectivity, an IP conflict, or some other sometimes-helpful message.

This may be better in newer versions of NetworkManager, I’ll have to have a play with a more recent release and see.

Shared data at last!

New Zealand is always pretty pricey for telecommunications, particularly mobile services including data – something I’ve mentioned before when detailing my move from Vodafone to 2degrees mobile.

Whilst data has slowly been getting cheaper, there’s been an emerging problem of people owning multiple 3G-capable devices – such as a mobile phone, tablet and laptop – and wanting to get them all online, but not wanting to pay for an expensive data plan for each device.

I personally ended up just using tethering on my phone to connect my laptop, even though the laptop has a built-in 3G modem – maybe once-a-week usage simply wasn’t enough to justify the cost of a separate plan, particularly when I only need a few hundred MB at most.

However, 2degrees have just announced new shared data plans, where you pay an extra $5 a month, plus $1 per device, to be able to share the data plan from another account (for up to 5 devices).

This is pretty compelling – I can go buy a bunch of prepaid SIMs for my laptop, spare phones and USB 3G stick, and be able to use them all with my shared data at a fraction of the cost of doing so on Telecom or Vodafone, who are going to need to up their game if they want to retain multi-device users.

2degrees certainly aren’t sitting still – in the past week they’ve announced both this shared data service and their Touch2Pay arrangement with Snapper, to allow the use of smartphones with NFC for payment on the Snapper network (used for Wellington and Auckland buses). The future is looking bright for them.

Fixing Blogging

I’m finding an increasing number of friends and people using services like Tumblr or Google Plus as blogging services, or at least as a place to make posts that are more detailed and in-depth than typical micro-blogging (aka Twitter/Facebook).

The problem with both these services is that they deny interaction from external users who aren’t registered with their service.

With traditional blogging platforms such as WordPress, Blogger, or other custom-developed blogs, any visitor to the blog could read it and post comments – the interfaces would vary, the ease of posting would vary and the method of validating posts would vary, but 99% of the time you would still be able to post comments and engage with the author.

This has not been the case with social networks to date – platforms like Twitter or Facebook require a user to be logged in to communicate with others – however this tends to work OK, since they’re mostly used for person-to-person messages and broadcasting, rather than detailed posts you will send to users outside of those networks (after all, 140-char tweets aren’t exactly where you’ll debate things of key meaning).

The real issue starts with half-blog, half-microblog services such as Tumblr and Google Plus, which users have started to use for anything from cat pictures to detailed Linux kernel posts, turning these tools into de-facto blogging platforms, but without the freedom for outsiders to post comments and engage in conversation.

 

Tumblr is one of the worst networks, as it’s very much designed as a glorified replacement for chain email forwards – you post some text or some pictures, all your friends “reblog” your page if they like it, and users all pat themselves on the back at how witty and original they all are.

But to make a comment, one must reblog the post, add a comment and have it end up in the pages-long list of reblog and like statements at the bottom of the post. And if the original poster wants to comment on that, they have to reblog it in turn. :-/

Yo dawg, we heard you like to reblog your reblogging.

The issue is that more people are starting to use it for more than just funny cat pictures and treat it as a replacement to blogging, which makes for a terrible time engaging with anyone. I have friends who use the service to post updates about their lives, but I can’t engage back – makes me feel like some kind of outcast stalker peering through the windows at them.

And even if I was on Tumblr, I’d actually want to be able to comment on things without reblogging them – nobody else cares if Jane had a baby, but I’d like to say “Congrats Jane, you look a lot less fat now the fork()ed process is out” to let my friend know I care.

Considering most Tumblr users are going to use Facebook or Twitter as well, they might as well use the image and short statement posting features of those networks and instead use an actual blog for actual content. Really the fault is due to PEBKAC – users using a bad service in the wrong way.

 

Google Plus is a bit better than Tumblr, in the respect that it actually has expected functionality like posts you can comment on, however it lacks the ability for outsiders to post comments and engage with the author – Google has been pretty persistent with trying to get people to sign up for an account, so it’s to be expected somewhat.

I’ve seen a lot of uptake with Google Plus by developers and geeks, seemingly because they don’t want the commitment of actually using a blog for detailed posts, but want somewhere to post lengthy bits of text.

Linus Torvalds is one particular user whom I might want to follow on Google Plus, but there’s not even RSS if you want to get updates on new posts! (To get RSS, you’d have to use external third-party services.)

Tumblr at least has RSS so I can still use it in my reader like everything else, even if I can’t reply to the author….

Follow Linus! Teenage fanboy Jethro squeee!

And of course with no ability for outsiders to post comments, I can’t send Linus a comment requesting his hand in marriage after he merges a kernel bug fix for my laptop. :-(

 

So with all these issues, why are users adopting these services? After all, there are thousands of free blogging services, including several well-known and very good ones – all better technical options.

I think it’s a combination of issues:

  • Users got overwhelmed by RSS – we followed everything we loved, then got scared by the 10,000 unread posts in our readers – and responded by simply not opening the reader for fear of the queue waiting for us. The social media style approaches used by Google+, Tumblr and of course Twitter and Facebook focus less on following every single post by users, and more on what’s happening here and now – users don’t feel bad if they miss reading 1,000 posts overnight, they just go on to the next.
  • Users love copying. The MPAA & RIAA love this fact about humans, we love to copy and share stuff with others. Blogging culture tends to frown on this, but Tumblr’s reblogging style of use makes it more acceptable and maintains a credit trail.
  • Less commitment – if I started posting pictures of funny cats or one-paragraph posts on this blog, it wouldn’t be doing it justice or up to the level of quality readers expect. However on social-network-based services this is OK – there’s no expectation of a certain level of presentation and effort in a post. A funny cat picture followed by a post raging about why GNU Hurd will always be better than BSD is acceptable – on a blog, you’d drop the funny cat and be expected to write a well-detailed post explaining your reasoning. Another label would be that it’s “more casual” than conventional blogs.
  • Easier interactions with your readers (at least with Google+) – there are no standards in blogging for handling notifications to users about changes to your blog or replies to comments. Even WordPress, one of the most popular platforms, doesn’t provide native email notifications for comment replies.
  • I noticed a major improvement in the level of interaction between myself and my readers after adding the Subscribe to Comments Reloaded plugin to this site, using email notifications to tell users about replies to my blog posts. And considering how slack many people are with checking their email, I do wonder how much better it would be if I added support for notifications of new posts and comment replies via Twitter or Facebook.
  • Conventional blogs tend to take a bit more effort to post comments, some go overboard with captcha input fields that take 10 attempts or painful comment validation. I’ve tried to keep mine simple with basic fields and dealing with spam using Akismet rather than captcha (which has worked very well for me).

In my opinion the biggest issue is the communication, notification and interaction one as noted above. I don’t believe we can fix the cultural side of users, such as the crap they post or the inability to actually make the effort to read their RSS, but we can go some way towards improving the technology to reduce/eliminate some of the pain points and encourage use of the services.

There have been some attempts to address these issues already:

  • Linkback techniques such as Pingback address the issue of finding out who’s linking to your blog (although I turned this off as I found it really spammy and I get that information out of awstats anyway).
  • RSS handles getting updates of new posts on a polling basis and smarter RSS readers offer better filtering/grouping/etc.
  • Email notifications for blog comments and updates.

But it’s not good enough yet – what I’d actually like to see would be:

  • Improvement of linkback techniques to spam pages less, potentially with the addition of some AI logic to determine whether the linkback was just “check out this cool post!” or some actual useful content that readers of your post would like to read (such as a rebuttal).
  • Smarter RSS readers that act more like social network feeds, to give users who want more of a “live stream” feel what they want.
  • Live commenting technology – not all users have push email, so email notifications kind of suck for many users. A better solution would be to use the existing XMPP standard to send notifications to the user’s XMPP server (anyone using Gmail already has an XMPP service and numerous geeks run their own – like me ;-), so the user gets a chat message pop up. If the message format was standardized, the IM client could recognize it as a blog comment reply and hand off to the installed RSS reader for a better UX – or fall back to posting text with a link to the reply, for support with any standard XMPP client (see the proof-of-concept sketch after this list).
  • (I did see that there is an outdated XMPP plugin for WordPress, as well as some commercial live-commenting packages that hook into social networks, but I really want a proper open source solution that does everything in one plugin, so there’s a more seamless UX – rather than having 20 checkboxes for which method the user would like notifications via.)
  • Whilst mentioning XMPP, we could even consider replacing RSS with XMPP-based push notifications – blog servers sending out a push message when they get an update, rather than readers polling services. The advantage is near-instant updates of new posts, and potentially less server load from not having thousands of wasted polls when there hasn’t been any update to fetch.
  • Comment reply via notification support. If you send someone an XMPP IM, email, tweet, virtual sheep or whatever to alert to a comment or blog post, they should be able to reply via that native medium and have the blog server interpret, validate and integrate that reply into the page.
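
To give an idea of how simple the notification side could be, here’s a rough proof-of-concept using the existing sendxmpp command line tool – the account details and URL are placeholders, and a real plugin would speak XMPP natively rather than shelling out:

# echo "New reply to your comment on 'Fixing Blogging': http://example.com/?p=123#comment-42" | \
  sendxmpp -u blogbot -j xmpp.example.com -p examplepassword reader@example.com

Everything beyond that – standardised message formats, replying via IM, etc – is where the real work lies.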

My hope is that with these upgrades, blogging platforms will extend themselves to be better placed for holding up against social networking sites, making it easier to have detailed conversations and long running threads with readers and authors.

Moving to a new generation of communication platform built around the existing blogging platforms would be as much of an improvement for real-time social responsiveness as shifting from email to Twitter was – and hopefully the uptake in real-time communications will bring more users back to decentralised, open and varied platforms.

I’m tempted to give this a go by building a WordPress plugin to provide unified notifications using XMPP / Email / Social Media, but it’ll depend on time (lol who has that??) and I haven’t done much with WordPress’s codebase before. If you know of something existing, I would certainly be interested to read about it and I’ll be taking a look at options to build upon.

Half a Terabyte

With companies in Australia offering 1TB plans, us New Zealanders have been getting pretty jealous of only having 100GB plans for the same money.

Except that a couple of days ago, my ISP Snap! upgraded everyone’s data allowance by at least 4x… taking me from a 105GB plan to a 555GB monthly plan at no additional charge.

Was a great surprise when I logged into my account, having only previously skimmed the announcement email which went into the mental TL;DR basket.

Good thing I got upgraded – used almost 50GB in 2 days :-/

I’m currently paying around $120-130 per month for half a terabyte on a naked DSL line and to quote 4chan, it “feels good man”.

Of course it’s only DSL speed – although if I was planning to stick around long term at my apartment, I’d look into VDSL options which are available in some parts of NZ. And the first Ultra Fast Broadband fibre is starting to get laid in NZ so the future is bright.

For me personally, once I have 10mbit or so, the speed becomes less important – what matters is the latency and the amount of data allocated.

I suspect Snap!’s move is in response to other mid-sized providers launching new plans, such as Slingshot’s $90 “unlimited” plan, or Orcon offering up to 1TB for $200 per month on their new Genius service.

TelstraClear are going to need to up their game – for the same price as my half terabyte, I can only get 100GB of data, although supposedly at 100mbit/10mbit speeds (theoretically, since when they previously offered the 25mbit plan I couldn’t get anywhere near that speed to two different NZ data centers…)

Even with its faults, TelstraClear’s cable network still blows away DSL for latency and performance if you’re lucky enough to be in the right regions – they should press this advantage, bring the data caps up to match the competition and push the speed advantage to secure customers.

Meanwhile Telecom NZ still offers internet plans with 2GB data caps, and for more than I’m paying for 555GB, I’d get only 100GB. Not the greatest deal around, guys… :-/ (Yes, I realize that includes a phone – that has no value to me as a mid-twenties landline-hating, cellphone-loving individual.)

DAViCal 1.0.2 on RHEL 5 & 6

To follow up on my previous post about DAViCal, I’ve built and published RPMs for DAViCal itself and the php-awl dependency.

These are based off the spec files provided by the project, tweaked somewhat to be more suitable for RHEL 5 & 6.

 

RHEL 5 & PostgreSQL 8.1 Note

Whilst DAViCal is intended to (and in normal operation, does) work with PostgreSQL 8.1 or later, this version is too old for the LDAP authentication module, which uses some PostgreSQL 8.4-era queries.

Fortunately RHEL & CentOS now ship with both PostgreSQL 8.1 and PostgreSQL 8.4 available, so you can fix the issue by installing with:

# yum install davical postgresql84-server

 

RHEL 5 & 6 Installation Instructions

These instructions assume you have configured the Amberdms RHEL 5 “amberdms-os” repository at minimum – or you can go and pull the specific RPM files you want – php-awl and davical – and add them to your own repository.

Once the repositories are setup, simply install with:

# yum install davical

DAViCal uses PostgreSQL – if this is a new/first PostgreSQL installation, you will need to start it and possibly initialise the DB:

# service postgresql start
 /var/lib/pgsql/data is missing. Use "service postgresql initdb" to
 initialize the cluster first.     [FAILED]
# service postgresql initdb
 Initializing database:    [  OK  ]
# service postgresql start
 Starting postgresql service:     [  OK  ]

We need to edit the PostgreSQL user authentication configuration to allow local-only password-less access for the DAViCal application. Optionally you can configure MD5, ident or other desired methods. Add the two lines below to the configuration file, above any existing lines.

# vi /var/lib/pgsql/data/pg_hba.conf

 # trust davical
 local   davical davical_app     trust
 local   davical davical_dba     trust

Restart PostgreSQL for the changes to take effect:

# service postgresql restart

Install the database:

# cd /tmp/
# su postgres -c /usr/share/davical/dba/create-database.sh

 Supported locales updated.
 Updated view: dav_principal.sql applied.
 CalDAV functions updated.
 RRULE functions updated.
 Database permissions updated.
 NOTE
 ====
 *  The password for the 'admin' user has been set to 'EXAMPLE' 

 Thanks for trying DAViCal!  Check in /usr/share/doc/davical/examples/ for
 some configuration examples.  For help, visit #davical on irc.oftc.net.

Adjust the access rules for Apache & restart it:

# vi /etc/httpd/conf.d/davical.conf
# service httpd restart

Test access at http://localhost/davical/ or whatever your appropriate server URL is. Any 403 errors probably suggest a fault with the /etc/httpd/conf.d/davical.conf IP ACL configuration.

 

RHEL 5 & 6 Upgrade Instructions

Using the packages I have provided, the DAViCal PostgreSQL DB will be updated on any new releases when installing newer RPMs.

This uses the /usr/share/davical/dba/update-davical-database script supplied with DAViCal and shouldn’t require any manual execution or options normally.
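
Should you ever need to run it by hand (eg to debug a failed upgrade), it can be invoked in the same manner as the create-database.sh script from the install instructions above – something along these lines, though check the DAViCal wiki for the current options:

# su postgres -c /usr/share/davical/dba/update-davical-database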

 

LDAP Authentication

To configure LDAP authentication, edit the configuration file and define the external authentication settings.

# vi /etc/davical/config.php

See the notes in the file about LDAP configuration or consult the quite reliable source of documentation at the DAViCal wiki.

You will also need to have php-ldap installed – it’s not one of the default package dependencies – if it’s missing, you will get this clear message on the login screen:

"drivers_ldap : function ldap_connect not defined, check your php_ldap module"

To install, run:

# yum install php-ldap
# service httpd restart

If authentication still fails to work, try the following (there’s also a quick LDAP connectivity check sketched after this list):

  1. Check the version of PostgreSQL used – must be 8.4 or later, not 8.1, as per my note at the start of this document.
  2. Check Apache error logs (typically /var/log/httpd/error_log)
  3. Check the LDAP server logs
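
It can also help to prove that the LDAP server and bind credentials work independently of DAViCal, using ldapsearch from the openldap-clients package – every host and DN value below is an example to adapt to your own directory:

# ldapsearch -x -H ldap://ldap.example.com \
    -D "cn=davical,ou=services,dc=example,dc=com" -w examplepassword \
    -b "ou=people,dc=example,dc=com" "(uid=exampleuser)"

If that fails, the problem lies with the directory or the credentials rather than with DAViCal itself.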

 

DAViCal, awkward name, great features

A reoccurring theme of this blog is that I love to be able to use open standards and open source for storing and accessing my information – biggest example is of course IMAP for email, but I also use tools such as Mozilla Sync Server for self-reliant synchronization and backup of client device information, without using external cloud providers.

I’ve been a user of Evolution for almost a decade now – sometimes criticized as the “Outlook of Linux”, Evolution provides mail, calendaring, contacts and todo lists to the GNOME desktop, with a pretty large but sometimes slightly buggy feature set. For me personally, it’s always done a great job and it’s my key business productivity tool.

I moved all my mail onto an IMAP server years ago, which makes it easy to shift clients if I ever need to – in my case, pretty much just needing to access mail from both my laptop and smart phone.

However this hasn’t been the case for other key data such as calendaring and contacts. A few years ago, the open source calendaring solutions available weren’t that well developed, and many clients suffered limitations such as read-only functionality.

Thankfully this has been changing – most clients (*glares angrily at Microsoft Outlook*) now support CalDAV and CardDAV quite reliably, which gives us an open standard that works across different programs, platforms and device types.

  • CalDAV is an open standard for the exchange of calendaring and task/todo/memo information between a client and a server.
  • CardDAV is an open standard for the exchange of contact/address book information between a client and a server.

These two standards have a number of implementations, both open source and proprietary. Of note are Apple Calendar Server, which is Apple’s open source implementation, and DAViCal, an open source LAMP-based server solution that is becoming quite popular.

I’ve used both solutions – my employer runs an Apple Calendar Server after getting fed up at not having free/busy between engineers. Whilst we ended up running a MacOS server, the Linux ports have improved and there are resources for setting it up on a Linux or even BSD host.

Apple Calendar Server works reasonably well with Evolution – I never have any issue booking events – however Evolution appears unable to accept or deny meeting requests, forcing me to go to the calendar server’s web interface, which is actually pretty horrific.

I decided I wouldn’t use it for my own personal calendaring – even if I went to the effort of porting it onto my Linux servers, it wouldn’t really be the solution I ideally want, as it lacks a lot of features and isn’t as easy to configure as other Linux services.

Instead I had a look at DAViCal. It’s a feature-packed calendaring and contacts application developed primarily by Morphoss in Wellington NZ, started by Andrew McMillan of ex-Catalyst IT fame.

Despite having an annoyingly tricky name to type (you try typing it for the 100th time at 3am without typoing on the capitalization!!), the software itself appears reliable and worked across a number of devices when I ran tests.

It’s not perfect – I have some issues with the user interface design. Whilst very functional and effective, it’s not that intuitive to a new user, exposing far too many options at the beginning. Ideally it would have a simple/advanced mode, so a user who just wants to add user calendars and do basic stuff can do so, then dig into more detailed ACLs, tokens, shared calendars, etc. as needed.

Naturally it’s open source, so I should stop complaining and hack up some code to demonstrate what I think might be better. Maybe if people would stop stealing my car I’d have time to get something done. :-/

Main Screen Turn On! (Maybe some more 1-2-3 clear setup flows here would be nice, the wall of text is kind of offputting for visual people like myself).

Options! All the options!

The web-based interface is only for administration – there isn’t a web-based calendar app provided with DAViCal; instead, choose any CalDAV client you wish to use with it, whether web-based or client-side.

I haven’t given DAViCal’s feature set a full workout yet – at this stage I’ve just set up my personal calendar, contacts and todo list on both Evolution and my Android ICS phone, but haven’t touched meeting requests, shared calendars and free/busy information.

Partly my testing is a bit limited since I’m only running Evolution 2.30.3 (Debian Stable) which is a little outdated and it looks like there’s some functionality missing/broken that might not be an issue any more.

On the mobile side, I’m using “aCal”, an open source Android application written by the DAViCal developer, providing CalDAV calendar, todo list and read-only contact/address book synchronization.

This now means I can add, edit and delete calendar and task entries on either my Android phone or my Linux laptop via Evolution and have it propagate to the other device – although unfortunately this is based on polling, rather than push (looks like push is theoretically possible via an extension to the standard, and works with iCal).

Tasks & calendar entries in a bright sunny UI

I can also get read-only copies of all my contact information from Evolution synced through to my phone, but sadly there isn’t support for editing contacts on the Android phone just yet.

I did also consider using LDAP for my address book entries, but CardDAV looks like a better designed solution, it’s very rare that I don’t see “LDAP” and “headache” mentioned in the same sentence, and this comes from someone maintaining and supporting LDAP enterprise environments…

Essentially the main problem with LDAP is that there isn’t an exact standard for address entries, so what works for one client might not work 100% for another, along with there being a limited selection of decent applications for actually managing LDAP address databases.

Also some clients treat LDAP assuming it’s going to be a million+ record store and expose a different UI compared to that of smaller address books, which harms the user experience (*glares at Evolution*).

aCal & Android ICS address book integration - note the uneditable edit screen on the right, read-only for now :-(

The other main issue with aCal is that it doesn’t sync with the native OS calendar program, but instead provides its own. Digging through the documentation and mailing list, this appears to be due to the native application lacking support for some of the functionality needed for a proper CalDAV implementation, so a sync solution would leave certain features missing – although I’d still like the option.

Of course these are limitations of aCal, not DAViCal or the standards themselves – there are some other CalDAV & CardDAV sync programs available in the Android market under non-open licenses, which you have the option of trying.

The nice thing about using standards is that you can have multiple vendors competing to make the best product/tool for their customer’s needs, not simply using lock-in to maintain/force a customer base. :-)

Overall DAViCal seems really nice and in my testing has been quite reliable – I’m now moving on to more rigorous testing and am in the process of migrating my calendar and contacts information into it; once I start using it daily in the real world, the true testing begins.

Keen to take a look at what options I have around exposing some information publicly, e.g. sharing schedule free/busy with friends on different servers.

Leosticks are a gateway drug

At linux.conf.au earlier this year, the guys behind Freetronics gave every attendee a free Leostick Arduino-compatible board.

As I predicted at the time, this quickly became the gateway drug – having been given an awesome 8-bit processor that can run off the USB port and can provide any possibility of input/output with both digital and analogue hardware, it was inevitable that I would want to actually acquire some hardware to connect to it!

Beware kids, this is what crack looks like.

My background in actual electronics hasn’t been great – my parents kindly got me a Dick Smith starter kit when I was much younger (remember back in the day when DSE actually sold components? Now I feel old :-/) but I never quite managed to grasp all the concepts, and a few attempts since then haven’t been that successful.

Part of the issue for me is that I learn by doing and by having good resources to refer to – back then it wasn’t so easy, however with internet connectivity and thousands of companies selling components to consumers, offering tutorials and circuit design information, it’s never been easier.

Interestingly, I found it hard to get a really good “you’re a complete novice with no clue about any of this” guide, but the Arduino learning resources are very good at detailing how their digital circuits work, and with a bit of wikipediaing, they got me on the right track so far.

Also not having the right tools and components for the job is an issue, so I made a decision to get a proper range of components, tools, hookup wire and some Arduino units to make a few fun projects to learn how to make this stuff work.

I settled on 3 main projects:

  1. Temperature monitoring inside my home server – this is a whitebox machine so it doesn’t have too many sensors in good locations; I’d like to be able to monitor some of the major disk bays, fans, motherboard, etc.
  2. Out-of-band serial management and watchdog restart of my home server. This is more complex & ambitious, but all the components are there – with an RS232 to TTL conversion circuit I can read the server’s serial port from the Arduino, and use the Arduino and a transistor to control the reset header on the motherboard to power-restart if my slightly flaky CPU crashes again.
  3. Android controlled projects. This is a great one, since I have an abundance of older model Android phones available and would like a project that allows me to improve my C coding (Arduino) and to learn Java/Dalvik (Android). This ticks both boxes. ATM considering adding an Android phone to the Arduino server monitoring solution, or maybe hooking it into my car and using the Android phone as the display.

These cover a few main areas – to learn how to talk with 1-wire sensor devices, to learn how to use transistors to act as switches, to learn different forms of serial communication and to learn some new programming languages.

Having next to no electronics gear (a soldering iron, a breadboard and my general PC tools were about it), I went down the path of ordering a full set of different bits to make sure I had a good selection of tools and parts to make most circuits I want.

Ended up sourcing most of my electronic components (resistor packs, prototyping boards, hookup wire, general capacitors & ICs) from Mindkits in NZ, who also import a lot of Sparkfun stuff, giving them a pretty awesome range.

Whilst the Arduinos I ordered supply 5V and 3.3V, I grabbed a separate USB-powered supply kit for projects needing their own feed – much easier running off USB (of which I have an abundance of ports around) than adding yet-another-wallwart transformer. I haven’t tackled it yet, but I’m sure my soldering skills will be horrific and naturally worth blogging about in future to scare any competent electronics geek.

I also grabbed two Dallas 1-wire temperature sensors, which, whilst expensive compared to the analog options, are so damn simple to work with and can be daisy-chained. Freetronics sell a breakout board model all pre-assembled, but they’re pricey, and the sensors are so simple you can just wire them straight back to your Arduino circuit anyway.

Next I decided to order some regular size Arduinos from Freetronics – if I start wanting to make my own shields (expansion boards for the Arduinos), I’d need a regular sized unit rather than the ultrasmall Leostick.

Ended up getting the classic Arduino Eleven/Uno and one of the Arduino USB Droids, which provide a USB host port so they can be used with Android phones, for writing software that can interface with hardware.

After a bit of time, all my bits have arrived from AU and the US and now I’m all ready to go – planning to blog my progress as I get on with my electronics discovery – hopefully before long I’ll have some neat circuit designs up on here. :-)

Once I actually have a clue what I’m doing, I’ll probably go and prepare a useful resource on learning from scratch, to cover all the gaps that I found hard to fill, since learning this stuff opens up so many exciting projects once you get past the initial barrier.

Arduino Uno/Eleven making an LED blink. HIGH TECH STUFF ;-)

Push a button to make the LED blink! Sure you can do this with just a battery, switch and LED, but using a whole CPU to read the button state and switch on the LED is much geekier! ;-)

1-wire temperature sensors. Notably with a few more than one wire. ;-)

I’ll keep posting my adventures as I get further into the development of different designs, I expect this is going to become a fun new hobby that ties into my other two main interests – computers and things with blinky lights. :-)

Porting to 2degrees

Having been a long-suffering victim of poor performance on Vodafone’s NZ data network and expensive pricing, I’ve now shifted to NZ’s third and youngest mobile provider, 2degrees.

Upgrade from 32k to 128k of SIM memory, woot! ;-)

There were two major incentives – firstly, unhappiness at Vodafone’s 3G data performance; secondly, the fact that my personal telecommunications expenses are around $350 per month (welcome to NZ, land of expensive comms) and I’m seeking to reduce these somewhat.

I was originally paying $59 a month for my Vodafone service – 120 mins, 250 SMS and 300MB data (although boosted to 3GB due to a grandfathered plan promotion). It was a pretty good deal when it came out – I signed onto the plan when the first Android phone in NZ launched (the HTC Magic), and good data plans for mobiles that didn’t cost a fortune were kind of a new thing.

With 2degrees, I’ve now dropped my bill down to $39 a month, which provides 220mins, 2500 SMS, 100MB data, plus an additional 1GB data bonus for the next 12 months.

There’s a bit of a loss on datacap size, down from Vodafone’s 3GB, but my smartphone and laptop use no more than 1GB all up when combined in regular use, so it’s not really going to impact me.

I also went and dropped the Telecom XT data SIM in my laptop – whilst convenient and bloody fast data, it wasn’t worth the cost for how often I need it – and I can’t really justify it when my phone can pair and share the 1.3GB of monthly data it has.

Number porting went very smoothly – after requesting the port online with 2degrees, I got a txt about 3 hrs later confirming it was complete. 2degrees even went to the effort of informing Vodafone and having them close my account which was handy.

It’s been going great since – so far I haven’t encountered any cell towers dropping ~90% of packet data without anybody at Vodafone noticing, and performance seems speedy and reliable.

In fact, the performance of the 2degrees network around Auckland actually beats my DSL at times, especially for the upload, which is pretty tragic. :-/

Sadly the results for 3G performance are sometimes better than my ADSL :-/

I haven’t gone on a rural trip since moving to 2degrees, but it should be just as good as I used to get with Vodafone, as 2degrees uses Vodafone for roaming when outside of their own network zones.

Their plans certainly seem popular – I’ve had at least 2 other friends move to 2degrees. Even if you want expensive smartphones, it’s often cheaper to buy the phone outright and use 2degrees’ no-term monthly plans than to sign with Telecom or Vodafone, due to the savings in plan costs over 24 months – not to mention the freedom and flexibility to change plans.

Introducing Smokegios

Having a reasonably large personal server environment of at least 10 key production VMs, along with many other non-critical but still important machines, a good monitoring system is key.

I currently use a trio of popular open source applications: Nagios (for service & host alerting), Munin (for resource graphing) and Smokeping (for latency response graphs).

Smokeping and Nagios are particularly popular, it’s rare to find a network or *NIX orientated organization that doesn’t have one or both of these utilities installed.

There are other programs around that offer more “combined” UI experiences, such as Zabbix, OpenNMS and others, but I personally find that having 3 applications that each do a specific task really well is better than having one maybe-not-so-good application. But then again I’m a great believer in the UNIX philosophy. :-)

The downside of having these independent applications is that there’s not a lot of integration between them. Whilst it’s possible to link programs such as Munin & Nagios or Nagios & Smokeping to share some data from the probes & tests they make, there’s no integration of configuration between the components.

This means in order to add a new host to the monitoring, I need to add it to Nagios, then to Munin and then to Smokeping – and to remember to sync any changes across all 3 applications.

So this weekend I decided to write a new program called Smokegios.

TL;DR summary of Smokegios

This little utility checks the Nagios configuration for any changes on a regular cron-controlled basis. If any of the configuration has changed, it will parse the configuration and generate a suitable Smokeping configuration from it using the hostgroup structures and then reload Smokeping.
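
For example, a cron entry along these lines provides all the scheduling needed – the path and five-minute interval here are illustrative only, check the installation instructions on the project page for the specifics:

# cat /etc/cron.d/smokegios
 */5 * * * * root /usr/bin/smokegios

Since nothing happens unless the Nagios configuration has actually changed since the last run, a short interval is cheap.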

This allows fully autonomous management of the Smokeping configuration and no more issues about the Smokeping configuration getting neglected when administrators make changes to Nagios. :-D

Currently it’s quite a simplistic application, in that it only handles ICMP ping tests for hosts; however I’m intending to expand it in future with support for reading service & service group information, for services such as DNS, HTTP, SMTP, LDAP and more, to generate service latency graphs.

This is a brand new application – I’ve run a number of tests against my Nagios & Smokeping packages, but it’s always possible your environment will have some way to break it. If you find any issues, please let me know; keen to make this a useful tool for others.

To get started with Smokegios, visit the project page for all the details including installation instructions and links to the RPM repos.

If you’re using RHEL 5/6/derivatives, I have RPM packages for Smokegios, as well as the Smokeping 2.4 and 2.6 series, on the amberdms-custom and amberdms-os repositories.

It’s written in Perl 5 – not my favorite language, but it’s certainly well suited to these configuration-file manipulation tasks, and there was a handy Nagios::Object module courtesy of Duncan Ferguson that saved me writing a Nagios parser.

Let me know if you find it useful! :-)