Tag Archives: code

Any posts relating to software that I have developed, including contributions to other projects, but excluding any packaging work.

Twitter Auto Delete

Despite making a clean break from Twitter earlier this year, I’ve ended up back on it on a casual basis, mostly due to the number of my friends who will only chat, or are only reachable, via it. :-(

I decided that this time I’d like to treat Twitter more like an IRC chat room, ie a place to chat casually with friends, but not as a formal permanent record – so I made some tweaks to how I was using it:

  1. My primary interaction with Twitter is via PrplTwtr, a plugin for Pidgin, which makes Twitter act like any other chat room and avoids the invasive distraction of having Twitter permanently open in my browser. If friends @reply me or DM me, I get a new IM message notification, but otherwise I can happily ignore it.
  2. I wrote a small script that automatically deletes all my Twitter messages after 24 hours – this is enough time for me to chat comfortably with friends, but makes it hard for outsiders to data mine my feed, and means there’s much less of a permanent record for anyone to cache or link to long term.

It’s not a perfect setup: whilst it prevents someone from casually going back and seeing my history and engagements with others, it doesn’t stop someone recording my tweets over an extended period to build up their own data pool about me, and of course I have no way of knowing whether a deleted tweet really disappears from the pool of information that Twitter sells to data miners.

But it’s good enough that I can chat with friends and keep up-to-date with their lives without leaving a huge digital footprint for any randoms to trawl through.

There are some auto-deleter services around, but I didn’t trust any of them to not do malicious things with my account (eg spamming their presence), plus I wanted it to delete all my tweets *except* my blog post feed.

I found that there’s a pretty decent Twitter module for Python and decided to use this as an exercise to finally learn some proper Python, something I’ve somewhat avoided for lack of a good learning exercise.

The result is a simple Twitter auto-deleter script that is called by cron every 4 hours and deletes any tweets older than 24 hours – the basics are pretty simple really:

# query my user status list
mytimeline = api.GetUserTimeline(screen_name=user_name,count=query_quantity,include_rts=True)

for status in mytimeline:

    if re.match("^New Blog Post", status.text):
        #print "Blog post! No delete wanted"
        continue

    if status.created_at_in_seconds < cond_time_before:
        api.DestroyStatus(status.id)

        print "Deleting Tweet:"
        print "- Created At: " + status.created_at
        print "- Content: " + status.text
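
For reference, the snippet above relies on a handful of things defined earlier in the script – the API object, the account name, the query size and the cut-off timestamp. A minimal sketch of that setup, assuming the OAuth-based Api constructor from the python-twitter module and with obviously placeholder keys, looks roughly like this:

import re
import time
import twitter

# authenticate using the dev API key and access tokens from your Twitter account
api = twitter.Api(consumer_key="XXXXXXXX",
                  consumer_secret="XXXXXXXX",
                  access_token_key="XXXXXXXX",
                  access_token_secret="XXXXXXXX")

user_name      = "example_user"     # account to clean up
query_quantity = 100                # maximum number of tweets to fetch per execution

# anything created before this timestamp (24 hours ago) is fair game for deletion
cond_time_before = time.time() - (24 * 60 * 60)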

Note that with GetUserTimeline, you need to specify include_rts=True as an explicit option, so that it includes anything you’ve retweeted in the timeline returned.

Favorites are special wee critters and require a separate GetFavorites call. I don’t use Favorites, so I wanted the script to also remove any favorites created by accidental mis-clicks.
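
A rough sketch of that favorites clean-up follows – hedged, since the exact GetFavorites/DestroyFavorite signatures vary between python-twitter versions, so check the documentation for whichever version you have installed:

# favorites need their own query - I don't use them, so anything found here
# is an accidental mis-click; apply the same 24 hour window as the tweets
for favorite in api.GetFavorites():

    if favorite.created_at_in_seconds < cond_time_before:
        api.DestroyFavorite(favorite)

        print "Deleting Favorite:"
        print "- Created At: " + favorite.created_at
        print "- Content: " + favorite.text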

You can check out my source here – if you want to run it on your own server, you’ll need to use your account to set up a dev API key and access tokens etc. And you may want to adjust things like the deletion of favorites or retention of blog posts.

I’ve pondered turning this into a simple web-hosted service, so if you’re the sort of person who can’t run this script yourself but would like the ability to auto-delete your tweets, let me know and I’ll look into it if there’s enough interest.

I’m sure Twitter will probably kill off more and more of these API calls in future, but at the moment they’re exposing just enough logic to enable me to do this. :-)

Do note that if you run this on a big account, you will hit the maximum API call limit VERY quickly, hence the configurable query quantity limit to restrict how many tweets are loaded per execution – you could get away with several hundred every 60 minutes if you wanted to delete all your Twitter history as fast as possible without actually blowing away the account.

CIFS, IPv6 and RHEL 5

Unfortunately, with my recent project enabling IPv6 across my entire personal server environment, I’ve bumped into a number of annoying issues – nothing that isn’t fixable, but things that are generally frustrating and which just shouldn’t be issues.

Particular thanks goes to my many RHEL/CentOS 5 virtual machines, which lack some pretty key stuff such as:

  • IPv6 connection tracking preventing the ESTABLISHED,RELATED ip6tables rules from working.
  • Unexpected behavior of certain bootscript configuration options.
  • Lack of IPv6 support with CIFS (Samba/SMB) share mounting.
  • Some weirdness with Dovecot I still need to resolve.

(Personally, based on the number of headaches I’ve found with RHEL 5, my recommendation is to accelerate any plans to upgrade to RHEL 6 – or some other distribution – before deploying IPv6 in production.)

At the moment, CIFS IPv6 support on RHEL 5 & 6 has been causing me the most pain. My internal file server is dual stacked and has both A and AAAA DNS records – it’s a stock-standard CentOS 6 box running distribution-shipped Samba packages, it works perfectly from the server side, and modern IPv6 hosts have no issue mounting the shares via IPv6.

Very typical dual stack configuration:

# host fileserver.example.com 
fileserver.example.com has address 192.168.0.10
fileserver.example.com has IPv6 address 2001:0DB8::10

However, when I run the following legitimate and syntactically correct command to mount the CIFS share provided by the Samba server on other RHEL 5 hosts, it breaks with an error message typical of incorrect mount option syntax:

# mount -t cifs //fileserver.example.com/tmp /mnt/tmpshare -o user=nobody
mount: wrong fs type, bad option, bad superblock on //fileserver.example.com/tmp,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

Taking a look at the kernel log shows a non-descriptive error explanation:

kernel:  CIFS VFS: cifs_mount failed w/return code = -22

This isn’t particularly helpful, made more infuriating by the fact that I know the command syntax is correct and should be working perfectly fine.

Seeing as a number of things broke after switching on IPv6 across the entire network, I’ve become even more of a cynical bastard, and ran some tests using explicitly specified IPv6 and IPv4 addresses in the mount command.

I found that by passing the IPv6 address instead of the DNS name, you can produce a different error message which offers some insight:

kernel: CIFS: ip address too long

Huh. Looks like a textbook IPv6 support bug to me. (Even I have made this mistake in some older generation web apps that didn’t foresee long 128-bit addresses.)

In testing, I found that the following commands are all acceptable on a dual-stack network with a RHEL 5 host:

# mount -t cifs //192.168.0.10/tmp /mnt/tmpshare -o user=nobody
# mount -t cifs //fileserver.example.com/tmp /mnt/tmpshare -o user=nobody,ip=192.168.0.10

However, all ways of specifying IPv6 fail, as does relying purely on DNS resolution:

# mount -t cifs //2001:0DB8::10/tmp /mnt/tmpshare -o user=nobody
# mount -t cifs //fileserver.example.com/tmp /mnt/tmpshare -o user=nobody,ip=2001:0DB8::10
# mount -t cifs //fileserver.example.com/tmp /mnt/tmpshare -o user=nobody

No method of connecting via IPv6 would work, leaving stock RHEL 5 hosts only being able to work with CIFS shares via IPv4. :-(

Unfortunately this error is due to a known kernel bug in 2.6.18, which was fixed in 2.6.31, but sadly not backported to RHEL 5’s kernel (as of version 2.6.18-308.8.1.el5 anyway), leaving RHEL 5 users in a position where the stock OS is unable to mount CIFS shares on an IPv6 or dual-stacked network. :-(

The ideal solution would be to patch the kernel to resolve the issue – and in fact, if you are running a native IPv6-only network (not dual stacked), it would be the only option to get a working solution.

However, if you’re using RHEL, custom kernels typically aren’t that popular due to the impact they have on the vendor’s supportability guarantee for the platform, and the added headache of tracking and applying security updates, so another approach is needed.

The following methods will all work for stock RHEL/CentOS 5:

  • Use the ip=X mount option to override DNS resolution.
  • Add an entry to /etc/hosts (eg 192.168.0.10 fileserver.example.com) so the name only ever resolves to an IPv4 address.
  • Have a separate DNS entry that only has an A record for your file servers (ie //fileserverv4only.example.com/).
  • Disable IPv6 entirely (and suffer the scorn of your cooler IPv6-enabled friends).

These solutions all suck – manually fixed IPs aren’t great for long term supportability, additional DNS records are an extra management pain, and let’s not even begin to cover why disabling IPv6 entirely is wrong.

Of course RHEL 5 is a little outdated now, so I took a look at how RHEL 6 fared. On the plus side, it *can* mount IPv6 shares – all of the following mount commands are accepted without fault:

# mount -t cifs //192.168.0.10/tmp /mnt/tmpshare -o user=nobody
# mount -t cifs //2001:0DB8::10/tmp /mnt/tmpshare -o user=nobody
# mount -t cifs //fileserver.example.com/tmp /mnt/tmpshare -o user=nobody,ip=192.168.0.10
# mount -t cifs //fileserver.example.com/tmp /mnt/tmpshare -o user=nobody,ip=2001:0DB8::10

However, any mount of an IPv6 server using the DNS name will still fail, just as it does on RHEL 5:

# mount -t cifs //fileserver.example.com/tmp /mnt/tmpshare -o user=nobody

The solution is to install the “cifs-utils” package, which provides the /sbin/mount.cifs binary with smarter handling of shares – once installed, all of the mount command options will work fine on RHEL 6, including the standard DNS-based command we all know and love. :-D

I had always assumed that all Linux systems that could mount CIFS shares had the /sbin/mount.cifs binary installed, but it seems that’s not the case – the standard /bin/mount command can handle mounting CIFS using just the standard kernel mount() function.

However, when /bin/mount detects a /sbin/mount.FILESYSTEM binary, it will call that instead of calling the kernel mount() directly; these helper binaries can apply additional logic and handling to the mount command before passing it through to the Linux kernel.

For example, the following strace from a RHEL 5 host shows that /bin/mount checks for the existence of /sbin/mount.cifs, before going on to call the Linux kernel mount() directly with the provided arguments:

stat64("/sbin/mount.cifs", 0xbfc9dd20)  = -1 ENOENT (No such file or directory)
...
mount("//fileserver.example.com/tmp", "/mnt", "cifs", MS_MGC_VAL, "user=nobody,password=nobody") = -1 EINVAL (Invalid argument)

But a RHEL 6 host with cifs-utils installed provides /sbin/mount.cifs, which appears to do its own name resolution, establish connections to both the IPv4 and IPv6 sockets, and then decide which to use before instructing the kernel with the ip=X parameter.

stat64("/sbin/mount.cifs", {st_mode=S_IFREG|0755, st_size=29376, ...}) = 0
clone(Process 1666 attached
...
[pid  1666] mount("//fileserver.example.com/tmp/", ".", "cifs", 0, "ip=2001:0DB8::10,user=nobody,password=nobody") = 0

So I had an idea….. what if I could easily modify a version of cifs-utils to run on RHEL 5 dual-stack servers, yet only ever resolve DNS queries to IPv4 addresses to work around the kernel issue? :-D

Turns out you can – effectively I made the nastiest hack ever by simply tearing out the IPv6 name resolver. :-/

I’m going to hell for this, but damn, feels good man. ;-)

I wasn’t totally evil – I added an info level syslog notice about the IPv4 enforcement in case any poor admin ever gets puzzled by someone’s customised RHEL 5 box refusing to connect to CIFS shares over IPv6. That would be a bit too cruel. ;-)

The hack is pretty crude – it actually just breaks the IPv6 socket connection attempt so that it falls back to IPv4. It throws up a couple of errors in the logs, but doesn’t actually impact the mounting at all.

mount.cifs: Warning: Using specially patched cifs-utils to ignore IPv6 address resolution - enforcing IPv4 only!
kernel:  CIFS VFS: Error connecting to socket. Aborting operation
kernel:  CIFS VFS: cifs_mount failed w/return code = -111

But wait, there’s more! I have shiny cifs-utils i386/x86_64/SRPM packages with this evil hack available for download from the amberdms-os repository (or directly from the server here).

Naturally this is a bit of a kludge – don’t trust it for mission critical stuff, you ONLY need it for RHEL 5 (not RHEL 6), and I can’t guarantee it won’t eat all your data and bring upon the end times, etc, etc.

I’ve tested it on my devel systems and it seems like the nicest fix – sure, it won’t work for any hosts needing to run native IPv6-only, but by the time I come to drop IPv4 addressing entirely I will certainly have moved my last hosts from RHEL 5 to something a bit newer. :-)

Introducing Smokegios

With a reasonably large personal server environment of at least 10 key production VMs, along with many other non-critical but still important machines, a good monitoring system is essential.

I currently use a trio of popular open source applications: Nagios (for service & host alerting), Munin (for resource graphing) and Smokeping (for latency response graphs).

Smokeping and Nagios are particularly popular – it’s rare to find a network or *NIX orientated organization that doesn’t have one or both of these utilities installed.

There are other programs around that offer a more “combined” UI experience, such as Zabbix, OpenNMS and others, but I personally find that having 3 applications that each do their specific task really well is better than having one maybe not-so-good application. But then again I’m a great believer in the UNIX philosophy. :-)

The downside of having these independent applications is that there’s not a lot of integration between them. Whilst it’s possible to link programs such as Munin & Nagios or Nagios & Smokeping to share some data from the probes & tests they make, there’s no integration of configuration between the components.

This means in order to add a new host to the monitoring, I need to add it to Nagios, then to Munin and then to Smokeping – and to remember to sync any changes across all 3 applications.

So this weekend I decided to write a new program called Smokegios.

TL;DR summary of Smokegios

This little utility checks the Nagios configuration for any changes on a regular cron-controlled basis. If any of the configuration has changed, it will parse the configuration and generate a suitable Smokeping configuration from it using the hostgroup structures and then reload Smokeping.

This allows fully autonomous management of the Smokeping configuration, and means no more issues with the Smokeping configuration getting neglected when administrators make changes to Nagios. :-D

Currently it’s quite a simplistic application in that it only handles ICMP ping tests for hosts; however, I intend to expand it in future with support for reading service & service group information, so it can generate latency graphs for services such as DNS, HTTP, SMTP and LDAP.
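
To give a rough idea of the overall flow, here’s a simplified sketch of the logic in Python – note that the real Smokegios is written in Perl (more on that below) and parses the proper hostgroup structure rather than regexing host definitions, and the file paths and reload command here are just assumptions you’d adjust for your own install:

import hashlib
import re
import subprocess

# Paths and the reload command are assumptions - adjust for your own environment.
NAGIOS_CONF    = "/etc/nagios/objects/hosts.cfg"
SMOKEPING_CONF = "/etc/smokeping/targets-nagios.conf"
STATE_FILE     = "/var/cache/smokegios.md5"

def parse_nagios_hosts(path):
    """Naive extraction of host_name/address pairs from 'define host' blocks.
    (Smokegios itself uses Perl's Nagios-Object module and the hostgroup structure.)"""
    hosts = []
    config = open(path).read()
    for block in re.findall(r"define\s+host\s*\{(.*?)\}", config, re.S):
        name = re.search(r"\bhost_name\s+(\S+)", block)
        addr = re.search(r"\baddress\s+(\S+)", block)
        if name and addr:
            hosts.append((name.group(1), addr.group(1)))
    return hosts

def generate_smokeping_targets(hosts):
    """Write a Smokeping targets fragment with one ICMP (FPing) entry per host."""
    with open(SMOKEPING_CONF, "w") as out:
        out.write("+ NagiosHosts\nmenu = Nagios Hosts\ntitle = Hosts imported from Nagios\n\n")
        for name, addr in hosts:
            section = re.sub(r"\W", "_", name)   # Smokeping section names must be word characters
            out.write("++ %s\nmenu = %s\ntitle = %s\nhost = %s\n\n" % (section, name, name, addr))

# Only regenerate and reload Smokeping if the Nagios config has changed since the last cron run.
current = hashlib.md5(open(NAGIOS_CONF, "rb").read()).hexdigest()
try:
    previous = open(STATE_FILE).read().strip()
except IOError:
    previous = ""

if current != previous:
    generate_smokeping_targets(parse_nagios_hosts(NAGIOS_CONF))
    open(STATE_FILE, "w").write(current)
    subprocess.call(["service", "smokeping", "reload"])   # assumed reload mechanism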

This is a brand new application – I’ve run a number of tests against my Nagios & Smokeping packages, but it’s always possible your environment will have some way to break it. If you find any issues, please let me know; I’m keen to make this a useful tool for others.

To get started with Smokegios, visit the project page for all the details including installation instructions and links to the RPM repos.

If you’re using RHEL 5/6/derivatives, I have RPM packages for Smokegios as well as the Smokeping 2.4 and 2.6 series on the amberdms-custom and amberdms-os repositories.

It’s written in Perl 5 – not my favorite language, but it’s certainly well suited to this sort of configuration file manipulation task, and there was a handy Nagios-Object module courtesy of Duncan Ferguson that saved me writing a Nagios parser.

Let me know if you find it useful! :-)

Day 22 – Release some software under an open source license that you haven’t released before.

This late post is part of my 30 days of geek challenge.

I’ve released a bit of software before under open source licenses – originally mostly scripts and various utilities, before moving on to starting my own open source company (Amberdms Ltd) which resulted in various large applications, such as the Amberdms Billing System and centralised authentication components like LDAPAuthManager.

The other day I released my o4send application, a utility for sending Bluetooth messages to any phones supporting OPP, and today I pushed a new release of LDAPAuthManager (version 1.2.0) out to the project tracker.


I haven’t talked about LDAPAuthManager much before – it’s a useful web-based application that I developed for several customers, which makes LDAP user and group management easy for anyone, without needing to understand the pain that is LDAP.

It’s been extended to provide optional RADIUS attribute support, for setting additional values on a per-user or per-group basis, making LDAPAuthManager part of a wider centralised authentication solution.


For other open source goodness, all my current open source components developed by Amberdms can be found on our Indefero project tracker at www.amberdms.com/projects/.

There’s a lot that I have yet to release – releasing means I need to validate the documentation, package, test and then upload, so that I can be sure everyone gets the desired experience with the source – it can be tricky to find the time sometimes. :-/

Introducing o4send

A while ago, Amberdms was contracted to develop an application for sending messages to Bluetooth-enabled mobile phones for the NZ World Expo.

Essentially the idea was that people would visit the expo and receive a file on their mobiles with some awesome content about New Zealand. The cool thing about this was that you didn’t need to be paired – any phone with Bluetooth active would get the message.

Apparently this worked quite nicely, although I’m not convinced that OPP will be much use in the future, with the two major smartphone platforms (Android and iPhone/iOS) not providing support for it – we found that it worked best with Nokia Symbian phones.

To make this work, I wrote a Perl script coupled with a CSV or MySQL database backend to track the connections and file distributions – I bundled this into a little application called “o4send”, whose source I’ve now released publicly.

You can check out the source and download the application at the Amberdms project tracker at: https://www.amberdms.com/projects/p/oss-o4send/

Take care with this application – it can talk to a lot of mobile phones, and I’m not sure of the legality of sending unsolicited messages to Bluetooth devices – but I figured this source might be useful to somebody one day for a project, or at the very least, a “hey that’s cool” moment.