With modern Linux systems – hell, even systems from 5+ years ago – there’s usually very little issue handling large files (> 2GB); in fact, files considered large a decade ago are now tiny in comparison.
However, sometimes poor sysadmins like myself have to support much older machines (in my case, a legacy accounting platform tied to the RHEL 2.1 host it was installed on), and you suddenly get to re-discover the headaches that plagued the sysadmins before us.
The backup scripts for this application suddenly stopped working recently with the following error:
cpio: standard input is closed: Value too large for defined data type
Turns out that their data had finally crept over the 2GB limit, which left cpio able to write the backup, but unable to read it for verification or restore purposes.
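For the curious, here’s a quick illustration of what’s going on under the hood (my own sketch, not anything from cpio’s source): on a 32-bit build without large file support, stat() on a file bigger than 2GB fails with EOVERFLOW, and strerror() renders that as the exact “Value too large for defined data type” message above. Compile the same code with -D_FILE_OFFSET_BITS=64 and it happily reports the real size.

/* Sketch only: point it at a file > 2GB on a 32-bit system.
 * Without -D_FILE_OFFSET_BITS=64, stat() fails with EOVERFLOW. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat st;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    if (stat(argv[1], &st) == -1) {
        /* With a 32-bit off_t this is EOVERFLOW for files >= 2GB */
        fprintf(stderr, "stat: %s\n", strerror(errno));
        return 1;
    }

    printf("size: %lld bytes\n", (long long)st.st_size);
    return 0;
}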
Thankfully cpio does support large files, but it’s a case of adding -D_FILE_OFFSET_BITS=64 to the gcc options at build time, so I rebuilt the package with that flag, which fixes the problem (or at least till we hit the 16GB filesystem limits) ;-)
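If you want to convince yourself what the flag actually does before rebuilding anything, a trivial test program (again my own illustration, the filename is just an example) shows off_t growing from 4 to 8 bytes on a 32-bit box when compiled with -D_FILE_OFFSET_BITS=64 – no source changes needed, which is why the cpio rebuild is so painless.

/* offcheck.c (hypothetical name):
 *   gcc offcheck.c -o offcheck && ./offcheck
 *     -> off_t: 4 bytes on a 32-bit system
 *   gcc -D_FILE_OFFSET_BITS=64 offcheck.c -o offcheck && ./offcheck
 *     -> off_t: 8 bytes */
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    printf("off_t: %lu bytes\n", (unsigned long)sizeof(off_t));
    return 0;
}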
The version of cpio on the server is ancient, dating back to 2001 (RHEL 2.1 itself was first released in 2002), so it’s over a decade old now. I found it quite difficult to obtain the source for the specific version installed on the server; Red Hat seemed to be missing the exact release (they have -23 and -28, but not -25), so I pulled the Red Hat 8 source, which comes from around the same time period. One of the advantages of having RHN is being able to quickly pull old packages, both binary and source. :-)
If you have this exact issue with a legacy system using cpio, feel free to grab my binary or source package from my repos and save yourself some build time. :-)