I’ve installed so many open source software packages in my time that it’s frustrating when I occasionally come across one that isn’t built and running within five minutes. ./configure; make; make install. What’s so hard about that? A monkey could do it (OK, an IT journalist would need some training).
Actually, there should be more to it than that. If you simply accept the defaults and whack this stuff in, very quickly you’re going to end up with “the Linux dweeb’s home PC”. This is a system that contains a thick morass of untraceable, unidentifiable and frequently obsolete files in /usr/local (the default installation directory for all those packages). Don’t do it! It gives Windows and Mac users ammunition to complain about the “anarchy” of Unix. Plus, you’ll never figure out what’s been installed and (potentially) what’s causing conflicts with your new OS upgrade.
Lazy people should opt for their platform’s native packaging system (RPM, pkg, deb, etc.) and find a trusted source of third party pre-built packages if they need something beyond what the vendor supplies. This way, you have some measure of auditing and control over your software set and you don’t have to get your hands (or mouth) dirty compiling from source.
Smart people (like you, right?) create a well-maintained software depot, as described by Limoncelli & Hogan in The Practice of System and Network Administration. In a nutshell:
* Pick some standard, sane compilation options and use them for everything. Unless you habitually debug third party software, you probably want to turn on some optimisation (-O) and perhaps strip the binaries (LDFLAGS=-s).
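For instance, a baseline might look like this (the -O/-s choices are the suggestions above; tune them for your compiler and platform). Exporting the flags means every subsequent ./configure run picks them up from the environment:

```shell
# Illustrative baseline flags, applied to every build for consistency.
export CFLAGS="-O2"     # optimisation; a plain -O also does the job
export LDFLAGS="-s"     # strip symbols from the installed binaries
echo "building with CFLAGS=$CFLAGS LDFLAGS=$LDFLAGS"
```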
* Check the configuration options (./configure --help); don’t just accept the defaults.
* Put each package in a dedicated, self-contained directory, identified by package name and release number (e.g. less-358/, gzip-1.3.3/). That way, you know which files belong to which packages. As a bonus, you can install later versions without overwriting existing ones.
* For ease of use, symlink all those files and subdirectories into a central set of directories that can be added to user PATHs, etc., e.g. /tools/bin/, /tools/lib/, /tools/info/. Use an automated tool for this; I’m a big fan of Graft. (For something fancier and more fascist, try Stow.)
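Done by hand, the symlink farm that Graft or Stow maintains for you looks something like this (a throwaway sketch in a temp directory; the package name and paths are purely illustrative):

```shell
# Hand-rolled version of what Graft automates: link one package's files
# into a shared bin directory. A real depot would cover lib/, info/, etc.
root=$(mktemp -d)
mkdir -p "$root/depot/gzip-1.3.3/bin" "$root/tools/bin"
touch "$root/depot/gzip-1.3.3/bin/gzip"
for f in "$root/depot/gzip-1.3.3/bin/"*; do
  ln -s "$f" "$root/tools/bin/$(basename "$f")"
done
# the shared directory now holds a symlink back into the package directory
readlink "$root/tools/bin/gzip"
```

Removing or upgrading a package is then just a matter of deleting and re-making its links, which is exactly the bookkeeping these tools automate.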
* Watch out! Many developers make rash assumptions about how their software will be installed. You may have to add options to locate configuration files and dynamic state information in host-local directories (e.g. /var). Worst case, you may have to frig the package. (Smart developers implement command line options to select different config files.) DON’T store writeable files in your depot tree. I’m assuming you’re going to replicate or share your depot across multiple hosts for consistency, in which case local file variations will be overwritten. Also, a package upgrade will need the config carried over from the previous version unless it’s held outside the depot.
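For autoconf packages, the relocation usually comes down to a couple of standard options (the package name and paths below are made up for illustration):

```shell
# Typical autoconf knobs for keeping writeable files out of the depot:
# ./configure --prefix=/tools/pkgs/mydaemon-1.0 \
#             --sysconfdir=/etc/mydaemon \
#             --localstatedir=/var/mydaemon
# sysconfdir:     per-host config, safe from depot replication
# localstatedir:  logs, pidfiles and caches, likewise host-local
```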
* Script the command lines used to configure each package and save the scripts somewhere central. You’ll have a record of how each package was built and you’ll be able to build later releases identically. I use this generic script as a template for most standard GNU-autoconf packages, inserting additional options as required.
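The original template isn’t reproduced here, but a minimal stand-in for a standard GNU-autoconf package might look like this, written out to a file so the exact build is on record (the package name and depot path are placeholders):

```shell
# Save a per-package build script somewhere central so the exact
# configure line can be replayed for later releases.
cat > build-less-358.sh <<'EOF'
#!/bin/sh
set -e
PKG=less-358
./configure --prefix=/tools/pkgs/$PKG   # insert extra options as required
make
make install
EOF
chmod +x build-less-358.sh
```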
* Use some automated assistance in distributing the depot across multiple hosts. It could be as simple as rsync’ing the entire directory tree to each host, or you could use NFS & automount where appropriate (note: never NFS alone).
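The rsync variant really can be a one-liner per host, something like this sketch (the hostnames are invented, and the commands are echoed here rather than run; drop the echo to do it for real):

```shell
# Push the master depot to each client, pruning anything deleted upstream.
for host in web1 web2 build1; do
  echo rsync -a --delete /tools/ "$host:/tools/"
done
```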