Big Bubbles (no troubles)

What sucks, who sucks and you suck

Puppet and a Load of Cobblers

Spent the last two weeks looking at Puppet and Cobbler. Still not up to speed on Puppet; the one document I’d find genuinely useful would be a comparative guide translating common Cfengine tasks into their Puppet equivalents. As with Cfengine, the best adoption strategy appears to be automating trivial, harmless but useful actions first (e.g. keeping SSH running) and gradually widening the scope of the rules, particularly for fresh clients.
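A first rule of that harmless sort might look something like this in Puppet’s language (a sketch only; the class name is mine, and the package/service names assume a Red Hat-style system):

```puppet
# Minimal, low-risk rule: make sure the SSH daemon is installed and running.
# Package and service names are for a Red Hat-style distro; adjust to taste.
class ssh {
  package { 'openssh-server':
    ensure => installed,
  }

  service { 'sshd':
    ensure  => running,
    enable  => true,
    require => Package['openssh-server'],
  }
}
```

If a rule like this breaks, the worst case is that sshd gets restarted, which makes it a safe way to prove the client/server plumbing works before trusting Puppet with anything more invasive.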

Cobbler is more straightforward once you wrap your head around it. The web interface is so basic that you might as well learn the commands - and brush up on your Kickstart scripting too. Tying it into an existing mrepo repository mirror was a minor challenge: Cobbler wants to do the mirroring itself, so you have to treat the repos as web resources that shouldn’t be mirrored and supply URLs that the boot clients can reach. My aim is to have Cobbler perform the minimum of build configuration, just enough that Puppet can then be run to customise the system as required (Puppet is more maintainable in the long term than a Kickstart script).
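For the record, the CLI steps come out roughly like this (all names, paths and URLs here are illustrative; the key part for the mrepo arrangement is `--mirror-locally=0`, which tells Cobbler to hand clients the URL rather than re-mirror the repo itself):

```shell
# Register a distro from mounted install media (paths are examples).
cobbler distro add --name=centos5-x86_64 \
  --kernel=/mnt/centos/images/pxeboot/vmlinuz \
  --initrd=/mnt/centos/images/pxeboot/initrd.img

# Point at the existing mrepo mirror as a plain web resource;
# --mirror-locally=0 stops Cobbler mirroring it a second time.
cobbler repo add --name=centos5-updates \
  --mirror=http://mrepo.example.com/centos5-x86_64/updates/ \
  --mirror-locally=0

# Tie distro, kickstart and repo together in a profile.
cobbler profile add --name=base --distro=centos5-x86_64 \
  --kickstart=/var/lib/cobbler/kickstarts/base.ks \
  --repos=centos5-updates

# Regenerate the PXE/tftp configuration.
cobbler sync
```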

This work has made me consider the differences in system lifecycle between my previous employer and my current one. THEN (IT infrastructure owned and managed by business for single purpose):

  • Build and install server to defined role (web, application, etc.), including required software and (generic or existing) config;
  • Tune server, fix issues, patch, support new functionality => configuration gradually evolves according to need, generally changing per application release;
  • Decommission and remove server, possibly replacing with newer system but generally similar servers will remain in operation.

NOW (IT infrastructure managed on behalf of other businesses for many different purposes):

  • Build base OS and complete standard service wrap (simple monitoring, backup cover, etc.) - may include basic LAMP install or whatever has been specified, but anything else on top will be bespoke and probably unique to this system;
  • Customer configures server/applications according to need - this part is generally opaque to us unless our involvement has specifically been requested, and probably won’t match anything else we manage;
  • Leave server in place, untouched for length of contract (patching only for critical issues, by prior agreement with customer or on customer request) - configuration generally doesn’t change much/at all;
  • Decommission and remove server - if contract ends, “forget” the configuration.

An automated config system like Puppet is a much more useful tool in the first scenario, where there is complete end-to-end control of the overall system, a lot of commonality between servers and clearly defined roles or subsets of configuration types. In the second case, the scope of what you can manage automatically is greatly reduced, since so much of it is under the customer’s control. Furthermore, there’s unlikely to be much evolution of the requirements once the server is in place and signed off, because that requires change control and possibly commercial input. So the need to make global, common changes across a large number of servers at once is rare (other than critical patching, and that’s better managed through the vendor-supplied update mechanism anyway).

Nevertheless, even if Puppet is only used in anger at build time, to install the base configuration (including OS hardening) and a small number of generic, defined roles (LAMP stack, database server, etc.), it still has the benefit of providing an easier, more flexible way to maintain and update that configuration.
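In that limited build-time role, the node classification stays trivially small - one hardened base plus at most one agreed role per server. A sketch of what I have in mind (class and node names invented for illustration):

```puppet
# Build-time classification only: every node gets the hardened base,
# plus at most one generic, pre-agreed role. Names are illustrative.
node default {
  include base::hardening
}

node 'web01.example.com' {
  include base::hardening
  include role::lamp
}

node 'db01.example.com' {
  include base::hardening
  include role::database
}
```

Even if the manifests are never run again after handover, they document what was actually built far better than a sprawling Kickstart %post section does.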