Back in 2016, BB sang the praises of the Fountain Restaurant, an overlooked and increasingly neglected modernist gem in the centre of Chester Zoo. (The design of the Fountain was apparently driven by the availability of the original windows, offered by Pilkington Glass after an order was cancelled.) Forced to close during the pandemic, the zoo spent the time catching up on various planning and administrative tasks, one of the results of which was a planning application in May 2020 to demolish the old Fountain building.
(What’s most amusing from the linked article is that the journalist clearly googled/plagiarised our original blog post for reference, a dangerous method given we didn’t verify any of our original sources - but then, we’re not being paid as a professional to do that. On the other hand, as one of the reactions in the ZooChat forum was “Backwater? Who writes this stuff?”, perhaps we’ll let the Cheshire Live journo carry the can.)
The building is not in good repair and apparently forms an unwanted barrier to movement across the ‘Madagascar Zone’. From the picture above, taken only recently, you can see that it is now obscured by the newly planted play area that has replaced the formal lawns and fountain originally in front. Stripped of this context, the building makes much less sense and in its current sorry state, BB regretfully concludes that demolition might as well go ahead given the changes to its surroundings. There was a brief window of opportunity there to rededicate it as a space for family time out amidst the sensory overload of the exhibits, but the loss of the gardens rather puts paid to that (the prevalence of tall grass planting and themed landscaping in the rest of the zoo suggests flower gardens are no longer a priority at Chester, which presumably lowers the maintenance budget as well). Not to mention, there was almost no likelihood of reopening the balcony and dining function (imagine being able to take tea while surveying the zoo like a true aristocrat though!).
Reaction on ZooChat has been mostly philosophical or accepting of the need for progress, with some commenting that it was a rather “mediocre” example of post-war modernism whose loss is to be little regretted. Apparently, zoos have other, more important functions than architectural preservation, who knew? (To be fair, all exotic bird species look the same to BB.) Zoos have always been unsentimental about removing or updating exhibits whose time has passed due to changing fashions or increased knowledge - even the lions have lately moved from the grassed enclosure they’ve occupied for decades to a fancy new simulated savannah at the other end of the park - so it’s little surprise that the Fountain is the latest victim of evolutionary trends. (After all, Bristol Zoo are quitting their central Clifton location altogether for a satellite site on the outskirts that offers more expansion, leaving their rather fine aquarium building amongst others to the tender mercies of housing developers.) But it is a loss, one more example of the ‘better future’ promised in the 1950s and 60s that somehow failed to come to pass, the remnants of which diminish further in number each year.
On the other hand, if you just want to stay inside and listen to records on the old Dansette this autumn, it may yet be a good year for a trio of remarkable albums released fifty years ago that have still not received due notice outside the circles of collectors.
Next month, Earth Recordings will release 50th anniversary box set editions of the only two albums by the band Trees. And last month, the sole album by Mellow Candle received a rare official release in limited edition white vinyl for Record Store Day.
In 2020 these are not, to be sure, particularly obscure rarities. Both have been easily available on CD for the past decade, the Trees albums being reissued by Sony (following use of a sample by Gnarls Barkley) and Mellow Candle by Esoteric, all in 2008. Among enthusiasts of ‘lost’ treasures of the early ’70s prog era, those bands and albums that sank without trace in their day, they’re now common currency although still highly regarded, regularly appearing near the top of lists of ‘classic’ acid folk records. Outside that tribe though, they have much lower recognition, which is a shame, as to hear them is to cherish an encounter with the heady experimentation of 1970/71, at a time when the cultural adventurism of the late sixties was bearing arguably its richest fruit.
Of the two Trees albums, their 1970 debut “The Garden Of Jane Delawney” is a tentative, uncertain affair, offering extended ensemble jams based around traditional tunes (such as the notable soloing that closes their version of “Lady Margaret”) amidst occasionally slender material of their own composition, the clearest exception being the title track which sounds as timeless as any folk standard. (Not surprisingly, it has been much covered since, including versions by Francoise Hardy and All About Eve.) That said, the mellow album closer “Snail’s Lament” is in my view an overlooked gem, offering a generous sentiment that belies their reputation for dark undertones.
“On The Shore”, recorded later that same year by a markedly more experienced group, is assured by contrast, striking early gold with the oddly compelling tale of madness on the mountain, “Murdoch”, inspired by the surroundings of Cadair Idris. It apparently came to writer Bias Boshell in a dream; it’s hard not to imagine that it was sent by the mountain itself. It contains one of my favourite lyrics:
Brighter still the flower of life may grow away from the sun
Knowing that with darkness a new light has come
Vocalist Celia Humphris was not a natural folk singer by inclination but brought her drama training to bear in playing one, with a lower register that could be by turns delicate, haunting, or portentously imperious on a track like “Fool”. (It did no harm that she also looked the archetypal boho-hippy ingenue of the period; there are no bad pictures of Celia Humphris.)
The record peaks at its midpoint on their definitive, unique rendition of the Cyril Tawney tune “Sally Free And Easy”. Recorded live, with producer Tony Cox filling in on bass while Bias Boshell moved to the piano, it’s an astonishing ten minute diorama of a group of musicians in near-telepathic communion, rightly described by Stewart Lee in his sleevenotes to the 2008 reissue as “one of the greatest recordings by any British band ever”. The lyric is redolent of pressganged sailors from naval frigates dying on distant shores as they mourn faithless lovers, making the track an effective counterpart to the title cut “Polly On The Shore”, although it was originally written from the perspective of a submariner (which Tawney had once been). Tawney himself thought Boshell’s floridly melodramatic piano opening sounded like “a silent movie soundtrack”, but it gives way to a shimmering acoustic thrum that builds with unnerving control through two verses before exploding into wild abandon alongside the double-tracked wailing of Celia Humphris. Heralding this dynamic shift, the delicate underlying arpeggio that has accompanied the music this far gives way to a single note insistently chiming off the beat - the fortuitous result, in fact, of guitarist David Costa’s fingers giving out at this point. The maelstrom that follows is intense and unearthly, the wretched lament of a curtailed mortality meeting acceptance of the imminent embrace of the grave; it pulled me up short as I was playing the album for the first time while slaving over a Christmas dinner in 2010. As it subsides again gradually to a closing verse of final, bitter regret, the sense that you have just witnessed that fleeting moment of clarity between inspiration and enervation, when the Muse alights upon a group of musicians labouring away in the studio late into the night, is inescapable.
It should have been the end of the beginning for Trees, putting them on a firm heading for the rewards due from the successful melding of psychedelic rock and folk music. Instead it marked their high water mark as, starved of attention and funds from their record company, let alone a receptive audience, they quickly grew disillusioned at being unable to capitalise on their discovery. A modified line-up soldiered on for a further couple of years, long enough to leave behind one poorly recorded bootleg of live sessions, but by the end of 1973 it was all over for them.
As Trees were entering their eclipse, Mellow Candle were recording their sole album “Swaddling Songs” in December 1971, for release the following year. Moving to London from Ireland in the wake of an unsuccessful psychedelic pop single and a few false starts during the previous four years, they were contracted for one record to Deram. There is no trace here of the loosely improvised rock jamming that characterised Trees; instead, the arrangements are tightly disciplined with not a single note sounding left to chance. But the music is often exuberant and pell-mell, like a trad folk session in full flight behind the locked door of the local pub after hours. Although it is clearly informed by and belongs to the folk idiom, it is not of the tradition. There are no standard covers here, only a strong set of strikingly original material fronted by the interweaved harmonies of Clodagh Simonds and Alison O’Donnell in an exquisitely simpatico band setting that refuses to indulge in any of the standard rock gambits to serve the music. (They may also be the only band to have lost one bass player because there were “too many drugs taken” and a second because there “weren’t enough drugs taken”. By comparison, Trees, despite their extended free-for-all jams, barely indulged at all.)
Perhaps the only flaw that can be levelled at the opening salvo of “Heaven Heath” and “Sheep Season”, the latter featuring a soaring vocal harmony in the chorus that’s eerily prescient of Abba, is that by putting the two songs with the strongest immediate appeal up front, the rest of the album then demands closer attention from the listener to fully appreciate the craft involved. But the sheer quality of tracks like the spectral “Reverend Sisters” with its restrained piano backing by Simonds, the dramatic “The Poet And The Witch” or the manic “Boulders On My Grave” shines through.
Once again, it cast barely a ripple at the time. Lack of support and promotion did for Mellow Candle exactly as it had done for Trees, and with a more brutal impact on relationships within the band. The partnership of Simonds and O’Donnell was sundered for the first time since high school, and both moved on to find other ways of making a living from music.
Fortunately, all these albums have since been rediscovered, buffed up and more widely distributed so that we can fully, if belatedly, savour their unique, never-to-be-repeated vignettes of an early 70s milieu, when for a brief time every possible future in popular music was up for grabs if you were only receptive to the vibes. True, official releases of any of them on vinyl are often harder to come by and usually only appear in limited editions, but reasonable facsimiles are floating around eBay at affordable prices and besides, there are always the CDs if you want the best sounding versions rather than the period-authentic ones. (A few leftovers of the Mellow Candle RSD LP can be found on eBay at time of writing, along with a more common pressing on the Tapestry label of dubious provenance but tolerable representation. For the Trees albums, get your pre-order in now.) If I can round this recommendation out to a solid trio of acts, you should also pick up the Esoteric reissue of Fuchsia’s self-titled 1971 album - a record occupying a unique niche between psychedelic folk/rock and acoustic pop from a band line-up that featured an all-female string & backing vocals section, it’s a remarkable achievement of writing and arrangement by leader Tony Durant.
I’m not claiming to be an expert on how Secure Boot works (all my knowledge was gained on a JIT basis), but my understanding of it thus far is contained within the following notes.

The Cobbler-generated setup relies on GRUB searching for a configuration file named after the client MAC address, in the form 01-00-50-DE-AD-BE-EF. Unfortunately, it turns out that this behaviour was added to GRUB by Red Hat in a downstream patch and is specific to their version.
Remember I said you have to use the distro-specific GRUB loader signed with the same key as everything else? So yep, this won’t work for non-EL distros. Instead, we’ll need to load a common grub.cfg configuration file that then sources a second configuration named after the client MAC address. (Needless to say, the MAC address format used by Red Hat’s GRUB and thus generated by Cobbler differs from the one returned in the standard GRUB2 client variables. Hack, hack.)
A further wrinkle: where a driver module isn’t signed with the distro’s key, the key used to sign it has to be imported with mokutil. It then prompts you interactively for a password, which must be entered on the next boot to confirm import of the new key into the firmware database so that the driver will be authenticated to load. As far as I can tell, this process cannot be automated, at least via Ansible (doubtless you could build a custom integrated distro instead but…). Despite all our effort to make Secure Boot work, this caught us out in the end and resulted in us disabling it on each client. But up until that point, it worked so I’m providing the recipe here for anyone with more modest requirements.

In our case, PXE client configuration is a little simpler or at least less
of a concern as we use external DHCP servers and configure the client boot
parameters such as the initial filename separately. (If you’re using
Cobbler for DHCP: sorry, you’re on your own but see the notes above.) As
mentioned, our specific use case was to boot Ubuntu Bionic 18.04 LTS on
Dell 5490 laptops from a CentOS 7 host running the cobbler-2.8.4-4 package
from EPEL.
In the end, we used the method suggested by this Russian blog post (Google translation - don’t copy the shell source from this link as it will be corrupted). The post suggests adding a Cobbler post-sync trigger script to create the required GRUB configurations with the correct filenames by copying and renaming the ones generated by Cobbler. However, in our case we also need to convert the GRUB-legacy configurations to GRUB2 syntax and reformat the client MAC addresses to match the format used in the GRUB2 $net_default_mac variable, which is colon- rather than hyphen-separated. The revised configurations are written to a uefi/ subdirectory under the TFTP boot folder, along with a default (initial) config that simply sources the appropriate client MAC-specific file.
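As an illustration, the initial config can be a few lines along these lines (the grub.cfg-<MAC> filename convention is an assumption here - use whatever your trigger script actually generates):

# /var/lib/tftpboot/uefi/grub.cfg - a sketch
if [ -s ${prefix}/grub.cfg-${net_default_mac} ]; then
    source ${prefix}/grub.cfg-${net_default_mac}
fi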
Step-by-step then, here’s what to do:

1. Prepare a preseed file for your distro, starting from sample.seed (and good luck with it because the syntax isn’t well documented).

2. Create the directory /var/lib/tftpboot/uefi/.

3. Obtain the signed Ubuntu boot loaders /usr/lib/shim/shim64.efi and /usr/lib/grub/x86_64-efi-signed/grubnetx64.efi. Copy them to the uefi/ folder above, renaming as shimx64.efi and grubx64.efi respectively.

4. Install the post-sync trigger script as /var/lib/cobbler/triggers/sync/post/uefi.sh (a sketch follows after this list).

5. Point your DHCP clients’ boot filename at uefi/shimx64.efi. As I noted above, if you’re using Cobbler as the DHCP server, this may require a bit more hacking of /etc/cobbler/dhcp.template.

6. Run cobbler sync and check the generated files under /var/lib/tftpboot/uefi/.
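For reference, the skeleton of such a trigger - a rough sketch only, assuming Cobbler drops its per-system netboot configs as 01-<mac> files under /var/lib/tftpboot/grub/ (check your own layout), with the GRUB-legacy to GRUB2 syntax conversion left as an exercise:

#!/bin/sh
# /var/lib/cobbler/triggers/sync/post/uefi.sh - sketch
SRC=/var/lib/tftpboot/grub
DST=/var/lib/tftpboot/uefi

for f in "$SRC"/01-*; do
    [ -e "$f" ] || continue
    # 01-aa-bb-cc-dd-ee-ff -> aa:bb:cc:dd:ee:ff
    mac=$(basename "$f" | sed -e 's/^01-//' -e 's/-/:/g')
    # convert the GRUB-legacy directives to GRUB2 syntax here, then:
    cp "$f" "$DST/grub.cfg-$mac"
done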
(With thanks to all the authors of the various blog, forum and GitHub posts I googled to figure all this out.)
Questionable sartorial choices aside, I quite like it. The song is not especially memorable but the arrangement has interest and their performance is accomplished. At this point, the band were about six months away from their eventual demise. A fourth and final album, Skintight, would emerge later that year, its revamped and distinctly American-influenced sound alienating long term fans and resulting shortly thereafter in the dissolution of the group.
Skin Alley were managed by Clearwater Productions, alongside the great ‘lost’ folk-rock act Trees who themselves were petering out around this time, another victim of fading interest from record labels that had rushed to sign a glut of ‘progressive’ acts in the preceding two years and were now clearing their decks in favour of more commercial propositions such as Glam. They’d had some early success playing free festivals and benefit shows, although I don’t believe this had translated into significant album sales. Like Trees, Skin Alley signed a deal with CBS and produced two albums before being dropped, although they did scrape another deal that spawned a further couple of releases.

Did they know the jig was nearly up at this point? The rest of the Ragnarock line-up that year, solid but hardly stellar with a distinct whiff of last year’s news about it - Mungo Jerry, The Pretty Things, Culpepper’s Orchard - may have offered a few hints. Perhaps they had an inkling. The first glimmers of punk would be on the horizon by the following year, in alleged reaction to the detached excesses of the prog years. The top tier prog acts - Yes, Genesis and co - were steadily iterating the foundations of lasting careers, while the second tier (Gentle Giant, Soft Machine, Caravan, etc.) had enough gas left in the tank to see them into at least the second half of the decade. But what was it like to be on the descending curve of your arc, the milieu that birthed you having had its day and marked for succession by the Next Big Thing? Even if Skin Alley had kept the faith with their fanbase for their fourth album, it seems unlikely the path of history would have been much different for them. In that sense, the Ragnarock footage, for all its qualities, feels elegiac.
To use this, you should be running at least Ansible 2.7.5 as the module
was broken in older 2.7 releases.
I assume you already have the prerequisites, including an account with
administrator privileges on your vCenter, a basic knowledge of how to
create a new VM in vSphere, and so forth. To enable us to create new VMs
for the systems we need to manage with our playbooks, first of
all we wrap the vmware_guest
module in a role. The role uses a
combination of standard or typical default values, global variables for common
settings and per-host variables specific to the VM in question. For our
own purposes, we only need to be concerned with basic Linux VMs of a mostly
similar specification, so we don’t worry about customising the
configuration for different OS platforms.
For example, the role defaults might be:
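# roles/create-vm/defaults/main.yml (sketch - variable names are my own illustration)
vm_scsi_type: paravirtual
vm_firmware: bios
vm_disk_type: thin
vm_hardware_version: 13
vm_network_device: vmxnet3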
This defines our standard VM SCSI controller, firmware, disk provisioning, hardware version and network device (all of these are compatible with CentOS, for example).
The global settings are defined in the group variables for ‘all’ hosts, and specify the local vCenter, site-specific names like the vSphere data centre and overall common settings for the VMware modules:
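# group_vars/all.yml (sketch - hostnames and names are placeholders)
vcenter_hostname: vcenter.example.com
vcenter_datacenter: DC1
vcenter_username: "{{ lookup('env', 'USER') }}"
vcenter_validate_certs: no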
Here we authenticate to the vCenter using our central directory, so we use the logged-in ID of the person running the playbook as the VMware username. Alternatively, you can create a specific account with limited privileges in vSphere for Ansible to use.
Finally, we configure the VM details in the host variables file under
host_vars/
:
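# host_vars/webserver01.yml (sketch - every value is a placeholder)
vmware_vms:
  - name: webserver01
    folder: DC1/vm/Linux
    cluster: Cluster1
    guest_id: centos7_64Guest
    memory_mb: 4096
    num_cpus: 2
    disk_gb: 50
    datastore: datastore1
    network: VM Network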
(VM folder names are prefixed with the data centre name followed by ‘/vm’. Note that in practice with this structure, one can define several VMs together in a list - e.g. within the group variables - but this is not necessary. In most cases, it’s probably cleaner to separate the VM configs by individual host.)
In the top level playbook, we also need to request the password for the vCenter user (or fetch it from a secure vault if using a specific account for Ansible):
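# create-vms.yml (sketch)
- hosts: all
  gather_facts: no
  vars_prompt:
    - name: vcenter_password
      prompt: vCenter password
      private: yes
  roles:
    - create-vm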
We need to disable fact gathering as the hosts we’re creating may not exist yet so Ansible can’t connect to them.
Finally, we pull all these variables into a task defined within the
create-vm
role.
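It might look like this - a sketch, with the key and variable names following the examples above:

# roles/create-vm/tasks/main.yml (sketch)
- name: Create VMs via vCenter
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: "{{ vcenter_validate_certs }}"
    datacenter: "{{ vcenter_datacenter }}"
    cluster: "{{ item.cluster }}"
    folder: "{{ item.folder }}"
    name: "{{ item.name }}"
    state: present
    guest_id: "{{ item.guest_id }}"
    hardware:
      memory_mb: "{{ item.memory_mb }}"
      num_cpus: "{{ item.num_cpus }}"
      scsi: "{{ vm_scsi_type }}"
      version: "{{ vm_hardware_version }}"
      boot_firmware: "{{ vm_firmware }}"
    disk:
      - size_gb: "{{ item.disk_gb }}"
        type: "{{ vm_disk_type }}"
        datastore: "{{ item.datastore }}"
    networks:
      - name: "{{ item.network }}"
        device_type: "{{ vm_network_device }}"
  delegate_to: localhost
  loop: "{{ vmware_vms }}"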
The key thing here is that the task is delegated to ‘localhost’, i.e. the
Ansible control node, and therefore the connection to the vCenter to
create the VM will occur from the host where Ansible is run. (You can use
a different host such as a dedicated vSphere management server but Ansible
must be able to connect to it and it must have the Pyvmomi library
installed.) This task loops through the vmware_vms
list and creates each
VM defined there through the vCenter.
If you change the settings for any VM, Ansible will attempt to modify its configuration in vSphere if possible. For example, you can adjust the allocated memory in a running VM (assuming the guest OS supports it and hot-adding memory is enabled for the VM) but attempting to shrink a virtual disk returns an error.
Currently, due to Ansible bug #34105, vmware_guest
isn’t fully idempotent if you’re using distributed switches in your
vSphere networking configuration; the task will report ‘changed’ every
time it is run and you will see a “Reconfigure Virtual Machine” task
logged in the vCenter, even if no aspect of the VM has been altered.
(There’s a PR for this bug but it doesn’t appear to have been merged yet.)
If this concerns you, you can first run a vmware_guest_find
task to
search for the listed VMs in vCenter, register a variable and use the
result of that to drive the creation of any VMs that return ‘failed’ (see
my previous post on using multiple values in a registered variable).
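A sketch of that approach:

- name: Look for the VMs in vCenter
  vmware_guest_find:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: "{{ vcenter_validate_certs }}"
    name: "{{ item.name }}"
  delegate_to: localhost
  loop: "{{ vmware_vms }}"
  register: vm_find
  ignore_errors: yes

- name: Reduce the list to the VMs that weren't found
  set_fact:
    vmware_vms: "{{ vm_find.results | selectattr('failed') | map(attribute='item') | list }}"

…with the creation task then looping over the reduced list.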
Obviously, at this point you’d still need to power on the new VM and
install an OS on it. In fact, you’d probably instead deploy from a
pre-built template, using the template
and customization
parameters of
vmware_guest to configure it. The vmware_guest_powerstate
module could
then be used to power it up and initialise it, followed by
vmware_guest_tools_wait
to pause until it’s ready.
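A rough sketch of that variant (the template name is a placeholder):

- name: Deploy VMs from a template
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: "{{ vcenter_validate_certs }}"
    datacenter: "{{ vcenter_datacenter }}"
    cluster: "{{ item.cluster }}"
    folder: "{{ item.folder }}"
    name: "{{ item.name }}"
    state: poweredon
    template: centos7-template
    customization:
      hostname: "{{ item.name }}"
  delegate_to: localhost
  loop: "{{ vmware_vms }}"

- name: Wait for VMware Tools to come up in each guest
  vmware_guest_tools_wait:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: "{{ vcenter_validate_certs }}"
    name: "{{ item.name }}"
  delegate_to: localhost
  loop: "{{ vmware_vms }}"

(Using state: poweredon on the clone covers the power-up in one step; vmware_guest_powerstate does the same job as a separate task if you prefer.)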
For reference, I’m using Fedora 28 with the current Darktable 2.6.0 release. I mainly shoot landscapes or architecture, so some of the following may not apply if you’re a portrait photographer.
I was fairly pleased with Darktable for processing my GX80 images - its extraction of detail is particularly good - but found that my results were sometimes a little low in overall contrast and colour images lacked saturation and richness. However, the new filmic module in 2.6.0 appears to mostly resolve these concerns. The other key requirement is ensuring DT has specific camera profile data available for the appropriate aspects of the image pipeline. With these criteria fulfilled, the results from DT can surpass what you can obtain with Snapseed (which in itself is otherwise an excellent app for quickly enhancing in-camera JPEGs).
The additional setup steps, then:

1. Generate ICC input profiles from Adobe’s DCP camera profiles and install them for Darktable, e.g.:

$ dcp2icc 'Panasonic DMC-GX85 Camera L Monochrome.dcp' 5000

then copy the resulting profile to ~/.config/darktable/color/in/.

2. Run lensfun-update-data to download the latest camera and lens correction data files.

3. Copy /usr/share/darktable/noiseprofiles.json to a separate folder in your home directory, edit it and change all occurrences of ‘GX85’ to ‘GX80’. Then modify your Darktable menu entry or startup alias to run darktable --noiseprofiles /path/to/modified/noiseprofiles.json instead.

One nit to watch out for with DT 2.6.0: if you use a non-standard window manager (such as fluxbox), you might encounter bug #12387. Should be fixed in 2.6.1.
I left the car alongside the dual carriageway, parked in a scrubby bay shortly before the roundabout where the A49 broke free from a temporary embrace with its London-bound sibling to hasten south for the heady delights of Shrewsbury and the Marches. Clambering over a rough embankment thrown up to seal the Cherry Tree’s entropy off from the present day, I approached the building warily. Bare vines straggled over the walls and feebly attempted to tear down the former entrance porch. Entering would have been as simple as stepping through the maw of the french window, its vacant panes lying forlornly on the ground in front of it, but I had no intention of proceeding further in that direction without a hard hat and a hefty dose of YOLO. The sign above the front door was now blank wood apart from the lingering cursive of the word “Ales”, while the nameboard outside was now only a vacant frame topped with ornate ironwork and redundant spotlights.
In its heyday, the Cherry Tree had been called the Witch Ball Inn and boasted a swimming pool and dancehall out back, cementing its status as a roadhouse of the 1930s, those roadside hotel/pubs seeking to mark themselves out as worthwhile destinations in their own right. Sited close to the A41/A49 junction at Prees Heath, south of Whitchurch in Shropshire, the Witch Ball later picked up custom from nearby RAF Tilstock, apparently proving popular with visiting American airmen stationed there during the war (although whether the swimming pool was then open under the restrictions of wartime austerity is a matter for speculation). Perhaps to advertise its location to this lofty market, the building still carried the word “HOTEL” painted in large white letters on the roof. It was this more than anything that had first caught my attention while flashing past on the opposite carriageway, an echo of a half-remembered sight from my childhood - hadn’t all such places picked themselves out in this way at one time?
Walking over the mossy tarmac of the former car park, with a pair of spindly, unkempt trees clinging to their small stony island in the middle, no trace of the pool was visible to the rear. A large extension suggested that the Cherry Tree had retained the dancing to the end though; a Friday night spot if you could find a mate willing to drive you all there or chance the breathalyser later.
Next door to the Cherry Tree is a corrugated tin shack garage business, its closed-up showroom housing vintage Morris Minors and an eclectic collection of old toys and antiques. This too is up for grabs, the owner apparently having retired. As I poked around the derelict pub, a car drew up outside the garage and a man briefly hopped out to peer inquisitively into the windows, perhaps weighing up a change of career or possibly just a plot of land that might prove lucrative in future.
Across the dual carriageway lies the Raven Inn, sporting the same mock Brewer’s Tudor style as the Cherry Tree but still open and serving in the dog days of 2018. It’s hard to escape the feeling that the Raven is gazing across into a mirror reflecting a prophecy of its own inevitable doom: architecture as memento mori, Ozymandias in half-timbered, half-intact black and white. Reviews on TripAdvisor are mixed, with the accommodation faring badly (“never in my life have I experienced a shambles like it”), although the restaurant has a few fans prepared to overlook what sounds like a rather forlorn interior (“see past the decor and you’ll have good food and friendly service”). Forty years ago, your father or grandfather would have made it the penultimate stop when returning from North Wales, to round out the day with a slap-up family meal in genteel surroundings enjoying “traditional cooking”. Now its clientele consists of those who unwittingly stumble over it while googling for “hotel near Whitchurch” ahead of a work trip and connoisseurs who know that a shabby establishment often belies a decent breakfast.
The Raven in fact has a much longer lineage than its Johnny-come-lately upstart over the road, being apparent on maps from the 1880s and probably much further back. It seems likely that it would originally have been a coaching inn, as the location must have been a useful staging post on the long ride either from London to Birkenhead or coming up from Herefordshire towards the north. The Witch Ball and its pool, a large oval behind the hotel, are present on the 1950s map and even the 1970s one. On the latter, the Raven is the “Wild Raven Inn” and has by this point assumed its present outline; most probably, it would have been rebuilt in competing style when the Witch Ball arrived on the scene to cash in on Britain’s nascent motoring boom.
What future now for these punctuation marks of 20th Century road travel? The motorways have abstracted all the long distance traffic, so only those who prefer the scenic route or have local deliveries to make are likely to be passing, and even then modern, reliable vehicles and convenient bypasses mean that you won’t require an overnight stop or proper meal to break your journey. It’s four hours from Cardiff to my home town of Warrington along the A49, and while I’m ready for a break after a drive of that length, I don’t need a full hotel dinner midway to keep me going (although I’m now thinking I should have a swift Coke in the Raven one day for posterity’s sake). Today, a location like this would normally merit a Pizza Hut or drive-in McDonald’s to catch the family trade coming back from shopping at weekends, but perhaps that market is already saturated. For the Cherry Tree, the war seems over, the building by all appearances too far gone to justify renovation even if its location could once again support two such hostelries. On the other side, the Raven overlooks a normally busy lorry park and weekend bikers meet, but the truckers and bikers prefer the Midway Truck Stop just before it. The Raven’s own car park at least wasn’t empty when I passed through, and they’re clearly buggering on for now, but absent funds and the impetus to invest in it, its gentle decline will not be reversed any time soon.
“Porthcawl!”
“Spittal!”
“Berwick!”
“Harrogate!”
“Clitheroe!”
…Well actually, BB was going to say Aberystwyth, as for years we vaguely assumed they were unique to the promenade there, or at least to a certain type of mostly unpretentious Welsh resort - the scales and the forked tail putting us in mind of dragons rather than snakes, hence the mental association. (Despite the presence of Porthcawl in the list above, Bridgend Council actually stored one of their original benches during repair work and then somehow contrived to lose it. But then Bridgend Council appears to suffer from a debilitating mental block when it comes to benches.)
The other somewhat vague and indecisive thought was that someday, we might like to own one like it - until finally, in an idle moment’s Googling, we looked it up and discovered that bench ends like this actually originate on the other side of the country.
This is, in fact, one of the standard platform bench types employed by the North Eastern Railway, later subsumed into the LNER and subsequently the East Coast Main Line of BR. The LNER was never financially the strongest concern of the Big Four companies, particularly after the recession of the 20s and 30s hit the North Eastern coal traffic, and is widely recorded to have been on the verge of bankruptcy when the railways were taken into government control on the outbreak of WW2. During the fifties, rationalisation saw a lot of the intermediate stations and branches of the ECML closed, smaller places such as Alnwick and Seahouses losing their stations. At this time, local councils were able to acquire job lots of redundant former NER platform benches, hence their preponderance in Northumbrian towns such as Berwick (where they have since been maintained and renewed on a like-for-like basis). However, others remember them in situ before this period, so it seems likely that councils were already installing benches of this form anyway, and have continued to do so.
In fact, the pattern continues to be available in the current catalogue of the original foundry, the Ballantines Bo’ness Iron Works, and in those of their competitors. Ballantines went bust a few years ago but have since been bought out by a scion of the original family, so the manufacture of the serpent bench endures. Indeed, the pattern appears to have been an off-the-shelf option since Victorian times (note that the example above also retains the snake’s tongue, which is often reportedly lost as it is the most fragile part of the design). Today, new benches in this style are procured by councils from Logic. However, if you want one for your own garden, or at least a reproduction in non-corroding polyurethane, you can obtain one from Broxap. Warning: you might want a stiff drink to hand before following that link, it’s almost twice as much as BB has ever paid for a bench. Needless to say, an authentic, refurbished original could set you back at least twice that, although you can find the ends alone without slats for considerably less.
Credit to the contributors in the links above for their findings on this subject, particularly the posters in the LNER forum; BB’s own research effort amounts only to persistent Googling.
The sets of users and keys are not identical; there may be more users without keys (a common situation, alas), and we may have many other keys belonging to users not in this list.
(You might instead store your users’ keys directly in a list or dictionary, which would obviate the need for the code below, but you’d forever be copy-and-pasting keys into a YAML data structure unless you have some automated way to keep it up to date. Come to that, I hear those crazy kids even store public keys in LDAP directories these days.)
First issue: a file lookup throws an error if the file doesn’t exist. We
could simply iterate over the list of users and set ignore_errors
so
that we pass on the ones that don’t have keys, then supply a null default
value instead:
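# sketch - assuming the key files live in a keys/ directory, named after each user
# (an assumption carried through the rest of these examples)
- name: Deploy user keys, skipping users without one
  authorized_key:
    user: "{{ item }}"
    key: "{{ lookup('file', 'keys/' + item) | default('') }}"
  loop: "{{ users }}"
  ignore_errors: yes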
But this is messy and longwinded, as we’re incurring a file lookup, even if it fails, and a remote call for every user. (Does default even work with lookup? I have a feeling it may not…)
A naïve first pass at solving this might be to go through the list of users, call the stat module to see if a local key file exists for each one and save the results in a list, and then use a conditional test before trying the lookup for the key:
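# sketch
- name: Check for local key files
  local_action:
    module: stat
    path: "keys/{{ item }}"
    get_checksum: no
    get_attributes: no
    get_mime: no
  loop: "{{ users }}"
  register: key_stats

- name: Deploy the keys that exist
  authorized_key:
    user: "{{ item.item }}"
    key: "{{ lookup('file', 'keys/' + item.item) }}"
  loop: "{{ key_stats.results }}"
  when: item.stat.exists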
(Remember that a registered variable for a looped task contains a list of
hashes for each item in the loop, comprising the original object named
item
and the task result for that object named after the module, in this
case stat
.)
We use local_action because we’re looking for the key files on the Ansible controller node rather than the client. This at least saves us some remote calls, as we only deploy actual keys that we find, rather than trying to do so for all the users. Here, we reduce the overhead of the stat module slightly by disabling the retrieval of file attributes that we don’t need, such as checksums. But we’re effectively iterating over the entire list of users twice, once for the users and again for their potential keys, many of which may not exist, which is slow and mostly wasted effort.
A better approach would be to iterate over only a list of keys that we know exist:
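# sketch
- name: Deploy only the keys we know exist
  authorized_key:
    user: "{{ item.item }}"
    key: "{{ lookup('file', 'keys/' + item.item) }}"
  loop: "{{ key_stats.results | selectattr('stat.exists') | list }}"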
Here we pull out all the stat elements from the results list and then
select only the ones that have an exists
attribute which is true. This
may reduce the number of iterations considerably but it is still a loop
stepping through one item at a time, and we haven’t avoided doing all that
file I/O for the interminable stat lookups.
Ideally, we’d instead generate a listing of all the key files in a single pass (like an ‘ls’ of the directory), then take the intersection of that list with our list of users - i.e. to obtain the list of users for whom we have keys.
My first thought was to use the fileglob lookup to create a list of all the key files. However, fileglob returns the full paths for all the objects it finds. You might think it would be possible to use the Jinja2 map function to apply the basename filter to every element of the list, thus stripping the paths and leaving only the filenames:
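{{ lookup('fileglob', 'keys/*') | map('basename') | list }}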
But this doesn’t work for some reason that isn’t obvious to me; instead it breaks the filenames up into a list of single character strings like this:
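['k', 'e', 'y', 's', '', 'a', 'l', 'i', 'c', 'e']

(Illustrative, for a single key file keys/alice - the odd empty string is the basename of the path separator.)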
However, the find module does give us a list we can filter in that way:
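# sketch
- name: Find all the key files
  find:
    paths: keys/
    excludes: '*~'
  delegate_to: localhost
  register: key_files

- name: Reduce to a list of bare filenames
  set_fact:
    key_names: "{{ key_files.files | map(attribute='path') | map('basename') | list }}"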
(Note that the find excludes
parameter, used here to remove editor backup
files from the results, is only available from Ansible 2.5. It
isn’t strictly necessary, as only the files that match actual usernames
will be used anyway. Alternatively, you could ensure that all your key
files are named username.pubkey instead, which is perhaps a bit more
intuitive, and then use patterns: '*.pubkey'
with find. But you’d have
to strip the extensions as well in the next step.)
(Instead of find, we could just run an ls command and process the
standard output with split
to make a list, or even shell out and call
echo dir/*
. But that’s spawning another process, which is cheating.)
Again, the find module is run locally as the key files are stored on our
Ansible controller node. We extract the path
attributes from the files
key in the return value of the find module, as those contain the path for
each file found, and then run basename over them to strip the directory
path.
Now we can obtain the intersection of our sets of users and key files with one simple Jinja2 filter, and deploy the precise set of keys that we need:
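# sketch
- name: Deploy exactly the keys that match our users
  authorized_key:
    user: "{{ item }}"
    key: "{{ lookup('file', 'keys/' + item) }}"
  loop: "{{ users | intersect(key_names) }}"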
This seems to me quite a neat pattern if you ever need to process a set of files according to some selective criteria or by cross-referencing against a second list. And it’s a lot quicker than watching a list of values scroll steadily up the screen.
My preference for specifying these parameters is to gather them under a single dictionary. This seems like a logical organisation and avoids polluting the Ansible variable namespace with a large number of verbose names. For example, for PHP we might have the following in the role defaults:
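# roles/php/defaults/main.yml (sketch - settings and values are illustrative)
php_param:
  expose_php: 'Off'
  max_execution_time: 30
  memory_limit: 128M
  post_max_size: 8M
  upload_max_filesize: 2M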
Each of these settings relates to a parameter in php.ini, which is configured in the template php.ini.j2
, e.g.:
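; php.ini.j2 (sketch)
expose_php = {{ php_param.expose_php }}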
The problem with this method comes when the user wants to change one of those settings. Doing this:
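# e.g. in your play or group_vars (sketch)
php_param:
  expose_php: 'On'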
…will change the setting for expose_php, yes. But it will also remove all
the other defaults, as we’ve now redefined the entire php_param
dictionary.
(Ansible doesn’t offer a way for variables to inherit or collate values from
multiple definitions at different levels transparently, so you can’t append or
override the values of individual keys in a dictionary simply by defining them
higher up.)
The solution is to allow the role user to specify their own settings in a separate dictionary and then use the combine filter to merge this with the defaults:
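# roles/php/defaults/main.yml (sketch - php_custom_param is my name for the user-facing dictionary)
php_custom_param: {}

# roles/php/tasks/main.yml
- name: Merge custom PHP settings over the role defaults
  set_fact:
    php_param: "{{ php_param | combine(php_custom_param) }}"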
Here, values for matching keys in php_custom_param
will overwrite those
in php_param
and any additional keys will be appended to the original
dictionary. Just remember to keep referring to php_param
in the template
and not the custom parameters. (You might want to define php_param
in
the role vars directory instead, so the user can’t redefine it in a
possibly incompatible way.)
This is a useful pattern but in the case of PHP, we could extend it
further in the template. In its current form, adding support for changing
other PHP parameters or adding new ones would involve adding a new default
value to php_param
and editing the template to reference that value for
the appropriate setting. But if the keys in php_param
are assumed to
always be valid PHP settings (and you’re happy to let users alter
arbitrary settings), we might as well use them as-is for the parameter
name too (and loop through all of them in one pass):
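{% for key, value in php_param.items() %}
{{ key }} = {{ value }}
{% endfor %}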
Typically, PHP reads its configuration settings first from /etc/php.ini
and then from all the files in a defined configuration directory such as
/etc/php.d/
(this is how the standard PHP package in Red Hat/CentOS
behaves). Settings can be duplicated within all these files; the last
matching value is used, so settings in the php.d/
files can override
those in the main configuration. This means we can write our settings out
to an override file and leave the installed configuration file untouched:
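# sketch - the destination filename is a suggestion
- name: Write PHP setting overrides
  template:
    src: php-overrides.ini.j2
    dest: /etc/php.d/zz-ansible-overrides.ini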
(The example here gives the destination configuration file a name that should ensure it is parsed last by PHP so that none of its settings are overridden. You might want to make this a configurable variable, so that users can change its precedence in the parsing order.)
Note that this method works for Red Hat/CentOS platforms but will need some adaptation for Debian-based distributions as those use different configuration directory paths depending on the context (SAPI) in which PHP is called.
Alas not, because technology waits for no man. When I built the file server originally, SATA hard drives (which had only just replaced ye olde IDE) had 512 byte blocks. The ones you buy today have a 4KB block size. And, inevitably, you can’t mix 512 and 4K drives in the same ZFS virtual device (vdev). So whereas previously, expanding the mirror would have been a relatively simple matter of detaching one drive at a time, swapping a larger one in and then resyncing the data, attempting to attach a 4K drive to a vdev originally built on 512 byte drives will return an error. (How this process should work incidentally, if a neophyte user were ever to be capable of doing it, would be that one simply pulls a drive and swaps it, the server immediately starts rebuilding the mirror and sends a notification email/tweet/text when done, and then the user swaps the other drive. But then humanity wouldn’t need sysadmins or IT-literate relatives.)
So the strategy now would be to create an entirely new pool in ZFS with the 4K drives and copy the data over manually from the existing pool. As my NAS doesn’t have any free slots, I had to do this one drive at a time. As a further complication, my mirrored pool also has a separate intent log device (SLOG), which is also mirrored for resilience. Here’s the layout:
NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c2t1d0 ONLINE 0 0 0
c3t1d0 ONLINE 0 0 0
logs
mirror-1 ONLINE 0 0 0
c2t0d0s3 ONLINE 0 0 0
c3t0d0s3 ONLINE 0 0 0
And here’s the play (you can tell I rewatched The Avengers recently, right?):
Save a record of the pool layout and all filesystem properties (zfs get all ...) for each filesystem in the pool - and don’t back this up to a folder in the pool. Make sure you can identify each physical drive & slot with its associated device name.

To be sure, temporarily disable any services that use storage on the pool. E.g.:
svcadm disable -t nfs/server
In my case, this meant nfs/server, smb/server, a Squid instance and a Serviio UPnP service. (If you have other pools that are also remotely shared and in use, you’ll need to unshare the affected filesystems instead.)
Take a recursive snapshot of the pool:
zfs snapshot -r datapool@snap1
(I actually used the datestamp rather than ‘snap1’ for the snapshot name.) Any data on the pool now changed after this snapshot will not be migrated, unless you take further incremental snapshots.
Detach one drive from the pool:
zpool detach datapool c3t1d0
Create a new ZFS pool from the new drive:
zpool create datapool2 c3t1d0
You may want to confirm the block size chosen for the new pool, although for most recent drives and Illumos releases, it should be correct. The command is:
zdb | egrep 'name|ashift'
For 4K blocks, look for an ashift value of 12 below the pool (9 for 512 byte blocks).
Send the snapshot of the existing pool to the new pool:
zfs send -RLce datapool@snap1 | zfs recv -Fd datapool2
(You need the -R; -Lce are optional parameters for Illumos/OpenIndiana that will improve space efficiency. On the receiving side, -d is necessary to strip the existing pool name as a prefix from the filesystems.) If you have it installed or can compile it, mbuffer is useful in the middle of this pipeline to speed things up by buffering the I/O.
Best to run this within a screen session or similar, as it could take several hours (over eight in my case, without mbuffer).
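If data has changed on the original pool since the snapshot, you can top up the copy with further incremental snapshots before the final cutover, along these lines:

zfs snapshot -r datapool@snap2
zfs send -RI datapool@snap1 datapool@snap2 | zfs recv -Fd datapool2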
At this point, I removed the log device from the original pool:
zpool remove datapool mirror-1
Assuming you have successfully stopped all activity on the original pool, you should be able to export both it and the new pool:
zpool export datapool
zpool export datapool2
Re-import the new pool with the correct (original) pool name:
zpool import datapool2 datapool
Check that your filesystems are mounted correctly again:
zfs list
Shut down, swap the remaining old drive for its replacement, then boot up and attach the new drive to the pool:
zpool attach datapool c3t1d0 c2t1d0
At this point, I readded the mirrored log device:
zpool add -f datapool log mirror c2t0d0s3 c3t0d0s3
Do not, and I mean DO NOT, omit the keyword “log” from this command, as I did on my first attempt. It won’t do what you mean; instead it will concatenate the extra devices to the existing mirror and you will be unable to undo this mistake as you can’t remove non-redundant devices from a vdev. The only solution will be to break the mirror again and repeat the entire process with the released drive forming another new pool. Yes, that’s what happened here and there was much wailing and gnashing of teeth to accompany it.
Although they’re not making a huge fuss of it compared to the upcoming release of Avengers: Infinity Whoar, this year marks a decade of the Marvel Cinematic Universe, from its initial conception as a gleam in Nick Fury’s one good eye, in a post-credits sting tacked on to 2008’s Iron Man. So just like Marvel and, seemingly, everyone else, I thought I’d be totally original and do a list of all the films rated definitively and non-derivatively according to my own very personal criteria, which will almost certainly be completely unlike all the other lists because there’s a lot of scope for major disagreement when sizing up the relative merits of, say, The Avengers and The Incredible Hulk. (That last sentence, incidentally, was originally supposed to be a subplot in Age Of Ultron but got cut for being too wordy and syllable-ly at the studio’s insistence, like virtually everything else that made sense in the script.)
“These are the Cubans, baby. This is the Cohibas, the Montecristos.”
Basically, if you stuck to just these films, you’d be less inclined to give credence to the most common criticisms levelled at the MCU - that the films are just a CGI smashfest, they’re formulaic, the stakes are always too abstract to seem important and that they’re “unrealistic” (what the hell? yeah, so’s your pose).
Iron Man: You tend to forget, until rewatching, how much more visceral and grounded the first film in the series was compared to everything that came afterwards. (Only The Winter Soldier would qualify as similar, if they’d had the stones to properly kill off Fury and a few more people in the first reel.) Within the opening five minutes, all but one of the characters we meet are dead and Tony Stark is lying on the ground bleeding out. The context to this is then shown in flashback, interleaved with his nightmarish experience in the cave alongside Yinsen - who, after inspiring Stark to turn things around, subsequently bites the dust after revealing that the family he always planned to “see again” were already dead. The first half ruminates on terrorism and the arms trade - Iron Man’s first mission proper is purely about wrathful vengeance against a guerilla group about to execute a bunch of innocent villagers, and is basically wish fulfilment for a nation that had already made a spectacular mess of similar situations in the real world. From this point on, almost no human bad guys would die as graphically or definitely as this, which still makes IM feel refreshingly different to its brethren.
Plus you get Stark, Pepper Potts (Gwyneth Paltrow much smarter and more appealing here than as a ghastly snake-oil saleswoman), Phil Coulson, JARVIS, the initial suit tests and the beginnings of the MCU’s obsession with smashing up cars as shorthand for indiscriminate destruction (yeah, I geddit - there are plenty of disposable vehicles going spare for FX shots). What’s not to love?
Captain America: The Winter Soldier: It’s still this or the one below that ranks as my all-time favourite MCU movie, depending on mood. By keeping the focus tight, the theme clear and throwing in some of the most tense action sequences ever made (the ambush on Nick Fury literally had me chewing my knuckles throughout), the Russo brothers demonstrated there was no genre that could not be successfully co-opted to bring something new to the table. And Steve Rogers is still my favourite Avenger, which is going to make the upcoming expiry of Chris Evans’s contract even harder to come to terms with. Always the least morally compromised of the team, here he takes a clear line right the way through the movie, displaying qualms about ‘Project Insight’ from the get-go that even have Fury pausing to reconsider (too late but…), and ultimately insisting on bringing down the organisation where he started out to end the rot. Putting him together with Black Widow, his polar opposite, and then having them find common ground was a stroke of genius. After that, Anthony Mackie’s effortlessly engaging Sam Wilson is just gravy. (There’s also Sharon Carter, who could at least have been given more Hydra guys to take down; if we can’t have any more Peggy, let us keep her niece.) Plus, you know the bad guy is truly evil when he’ll even shoot his cleaning lady in cold blood.
The fact that TWS stands out so clearly from the rest of the pack for almost all MCU fans through its unique setting and plot should really give Marvel pause before they fire another director over differences of vision.
The Avengers: Joss Whedon’s initial team-up movie to close Phase 1 gets so much right - the pacing, the dialogue, the final third - that you wonder why people keep trying (and manifestly failing) to invent a better wheel on further team outings, including Whedon’s own bosses on his subsequent effort. Sure, it’s all about the big coming together in the closing battle, a rare example of a hotly-anticipated climax that doesn’t disappoint in any way, but the film has a lot of fun getting there too. The right characters meet and interact in exactly the ways you want them to and even before Stark has his moment of self-revelation to cue up the final showdown with Loki (“He wants his name in lights…son of a bitch!”), the audience feels satisfied with how things have turned out.
What’s more, this may be the only MCU film that has exactly the right running length. (Incidentally, has anyone yet figured out what they’re ‘Avenging’? Notwithstanding the clunky Stark line shoehorned in to justify the team name, it sounds like they’ll only pop up after everything is laid waste to cry, “OK you bastard, now you’re for it!”)
Iron Man 3: Having got the whole thing moving, it’s fitting that IM3 also sets the template for the solo movie series: a trilogy whose final part caps the arc for the character in question without closing off further appearances. It’s a fair assumption that no matter how long Downey Jr’s contract runs, this will be the last solo Iron Man film unless they reboot the lead Avenger with a new character inside the suit. Thank goodness then that it’s a hoot from start to finish, director Shane Black rigorously upending all the conventions and continually subverting expectations, aided by a joyful Ben Kingsley cameo-within-a-cameo. Avenging was never such fun again…
Thor: Ragnarok: …Until this latest and, in line with normal practice up to now, presumably closing chapter in Thor’s saga. By tossing in the Hulk, a character it’s otherwise hard to imagine working in a solo film, keeping things light (and fleet of foot) and being prepared to kill off most of the Asgardian canon, Waititi shows that a first time director can still stand out on the Marvel conveyor belt. On the downside, it ditches Jane Foster with a mere line of dialogue and most of the compensatory gains (Valkyrie, Korg, Thor’s temporarily patched relationship with Loki) will, I fear, be sidelined if not wastefully discarded in the first thirty minutes of Infinity War.
Regardless, the scene where the God of Thunder leaps down to fight the undead hordes on the bridge while Led Zep pounds out on the soundtrack is waaaayyy cool. (But does anyone understand why Thor’s plan to prevent Ragnarok involves putting Surtur’s helmet in the same room as the Eternal Flame - it’s not even chained down - other than “because plot”?)
The mid-tier MCU films are all chock-full of great moments and some amazing film-making but, like any proper superhero, each one is compromised just sufficiently to hold it back from being one of the best, usually by the lack of a decent story to tie all these bits together. All you can ask is that they fail in interesting ways. I like all these films; I just don’t rate them as highly against the top line.
Black Panther: If it were down to just the visual design and Marvel’s unerring gift for casting likeable actors in the main roles, BP would be straight into the first tier. Only the film’s by now identikit plot outline and story beats dings it. (If it’s in your top tier for representation alone though, I’d say that’s a fine and admirable choice.)
The Avengers: Age Of Ultron: After the inarguable success of the first Avengers outing, where did it all go wrong for Whedon and Marvel? Even the opening fight sequence, as good as the individual air-punch moments are, feels perfunctory, like goodwill still to be earned is already being taken for granted. Perhaps that’s just the hazard with a film that was always intended to be the comedown, sowing the seeds for the eventual dissolution of the central team - but it doesn’t help when you also lumber it with introducing four new characters (if you include Ulysses Klaue), only two of whom ultimately matter, and a determination to make an already big event movie even bigger at the expense of story. To his credit, Whedon tries hard not to repeat himself - witness his revised take on the classic 360 degree fight scene towards the climax. But for all the great Moments(tm) here, an embarrassment of riches in almost non-stop procession, Ultron just feels messy.
Captain America: Civil War: Whereas Ultron tries to cram too much story and too many characters into too short a running time, CW pares things back and has the appropriate duration to accommodate what’s left - and yet feels too long. Having the big airport face-off two thirds of the way in is a brave way to shake up the usual routine, but it makes the remaining conflict between Stark and Rogers feel inconsequential. (I don’t know what the answer is to this conundrum, other than simply taking out more stuff - but let’s face it, we’re all waiting for Infinity War to do the opposite. Incidentally, I’m betting that for all the hype and anticipation, the latter lands solidly in the mid-tier, a case of being careful what you wish for, even though it’s been pretty much ordained since the first Guardians flick. Perhaps Avengers 4 will correct this by focusing on a reduced line-up again and showing the beginnings of a revised frontline team.)
CW isn’t really a Captain America solo film, so in that sense it doesn’t function as the end of a trilogy like Thor or IM 3. It’s arguably more of a Bucky film than The Winter Soldier was, except that Bucky remains the least charismatic, or anyway least knowable, superhero. Conceptually, it exists more to rip apart the cracks exposed in Ultron, which is perhaps why it’s less satisfying than the Russos’ previous film. The storyline comes over more like an internal Avengers HR issue than a credible threat and ironically, if anyone here represents the everyman who doesn’t want a building dropped on their head, it’s Zemo. (And since you asked, Cap’s clearly in the wrong, albeit for understandable reasons, and only having Stark be massively obnoxious all the way through can disguise this.)
On the plus side: Black Panther, the only character to reach some level of mature reflection over the course of the film and thus emerge with dignity (if anything, Cap goes the other way); and more Sharon Carter, sadly still functioning as a plot MacGuffin instead of a fully-fledged partner, under the “No sex please, we’re Marvel” policy. But then, who could ever replace Sam?
It has Spider-Man too, and there’s nothing wrong with that, but he kinda feels superfluous given all the other MCU riches by this point.
Captain America: The First Avenger: As a period pastiche, this is great. Top cast as ever, including the incomparable Hayley Atwell, and some Young Howard Stark to be thankful for. No wonder they made a TV spin-off. Steve Rogers is so gosh-darn wholesome that he can seem a bit vanilla, so Marvel always wisely surround him with charismatic figures played by top notch people like Tommy Lee Jones and Stanley Tucci. It never quite feels like we’re playing for real stakes (we know who lost the war, although at least the followup made us wonder for a time if it wasn’t Hydra), and it moves too quickly from Cap’s first triumph to the final showdown with the (underused) Red Skull. But as an origin story, this is a strong second to Iron Man.
Thor: It’s fun, it’s different again in feel, it has Hiddleston and Hopkins. I’ve read opinions that the Asgard scenes don’t work but the Earth ones do, and opinions that say the precise opposite, so possibly the truth is that they make an effective contrast. It’ll never be anyone’s favourite film, apart from Thor/Loki/Jane/Darcy diehards, but that’s OK too.
Thor: The Dark World: Thor 2 is also a lotta fun, probably more than you remember unless you’ve watched it again recently. There’s a gaping hole in the middle to match the darkness Malekith wants to restore, but there are enough Moments here, especially in the Thor/Loki bickering, to make it worth your while. (And don’t forget, Loki actually saves Jane - apparently out of instinct rather than intent - at one point.)
Guardians Of The Galaxy: GotG was rightly lauded on release for successfully selling a mainstream audience on the MCU’s most far-fetched line-up to date (what, even more so than an all-black lead cast?). It has tremendous panache, of course - enough to make you overlook how underserved the female characters are (again). Tonally, it was so detached from the rest of the MCU at this point (until Thor 3 came along) that it’s hard to think of it as belonging to the same series, and for that reason, despite there being nothing I dislike about it per se, it doesn’t make my top tier.
Guardians Of The Galaxy Vol. 2: The second Guardians film, like Ultron, tries hard to bottle the same magic as the first while shaking the characters up a bit, and once again fails to completely satisfy with either aim. I think we can partly pin this one on Marvel’s Villain Problem.
Ant-Man: Another fun solo outing that doesn’t hold any significance in the larger universe (and why should it?). The low-key appearance of Hope Van Dyne is frustrating, but we know from the post-credits scene that this will be rectified in the next solo movie, so I can forgive stringing it out. In Hank Pym, we have an effective mirror and replacement for Howard Stark. And the family aspect keeps it more grounded than the fantastical entries around it. What’s slightly concerning is that the most novel parts - Luis’s enthusiastic montage expositions - were probably left over from Edgar Wright’s abandoned initial script and thus won’t be an ongoing highlight.
Spider-Man: Homecoming: Honestly, it’s a good movie. I’m just not that into you, Peter.
Doctor Strange: I haven’t seen DS since its cinema release and I should rectify that. The problem is, I have no urge to. As a character, Strange is hard to like and, for me at least, putting Sherlock wrongmo Cumberbatch in the title role doesn’t make that any easier. Storywise it’s a bog-standard origin tale of the kind we thought had been left behind - perhaps Marvel were so concerned that the audience might have difficulty swallowing the whole concept of magic, they deliberately kept every other aspect of the film as unsurprising as possible. So: standard hero journey; standard, non-significant SO; standard villain and mentor, standard resolution. The effects are trippy and it has early Floyd on the soundtrack so I should love it. But I don’t.
Iron Man 2: Watch it for the introduction to Black Widow in action. Of the rest, the action scenes are enjoyable enough but Tony Stark the self-pitying mess gets old fast, and the movie is stuck in that place for too much of the running time.
The Incredible Is This Even An MCU Film At All: OK, Ed Norton has a really interesting, tortured take on Banner compared to Ruffalo’s later portrayal, but that’s it. Skip this unless it’s late, there’s nothing else on, and you don’t have any plans for the morning.
Quite a few people are on an #MCUrewatch, #RoadToInfinityWar jag at present, and it’s interesting to compare notes. Most people seem to put Winter Soldier at the top (and almost nobody has time for The Incredible Hulk), but some people rate Civil War very highly, think Thor 2 is better than Ultron or even (mad buggers) put IM2 above IM3. Also, some folk have a much sharper take on the problematic parts of the MCU than us average white guys, and those voices deserve a hearing.
]]>There was an impressive turnout for a midday event, and the auditorium was almost full. David Hurn in person was enthusiastic, amusing, enlightening and exactly light enough for a lunchtime interlude without patronising his audience or neglecting to inform. His talk was entitled “Collecting Bus Tickets”, in a conscious effort to demystify and play down his achievement. The accompanying slideshow picked out highlights from his collection of work by other famous photographers gained by his habit of exchanging prints, or else featured key images from his lifetime - including the endoscopy of his bowel, shown to demonstrate that we can’t be too precious about photography and because it had saved his life by pinpointing the exact location of a cancerous tumour that was successfully removed by surgeons a few years ago. Another memorable highlight was the image of the Soviet soldier buying his wife a hat in Picture Post, which changed David’s life and set him on his eventual path because of the moving parallel he drew with a moment from his own parents’ lives.
However, I did feel that a couple of his observations were off base. On a minor note, his opening remark that “the family album” was the most important possession for most people probably betrays a traditional understanding of domestic photography that no longer holds true and arguably hasn’t for some time. I can take his point that family photos may hold enormous sentimental value, but the notion of them still gathered physically together into a canonical book seems a little quaint today. I have albums of pictures from one or two notable holidays and a large one holding photos of the first year or so of my eldest daughter’s life - plus several hundred more of both my children still loose in the sleeves from the lab, which will most likely now never make it into an album unless I get bored in my retirement. In the event of a fire, I’d struggle to put my hands on any of them and it probably wouldn’t occur to me to try - nor would I think of the folders of negatives and slides, the backup disks or, most critically, the fileserver hosting all the raw image files of the last fifteen years. I’d be saddened by the loss of this archive of my life, but at least I’d know the important images themselves were still available to view via an Internet connection - held by Google and Flickr (probably Facebook for most other people). The family album, in its traditional sense - has that truly existed for more than a minority of families after about 1990, let alone in the age of smartphones and cloud storage?
The other remarks that similarly failed to recognise how the world has moved on were his slightly frustrated complaints that “photography students don’t swap prints”, as he has always done, aimed mainly at the many students from the local arts courses in the audience that day. “Are you really not interested in the work of your fellow students?” he asked plaintively. I think this viewpoint privileges two possibly outdated concepts: the primacy of the print; and possession of the physical representation of a piece of visual art. I’ll caveat here that I don’t know how photography students work today and whether they print a lot. But the notion that they have no interest in or sight of the work of their contemporaries must be preposterous; I’d challenge you to find any generation more aware of the photographic exploits of their peers. They may not be swapping actual prints, but they are certainly liking each other’s images on Instagram, pinning them on Pinterest, curating them on Tumblr and building up an entire linked ecosystem of the Commons based on their own images and those of their friends. How much photography students do this with each other’s work specifically, I’m not sure, but I know for a fact that USW students at least are following and retweeting their mutual accomplishments on Twitter. It may not ever result in another tangible personal collection as valuable and important as Hurn’s own “Swaps”, but it is nearly all publicly and globally accessible for those looking.
Incidentally, do you find yourself, like me, pinning endless wonderful images on Pinterest but never actually going back to review and enjoy your boards? You’re not alone - David confessed that most of the prints he received, other than the ones he elected to hang on his walls, went into sleeves and archival boxes, and were never viewed again until now. If nothing else, one enormous benefit of David’s generous bequest to the museum will be the opportunity for many more people to assess these images as a group and draw new connections between them and the development of photography as a whole in future. The first fruits of this process will be the museum’s next exhibition once Swaps is over next Spring, highlighting the work of female photographers. David himself will be in conversation with Martin Parr (who has just opened his own foundation in Bristol to further the artform) at the museum in February, an event I unfortunately won’t be able to make but which will be a great evening for those who can.
]]>If you recall my previous post on registering multiple results in a variable, you’ll remember I had a YAML data structure that described the set of database instances on a host:
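Sketched here with representative names and values (only the uid field matters for what follows; max_shm_memory stands in for the various per-instance resources):

db_instances:
  PRODDB1:
    uid: 50001               # hardcoded, cluster-unique UID for the instance account
    max_shm_memory: 16GB     # illustrative per-instance resource value
  PRODDB2:
    uid: 50002
    max_shm_memory: 8GB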
This is the same data structure, but with an extra field shown: the ‘uid’ value is the hardcoded user ID of the instance user account (which is named after the instance). Each database instance runs under a dedicated account (to which we apply the project resource limits previously covered). So on the host, we create user accounts based on the information for each instance:
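In outline, something like this (a minimal sketch using the stock user module, trimmed to the essentials):

- name: Create a dedicated account for each database instance
  user:
    name: "{{ item.key }}"
    uid: "{{ item.value.uid }}"    # the hardcoded UID from the instance data
    state: present
  with_dict: "{{ db_instances }}"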
This is fine, but each of these hosts is actually a Solaris virtual zone running on one of a set of physical, clustered nodes. In Solaris by default, listing the processes in the global zone (the top level, ‘parent’ OS instance for the physical server) also shows all the processes running in the local zones attached to that node, but crucially, the owning user IDs for those processes will be displayed numerically because those users don’t exist in the global zone’s namespace. The database administrators find this confusing and ugly; to fix it, we’ll need to create the same accounts with the same UIDs (for all the instances on all hosts) in the global zones. (Note that each instance UID is unique across the entire cluster.)
To achieve this in Ansible, we’ll need to access the db_instances data for all the database hosts, but for use on a different set of hosts. Normally, host variable data is considered specific to that host. My first thought was to reference it via the special hostvars dictionary. That turned out to be a non-starter, since I’d need to loop through all the instances within the hostvars entries for all the relevant hosts, within a third loop applying that to each global zone. Ansible’s task loops are quite extensive but the more complex ones, such as with_nested, operate on separate lists rather than dictionaries. Both the structures in question here are of the latter form, and the elements we want from hostvars are nested. with_subelements can sort of handle this, but not the outermost loop as well. (This is a good reason to sometimes prefer lists for structured data even when a dictionary feels more appropriate - there are more methods available to parse and iterate lists than dictionaries.)
It was then that I discovered playbook delegation, which allows you to run a task intended for the current host in play against a different host altogether. (One of the things I really like about Ansible is that one can always find a suitable filter or module to achieve even quite complex tasks that initially appear insurmountable. Studying the documentation in detail helps, but quite often some facets are only referenced in forum examples and ServerFault answers.)
The example use for delegation given in the manual is to update a central load balancer configuration when a backend node is added or removed. However, I can use it here to run the user creation task against the global zones as well as the database host zone. Here’s another task to do this:
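In outline (a minimal sketch - the shell and home paths here are placeholders rather than the real values):

- name: Create matching dummy accounts in the global zones
  user:
    name: "{{ item.1 }}"
    uid: "{{ db_instances[item.1].uid }}"
    shell: /bin/false           # invalid shell: no logins on the global zones
    home: /export/home/dummy    # placeholder: the accounts could share one home
    createhome: no
    state: present
  delegate_to: "{{ item.0 }}"   # run this against the global zone, not the db host
  with_nested:
    - "{{ global_zones }}"
    - "{{ db_instances.keys() }}"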
It’s critical to note that this task is run against each database host. global_zones is a list of the hostnames for all the nodes; it’s actually taken from an existing Ansible host group, extracted from the groups dictionary. Within a nested loop, we delegate the user creation task to each global zone in turn, looping through the database instances defined for the host and creating ‘dummy’ accounts based on the instance name and assigned UID as before. These dummy accounts have an invalid shell (and could also share a common home directory), since they shouldn’t be used for login or program execution on the global zones; they’re purely to give the ps command something to map numeric UIDs against sensibly.

We loop over the defined instances by extracting the names of the keys from the db_instances dictionary; the keys() operator does this (and is a good example of a feature that is discussed in forums but not covered in the Ansible documentation, since it’s actually a Python dict method inherited from the underlying implementation). Within the loop, we can then use this instance name (assigned to item.1) to look up the uid value in the dictionary.
This solution works well, but one aspect of it makes me slightly uneasy. The use of delegation appears to break the (informal) contract with the Ansible user that a role is normally expected to operate (i.e. change state) only on the host(s) to which it is applied. In this case, we apply the role containing the task to one set of hosts (the database zones) but it actually creates users on a second, unrelated set of hosts (the global zones). True, the global_zones list has to be explicitly defined and passed across, but it feels somewhat non-intuitive. Additionally, a limited set of hosts specified via the --limit option to ansible-playbook will not work as might be expected in this case, since it won’t affect the global_zones list. (I did try using the intersection of this list with the ansible_play_hosts list, but the latter only contains the name of the host currently executing the role at any one time rather than all the hosts in scope; we need an ansible_limit_hosts magic variable too.)
In Ansible terms, it would be more typical if the role were applied to the global zones and referenced data from the database zones - but that gets us back to trawling through hostvars (oh, if only I hadn’t chosen a dictionary!) What I’ve done here isn’t against the letter of our coding standard, but it arguably violates the spirit of the standard significantly (the spirit being that tasks and roles should be as simple and transparent in their use as possible). (As I don’t kowtow religiously to that spirit anyway, I settled for adding a bold warning to the README - caveat emptor!)
]]>The Sinclair ZX Spectrum came out in 1982, just as I turned 12. I’d been aware of home computers since at least the appearance of the ZX80 two years earlier, and can recall playing about with the ZX81 and similar microcomputers in stores: “Playtime’s over, son,” rumbled the salesman in Dixons, flipping the power switch as I sat crosslegged in front of their demo VIC-20 about to start my second round of Galaxians. A week previously, he’d been happy enough to give me his full patter on it, but he was clearly now aggrieved that I hadn’t yet nagged a parent into buying one - for the very good reason that they simply dismissed it as too expensive, which it was at £200 (and it needed its own proprietary tape deck). But yes, I wanted one. How badly? I’d already hung around with some of the geekier lads in the year above, playing with the school’s Commodore PET 2001 in the classroom during breaks. One of the Chemistry teachers owned a Sharp microcomputer, which was a rare possession at the time, and he let a select few of us tentatively prod at it during a couple of lunch hours, although we had to endure some formalised teaching of programming from him (while we mainly wanted to make these new wonder machines print silly messages or rude words). Those early encounters had led to borrowing a paperback that covered basic BASIC programming from my local library, which I studied closely. Following on from an existing interest in electronics, this was a natural progression and yet pleasantly diverting: the application of rigorous logic within an entirely conceptual frame. But I still lacked a ready means of trying any of it out.
In the meantime, I obsessively devoured the burgeoning hobby press for this growing interest - like most hobbies, debating and comparing the various ways to spend money, even if only in your own mind, was half the fun. The magazines were odd compendia of product news, reviews, tips and tutorial articles at the time, each covering a smorgasbord of micro models and companies ranging from well-known international brands to local independent retailers and one man band software sellers. As no one system had yet gained significant market share, they couldn’t favour particular products so had to write about the entire gallimaufry of competing brands and every quirky little plastic box on sale (and quite a few that, while well-known, weren’t available yet and actually would never appear). Your Computer magazine, an entirely representative example of the milieu (you can find scans of old editions online), ran a series on porting programs between various BASIC dialects, alongside pieces on ZX81 machine code, the month-by-month development of a database program for the BBC Micro and several pages of type-in programs, including reams of hex numbers for the machine code ones. It seems faintly incredible now that people used to sit at their cherished devices tapping in streams of opaque hieroglyphs, and further hours trying fruitlessly to identify the mistakes in them (some of which could be down to not merely a miskeyed entry but actual printing errors, meaning they had almost no hope of ever making it work). One featured program was for a flight simulator which, over the course of a very long evening, my friend, his father and I could not get to work on their borrowed Acorn Atom - but we stuck at it because we really wanted to see that flight sim.
There were multipage promotions from the major makers, extolling the virtues of their particular models, and sincere price promises from the newly minted Thatcherite wide boys hastily opening slick retail premises to shift boxes (although if you didn’t have the readies to hand then of course, “playtime’s over, son”), alongside quarter page ads placed by the archetypal lone bedroom ‘entrepreneurs’ bashing out ten “puzzle games” on a C30 cassette that faithfully promised a world of entertainment from some fairly rudimentary text handling. One company urged the new ZX81 owner to “stop playing boring games” and use their advanced new technology to run a “database filing system” for sheer unadulterated thrills instead. But part of what the magazines had uncovered was the enormous latent demand by ordinary people to own multifunctional devices that they could direct to their own purposes - to do things that only the malleability of software allows (today we have vast app stores and composable web services, and you don’t need to learn how to code). Clearly, nobody in their right mind would want to run a database off cassette-based storage, yet individuals were beginning to do that in preference to tedious or unworkable manual solutions. This was a step beyond previous consumer product booms such as hifi, although it had something in common with the DIY craze that arose in the early seventies.
‘I just had this instinctive feeling that nothing this cool could be useless.’ (Guy Kewney)
Over time, this charming amateurishness gave way to a wide range of titles each targeting both a specific system and a particular readership. The market for ‘thinly aimed at teenage boys’ was especially large; of magazines such as “Load Runner”, “Big K” and “Crash”, only the first named was upfront enough to call itself a computer comic. Those of us who read too many of these publications had a distressing tendency to believe that talking like a computer - dropping words like ‘input’ and ‘output’ into normal conversation - would not somehow invite derision. Even the prevailing use of BASIC (boo, hiss) in the program listings and tutorials lent an undercurrent of exotic Americanisation and an unfamiliar entrepreneurial spirit, with its liberal use of the dollar sign and ruthless categorisation of data as ‘strings’ or ‘integers’.
Yet still the entry price for hopping onboard this craze appeared to start upwards of two hundred pounds (at least if you wanted to play - or, being generous, write - games in colour). Until May 1982, that is, which happily coincided with my twelfth birthday. A 16K ZX Spectrum at £125 was just about within the realms of parental indulgence, if I played the usual ploys of “I’ll have it for birthday and Christmas!” and “I’ll never ask for anything again!” [Narrator: “Neither of these were true.”] It took seemingly forever to arrive, though it might actually have been anything from six weeks to three months (Sinclair had massive shipping delays at first, and I suspect it also took a month or two initially to reassure my dad that his money wouldn’t just disappear).
I filled the time covering pages of notepads with my own fledgling BASIC programs (partly because I still hadn’t grasped the use of FOR..NEXT and was thus unrolling all my loops in long hand) and then, when the coveted Spectrum still hadn’t arrived, I resorted to ‘entering’ them on the full page photo in the Sinclair brochure “to practice my typing”. I was jonesing for my fix bad. Around this time, I recall buying the August 1982 issue of Your Computer as “beach reading” (sic) for a day out at Rhyl, to satiate my gnawing impatience at the continued wait. (Why yes, I was a pale and slight adolescent forever getting sand kicked in my face.)
While I was waiting for my personal holy grail to arrive, an entire panoply of low priced, ‘fully-featured’ competing microcomputer models flooded on to the market to exploit the demand newly exposed by Sinclair’s latest success. I’m sure that at least a few owners of the Video Genie, Newbrain, Oric-1, Jupiter Ace or Atari 400 were pleased with their purchases, but the companies behind them might as well have farted into a paper bag for all the lasting impact they would have on the home computing scene in the UK. By the end of the year, it was clear that there were only three main players in town - Sinclair, Commodore and Acorn - and the nascent software industry necessarily coalesced around them. (Jupiter devoted most of their adverts to justifying the novel use of FORTH in the Ace, as if writing code rather than running it was the main selling point. Nevertheless, the impression persisted that if your parents bought you one of these esoteric outliers, they probably also ate muesli and banned you from watching ITV.)
Eventually, my ZX Spectrum arrived. I can’t recall exactly when it came or how it felt, but I suspect no first time heroin shooter ever matched the rush of that day. I wrote my little programs and I played the sample “Breakout” game supplied by Sinclair on their demo cassette endlessly. (I once wrote an entire database program in a morning - no idea why other than because I could - and then couldn’t save it because the tape recorder was faulty. And my mother agonised and then decided not to tell me about the brand new cassette deck hidden upstairs for Xmas.) A teacher and fellow enthusiast at school arranged a trip to a regional computer show for some of us, and for the princely fiver I had scraped together (or possibly cadged), I bought my first game, Artic’s Gobbleman, an early Pacman clone that at least hadn’t been written in BASIC and was thus a ‘proper’ program. But in slowly collecting more and more games, I rapidly discovered the limitations of my basic Spectrum.
‘When you’ve got a computer and you want to make something, you can do it. You’ve got everything there. And I loved the fact that I could have an idea and I could immediately put it into action.’
Tom Lean makes the excellent point that computing offered a hobby which was largely self-contained and theoretically less prone to ‘acquisition syndrome’ than most others. Once you had the computer, almost everything you could want to do with it was immediately possible, only requiring your imagination, time and intellectual nous. I was a failed veteran of both model railwaying and DIY electronics already (shockingly predictable, I know), pursuits that each demanded endless supplies of ‘stuff you don’t already own’ before you could achieve much.1 But here at last was a gizmo that was complete in and of itself, that only needed the application of logic and creative insight to conjure anything I could think of into existence. Oh, and some extra RAM, because it quickly turned out there weren’t many games that worked in only 16K. Oh, and a joystick. And a joystick interface (because the Spectrum hardware was really minimal). Actually, an endless succession of joysticks (surely there must be one that will let me win at games?!). And a Microdrive now they’re available. And maybe a proper keyboard. Or a printer. And an Assembler program, because I’m bored with BASIC after writing half a dozen ropey games and I need to learn Z80 machine code to be a games developer. Although actually, Z80 machine code is reeelly hard, but it’ll be much easier on an entirely new computer with better graphics and sound, Dad. (I’ve no idea how I ever perpetrated this last outrageous con to the tune of a £200 Commodore 64 - rising Eighties middle class affluence innit - but I went on to repeat it twice more, although at least I used my own wages and only fooled myself on those later occasions.)
In today’s era of compiled high level languages, Object Orientation, IDEs, frameworks, toolkits and modern 64 bit platforms, we’ve largely forgotten how compromised and arcane early microcomputer architectures were. Clever hacks were creatively employed, and I’m sure for solid reasons, to produce the capabilities that the market demanded from low end hardware, but none of them made life easy for assembly language programmers looking to hit the bare metal to turn out arcade quality games. The ZX Spectrum’s display memory map was laid out in three sections of eight character rows, each comprising the top row of pixels for each successive row of characters in that section of screen, then the next row of pixels for each character row, etc. (This was quite apart from the colour layout, which was a sequential array of character squares superimposed on the hi-res screen - hence the famous ‘attribute clash’ when objects of differing colours met onscreen, since the character space they shared could only have one assigned foreground colour.) So that’s a screen split into thirds, each of which is split again into segmented rows of characters. Imagine trying to animate a game object moving vertically through that space, one pixel row at a time - in Z80 assembly! In fact, there’s some nifty hexadecimal maths you can use to do this, but there’s no doubt that the various code explanations are frankly convoluted on first glance - as well as subsequent horrified glances. (I tried to find a link that illustrated the screen layout to explain it better, but drew a blank. Not surprised really. The best way to understand it is to observe a loading screen in any emulated Spectrum game.)
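For the record, the maths boils down to something like this (my own sketch in BASIC terms, with x as the pixel column, y as the pixel row and the display file starting at 16384):

REM display file address for pixel (x, y) - x in 0-255, y in 0-191
LET addr = 16384 + 32*(y AND 192) + 256*(y AND 7) + 4*(y AND 56) + INT(x/8)

The three groups of bits in y select the third of the screen, the pixel line within a character square and the character row respectively - obvious once written down, baffling until then.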
Screw this, thinks the aspiring games maker. I’ll get a C64 instead, they’re meant for gaming. And it’s true, the Commodore 64 had hardware sprites and a proper sound chip, and you could even program them from BASIC by POKEing values into memory addresses - in fact, that was the only way to program them in any language. C64 programmers usually graduated to assembly language in a very short space of time, because Commodore BASIC was so awful as to be little different from it. But hold on, the C64 uses banks of memory and so-called ‘shadow RAM’ - its 20K of ROM is actually overlaid in 4K blocks on the 64K RAM. This is a plus in one sense, in that you can copy the ROM to the underlying RAM, then switch out that bank and modify the copy or use the shadow RAM for something else entirely, but it also adds the complications of managing all these disconnected bits of memory, not all of which is accessible at once by the various custom chips. Add to which, it turns out that 6502 assembly is even more primitive than Z80.2 Good luck, kid! Oh and by the way, the Internet isn’t generally available yet. You can’t google the answers. You’ll have to buy these huge thick programmers’ reference guides and figure it out. (Until machine-specific coding books appeared, I regularly saw recommendations to use generic Z80 or 6502 programmer texts, such as those by Rodnay Zaks, which is a bit like telling someone to learn how to drive by studying Haynes manuals.)
Needless to say, I ended up owning Toni Baker’s classic Mastering Machine Code on the ZX Spectrum, the Commodore 64 Programmer’s Reference Guide and at least two thick system programming references for the Amiga. (You can find scans of all of these online now, btw, in case you run out of insomnia meds.) I knew the lore of programming these systems at a low level almost off by heart by dint of hours spent poring over such tomes, but it was entirely theoretical - absolutely none of it ever came to be applied on a practical level. I didn’t even get round to buying and then never using an assembler program for the C64. Simply reading about it became sufficient. Such is the curse of being bright but staggeringly lazy. (This is the point where most functioning people can write, “However, at about this time I discovered girls instead.” Yeah, that didn’t happen either.)
How then, did all this useless esoterica and unapplied erudition benefit us? Simply, unlike the previous generation, we were the first to generally understand that computers were not the all-powerful, all-knowing, ‘intelligent’ entities of popular sci-fi - that they were in fact dumb as rocks. But if you could command the rocks into life and make them perform the most basic tasks, they would be able to do them much faster and more accurately than any person. And if you could break any complex task down into its most elemental steps, it could do that too. The earliest home computer games were based entirely on the simplest possible manipulation of screen characters - moving single ASCII symbols around one block at a time in response to either player command or some inner brutal logic:
410 IF player_position > alien_position THEN LET alien_position = alien_position - 1
Within two or three years, game sprites were obeying the laws of mass, inertia, gravity and thermodynamics - simulated entirely by self-taught teenagers working at the lowest level of CPU programming (though I wasn’t one of them, obviously). I’m not going to point at modern teens and sneer about vlogging by comparison, as I think that also demands a similar level of skill in psychology, presentation and narrative, but this was probably the only time in history at which ‘deep science’ subjects were broadly comprehended by a significant part of a generation. This stuff was quite literally rocket science.
Moreover, for me personally, my maths finally started to improve. Content to bump along with the minimal adequate effort in a middle set, I suddenly decided maths might actually be useful in future and began to pay attention. (I learnt trig from the ZX Spectrum manual before the class covered it.) Miss B, bless her soul, pushed my lazy arse with extra tuition to get me moved up to the top set and eventually, O and A Level grades. (And yes, I work in IT now, although the gulf between what I did on my ZX Spectrum and what I do now is incomprehensibly vast.)
As the magazines began to target specific platforms, I lapped up all their technical articles and avidly read the instalments of their “programmer’s diary” features (a genre surely rivalled only by “Journal of a Chartered Accountant” and “The Librarian’s Day” for exotic thrills), such as Andrew Braybrook’s wonderful Birth of a Paradroid in the otherwise risible Zzap!64. The newly launched Your Spectrum mag set out their stall early on with a series of tutorials by Toni Baker on writing machine code for the Spectrum; prior to this, we’d had to endure the excruciatingly worthy Sinclair User with their ‘User of the month’ feature, about people who’d use their ZX printer to print their shopping lists.3 (Worse, they often put the user on their cover, which was frequently embarrassing when classmates found it amusing to identify me with the person in question. The toddler was bad enough, but the stick I got for the Morris Dancer one - the resemblance was plain, apart from the costume - was mortifying. But then I was naïve enough to be seen with it openly in school.)
“People could program any of the new machines, but the expectation was that typical users would just use them as software players.”
The Amiga, when it eventually became generally available, was potentially easier to develop software on, as it had custom chips that could perform a lot of the work independently, a proper 16 bit architecture and a well-developed set of OS libraries to handle the basic tasks - but by that point, the complexity and expectations of the typical game were almost beyond what one person could hope to satisfy. Although this generation of hardware would be the peak of the home computing market, it also marked the slow death of the amateur ethos that had lingered from the early days. From here on, the market split into gaming consoles and desktop PCs, both of which were basically appliances for running purchased software. (Those who still enjoyed typing stuff in were by now discovering UNIX at university, which would lead them to Linux so they could run it at home.) In the face of the new 16 bit machines, the makers of the older 8 bit computers attempted to launch enhanced versions - the Commodore 128, Spectrum+, etc - but found themselves hampered by the petrified architecture of their ancestors and the need to maintain compatibility with the existing software base. This was before any thought of “graceful degradation” or API versioning (or, indeed, APIs), and so the legacy system was usually bolted on as a ‘compatibility mode’ in a way that even Dr Frankenstein would think inelegant.
To read Lean’s history, one might think that home computing was a pervasive craze among all youngsters in the early 1980s. It was significantly popular - mainly for the opportunity to play arcade games and the easy way one could copy and swap those games with others - but to think this held true across the entire playground would be hugely mistaken. In my experience, the kids who were most obsessive about these new devices were the ones who would have been obsessed with any highly technical, esoteric and largely insular hobby. Those users who were actually doing any serious programming were a select few; those working commercially even fewer; and of those, probably only a handful writing code to the level of the top games. I don’t recall much interest in programming amongst my peers, but then others sharing that interest in my year may also have been similarly introverted and quiet, and if so, they were probably wiser than me in keeping it to themselves (whereas I might as well have been wearing a “Geek, kick me” t-shirt). A little later, when the school computer room opened, offering a new source of breaktime respite to those who had previously concealed themselves in the library, I was the only member of my year regularly in attendance. There were some younger boys, who seemed most interested in provoking irritation, and a group of older know-alls who would pronounce with authority on the subject, some of whom were even mature enough to talk to girls with passable confidence, which to me was akin to having a superpower. Not that we saw many girls in the computer room during breaks - if it hadn’t been for formal lessons, almost none of them would ever have got near a BBC Micro. (The lessons weren’t available to the upper years, probably because the school didn’t offer O level Computer Studies and nobody could be bothered to integrate IT into their traditional exam curricula.)
It was through this lab that I first encountered computer networking, which was initially seen within the industry as a way to share expensive printers and disk drives across several machines rather than to allow user communication. Once again, I hit the system programming guides to see if it was possible to hack into remote BBCs using Acorn’s ‘Econet’ protocol and make them misbehave, but it seemed it was insufficiently vulnerable to allow that without collusion. We went back to distracting Mr M., who was nominally ‘in charge’ of the room, so we could liberate the floppy disks holding his games collection (a session on Elite being the main bribe used to get kids to complete their computer work). The worst that could happen was being identified by a teacher as ‘someone good with computers’ and therefore liable to being pressed into service on whatever IT project they had in mind, most of which were viewed as cringeworthy and twee by those of us who found ourselves volunteered. It may seem strange and hypocritical that a bunch of computer geeks could develop any concept of what was ‘uncool’, but there were definite lines drawn within our little sect. Publishing a school newsletter: yuk. Hacking the caveman game FRAK! so that he uttered a similar but different four-letter word: cool.
Weirdly though, a shared interest in computer games, even though I wasn’t particularly good at or even especially keen on playing them, finally kickstarted some sort of social life outside of school. Friends would come over for late evening sessions of Revenge of the Mutant Camels or, even better, 2-up Double Dragon. (Jetpac, in which the aliens exploded with short farty noises, was a particular favourite when we discovered that hammering the pause key during an explosion could draw this out into a prolonged raspberry, to great hilarity.) This would eventually lead in sixth form, when we were old enough to get away with underage drinking, to nights out at the pub instead. (It was at this point that I discovered the undemanding joys of ‘having a laugh with your mates’, and computers finally began to take a healthier backseat to everything else in adolescence - except girls. Plus ça change.)
My last ‘home’ computer was an Amiga A1200, with an internal hard drive, bought with the seeming riches earned from my first post-graduation job as the closest thing I could get to the UNIX workstations I used at work - you could even download a version of C-shell for it (although it wouldn’t run Minix due to the lack of a Memory Management Unit in the pared-down CPU used, and believe me I tried). Unfortunately, the hard drive - an inconceivably expansive 80MB Western Digital model - had an infuriating habit of locking up when it became warm, making it very frustrating to use. There was still no widely available Internet access outside of academia (not that the Amiga supported TCP/IP either, and even on Windows it was a dreadful hack), and I still didn’t find the time to do any proper programming. When I moved on to a non-academic job, I bought a cheap Macintosh and I don’t now recall what happened to that Amiga.
One of the most pleasant surprises of writing this post was the discovery that it’s (almost) all still around. There are images of all the old games across various sites that you can download to run under faithful emulation on the platform of your choice. (Manic Miner on a tiny smartphone screen? Seems crazy but you can do it. Even Gobbleman is out there.) There are native cloned versions of the most popular ones if you prefer that. There are lots of scanned magazines going back to the earliest days of the hobby, and PDF scans of complete books. There are even people still writing games for these systems today. I suppose working in a modern development environment must feel a bit like skimming the outermost atmosphere of a particularly large, murky planet with an unfathomable ecology beneath, whereas an 8 bit computer is like a decently sized asteroid whose entire structure you can understand inside and out within a reasonable time. At this degree of enlargement, one realises fundamentally that everything a computer does only ever involves, at the most basic level, adding small numbers together.
Tom Lean’s laudable book brought all this back to me. I pulled up a few emulators and tried out the old games again. And I’m still useless at them, and I still can’t be bothered to persist with any but the very easiest for more than ten minutes. I read a few modern blog posts on writing machine code games for these ancient platforms - and quickly realised that I still have neither the time nor the inclination to go very far down that rabbit hole. I even finally tried a bit of online 6502 assembly programming. But what an agreeable Proustian rush it’s been to look back on those pioneer days. Thanks, Tom.
(All highlighted quotes taken from the book.)
Honourable mentions: Impossible Mission, Beam Rider, Elite.
I’ve still yet to see the fabled ‘pattern makers dowels’ referenced in every guide to joining model railway baseboards together, and I carted two such boards around for several years, fated never to be linked in the ongoing absence of these items and my apparent inability to consider alternatives.↩
The 6502 has nothing like the Z80’s LDIR, for example, which can shift up to 64K of data with a single instruction.↩
…He said sneeringly. Actually, I just dug out the article I was thinking of. It’s from p.66 of Issue 16 of Sinclair User from July 1983: an interview with a lady called Mrs Celia Sims who has a 1K ZX81 for her and her two sons. Although the journalist appears to be forcing a story out of not-very-much, on rereading I find Mrs Sims to be one of the most progressive and insightful computer users I’ve encountered, and a rare voice in computing both then and now. But while SU’s selected users showed an overlooked diversity within the hobby, they were typically also middle class parents and carers keen to give their kids a further leg up from their already privileged positions.↩
This appears difficult because registered variables aren’t obviously designed to work with lists and you want the task to check the results in one list variable while processing elements from another data structure.
The key is to realise that register absolutely will work with a list or dictionary and that the resulting registered variable (which contains a dictionary) will store each element of the original list alongside the task results.
In this specific context, I wanted to set a number of resource controls in /etc/project on a Solaris 11 client. Ansible doesn’t yet have a module to manipulate these natively, so you need to execute the Solaris projadd/projmod commands. Annoyingly, you can add a new project or modify an existing one, but there is no semantic for “create this project if it doesn’t exist but modify its attributes if it does” (hint to Oracle: add a flag for ‘create project if not present’ to the projmod command). Therefore, first of all we need to determine if the project(s) we want to create already exist. In this case, I have a list of database instances which each have a dedicated project and a number of associated attributes.
The instances are listed in a YAML dictionary keyed by the name, with specific per-instance resources (such as maximum shared memory) stored as values. E.g.:
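(Sketched with representative names and values:)

db_instances:
  PRODDB1:
    max_shm_memory: 16GB    # illustrative per-instance resource value
  PRODDB2:
    max_shm_memory: 8GB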
etc.
Now we iterate over this dictionary to see if a project exists for each instance (naturally, the projects are named after the instances):
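A minimal sketch of that check (changed_when is my own habit here, to stop a read-only check reporting changes):

- name: Check whether a project exists for each instance
  command: projects -l {{ item.key }}
  register: proj
  ignore_errors: yes
  changed_when: false
  with_dict: "{{ db_instances }}"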
(Note that we need to ignore errors, since we’re expecting that at least some of the projects may not exist.)
The proj variable will contain a list of results from the command, one for each instance (strictly, each execution). If we display it using the debug module, we can see that each result also contains a complete copy of the subject element from the original dictionary, including all the values:
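(Abridged, with representative values - the real output carries many more fields per result:)

- debug: var=proj

"proj": {
    "results": [
        {
            "cmd": ["projects", "-l", "PRODDB1"],
            "rc": 1,
            "item": {
                "key": "PRODDB1",
                "value": {
                    "max_shm_memory": "16GB"
                }
            },
            ...
        },
        ...
    ]
}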
Now we want to run the projadd command for each instance where the rc value was non-zero (meaning that the project didn’t already exist, as the projects -l command returned an error).
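In outline (a sketch - the second, hardcoded resource control is purely illustrative):

- name: Create projects for instances that lack one
  command: projadd -K "project.max-shm-memory=(privileged,{{ item.item.value.max_shm_memory }},deny)" -K "process.max-file-descriptor=(basic,8192,deny)" {{ item.item.key }}
  with_items: "{{ proj.results }}"
  when: item.rc != 0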
In this case, that means iterating over the list of results (not the original list of instances; the results reference all the elements of the instance dictionary anyway), so that we can test the rc value for each one. If it’s non-zero, the command is executed. Within the command arguments, we can refer to the instance attributes by utilising the unwieldy but correct names item.item.key (the key of the original item from the instance dictionary that is stored in this item of our list) and item.item.value.valuename (a value associated with that key). (Note that most of the resource limits are hardcoded in this example, but there’s no reason they couldn’t be taken from the instances dictionary with - at the expense of increased verbosity - default values as fallbacks.)
For bonus marks, we can code a second pass to execute projmod for instances where the project already exists, to reset the resource attributes. This may mean resetting them to the same value on each run, which is an unfortunate overhead but no harm done. (As an alternative shortcut, we could execute projadd in the initial registered task and then run projmod for all the results where that failed because the project already existed.)
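That second pass would look much the same (again a sketch, mirroring the projadd task above with the same illustrative resource control):

- name: Reset resource attributes on existing projects
  command: projmod -K "project.max-shm-memory=(privileged,{{ item.item.value.max_shm_memory }},deny)" {{ item.item.key }}
  with_items: "{{ proj.results }}"
  when: item.rc == 0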
Note that this technique will also work on lists or any other iterable data structure. (Tested on Ansible 2.1, but it ought to work on most recent releases.)
Half of these were the result of browsing a single post from the illustrious Quad Royal blog. Praise be for the easy availability of used and discounted books online, and let’s be grateful that at least this little spree didn’t involve the gorgeous but often pricey vintage posters that are the usual currency of that site.
Aside from those, Designing the Seaside is a hefty paperback tome with glossy pages and plenty of colour images that covers the entire panoply of seaside architecture and fixed ephemera, including hotels, piers, shelters, pools and anything else that’s become heavily blurred by many, many coats of white or pale blue paint over salt-induced rust.
Naturally, you get a good sampling of classic travel posters. Author Fred Gray has made a comprehensive, scholarly study of his subject with plenty of well chosen case studies (including the freeze-framed gradual collapse of Brighton’s West Pier), but the text remains readable and the picture selection will keep you turning the pages.
“James Ravilious’s great arcadian elegy of Devon is surely one of photography’s great accomplishments, a monument like August Sander’s People of the 20th Century or Eugene Atget’s Paris streets. It stands as a collective masterpiece and a high water mark, like the string quartets of Haydn or Ella Fitzgerald’s songbooks in music.”
- Mike Johnston, The Online Photographer
However, my best find was this copy of the James Ravilious monograph, An English Eye. As you might guess from that distinctive surname, James was the son of the artist Eric Ravilious; in fact, he even tried to adopt an assumed name on applying to art school to avoid the presumptions that came with the family one (but was recognised at the interview). More interestingly, the notable mid-century English photographer Edwin Smith and his wife Olive Cook were family friends and, although Ravilious did not pursue photography until after Smith’s death, he shares the latter’s intuitive understanding of the cadences of the black and white image.
The photographs highlight an overlooked area of the nation and a way of life that was fast disappearing even as James Ravilious meticulously documented it. As well as a fine selection of his work, drawn mainly from his roving brief across North Devon but extending into the continent, you get some interesting essays covering his life and techniques.
Recommended to anyone with an appreciation for strong monochrome photojournalism or an interest in the byways of 20th Century Britain. Publisher Bardwell Press is currently selling off the remaining copies of this book for a hundred pounds a pop, but I found one via Blackwell’s for a bit less than the original RRP - a steal at less than a fifth of the going rate. (Unfortunately, they’ve since reverted to the publisher’s mark-up.) Worth a look to see if any other booksellers have yet to update their price stickers.
If you’re disappointed at missing out on this, let me direct you instead towards the recent RIBA reprint of Evocations of Place: The Photography of Edwin Smith, a fine alternative that’s still easily obtainable at retail.
(Images shown remain the copyright of their respective owners.)
]]>In Autumn, I should be sat by the fireside in a big armchair inside a cosy rural pub, resting a pint on my full belly with a large empty plate on the table in front of me.
In Winter, I must be at home, with the heating on, a blanket over my legs because the heating isn’t sufficient, something good on the telly and a large bowl of something else warm and stodgy in front of me.
In Spring, I should be sat outside near the coast, with a fresh breeze on my face and a decent picnic spread nearby.
I need you to understand that anything else you want from me at such times is an unwelcome imposition, and I am deeply disappointed that you feel it acceptable even to ask.
[Note to self: probably get some exercise at some point too, with all that food.]
]]>The destruction of the original Arch as part of the rebuild of Euston railway station by the British Transport Commission drew great protest at the time, not least from John Betjeman, for whom it was a rehearsal for his later, successful campaign to save St Pancras station. Alongside the poet, several influential voices in the architecture profession came out against the proposal, and a deputation that included J.M. Richards, the editor of the Architectural Review, met with Harold Macmillan to lobby for the Arch’s preservation. Famously, Richards recorded that Macmillan “sat without moving with his eyes apparently closed” and “said nothing” throughout their pleas - leading to the popular supposition that he fell asleep and had no interest in the matter, the government’s mind already being decided.
But as cabinetroom shows, Macmillan was sufficiently stirred to take the matter back to Cabinet for reconsideration. Notably, the Victorian Society had found a contractor prepared to move the Arch closer to Euston Road on rollers, which seems like a fantastical idea (although it has subsequently been done with that lighthouse, of course…). But the government was generally against saving the Arch owing to the cost, some of which they expected would fall on the public purse, and the additional delay to construction. It’s likely that the further discussion focused on nothing more than the potential political fallout of ignoring the protesters - and a bunch of middle-class aesthetes are always easier to ignore than most in politics. The decision was re-affirmed and the demolition went ahead. (Apparently, the contractor even offered to dismantle and rebuild the Arch at their own expense, but the government allegedly dismissed any potential new site out of hand. Most of the stonework was acquired by British Waterways to fill a chasm in the bed of the River Lea, until it was rediscovered by Dan Cruickshank in 1994 and later recovered in 2009 during work related to the Olympic Park.)
The relevant bodies were stirred to arms again by the proposed rebuilding of St Pancras and Kings Cross stations shortly afterwards, and this time their entreaties prevailed - possibly because station buildings already demonstrably serving their purpose were harder to condemn than a grand but largely ornamental edifice of no current practical value. Ironically, the recent proposed reconstruction of a similar arch in a more prominent location, between the existing historic lodges on Euston Road now in use as pubs, has been decried on the grounds that it would obstruct the entrances to the latter and force them to reconfigure the doorways.
Ten years before the Arch debate, an earlier Conservative government was also involved in the demolition of a much-loved national landmark, albeit one of briefer duration. Winston Churchill is typically fingered as the villain in the destruction and clearance of the 1951 Festival of Britain site on the South Bank, including the iconic Dome of Discovery and Skylon, the suspended steel and aluminium needle point tower; most writers note that he saw the Festival as “three-dimensional socialist propaganda”, and signed the clearance order as his first act on returning to office. But I wondered if this was not a dewy-eyed revision of history, similar to the tales of Macmillan snoozing oblivious to the desecration of Euston. Most of the South Bank Festival, with the exception of the Royal Festival Hall, was designed to be ephemeral and temporary, and probably would not have withstood prolonged use without further attention. The process of dissolving the organisation had begun almost as soon as the Festival itself started, with the dismissal of chief architect Hugh Casson, and continued immediately after its close with further redundancies. Physical demolition may have simply been outstanding business for the government of the day, postponed by the exigencies of a general election, and therefore increasingly urgent and requiring no further debate by the time the Tories took office a month later (it’s not immediately clear to me what pressure was on the site at the time, so it’s difficult to judge whether this haste was unseemly, although given that much of it reverted to wasteland and car parking in the aftermath, this seems probable).
When General Lord Ismay, as chairman of the Festival Committee, had previously met his former chief seeking his help in taming the Beaverbrook press campaign against the event in the run-up, Churchill had conceded, “All right Pug, you old fool, you can have your damned festival.” Perhaps at that stage he did not realise what form it would take. There again, Churchill later wrote, in a letter to former deputy PM Herbert Morrison to notify that they would be keeping the Battersea Pleasure Gardens open for the interim, “out of our love for you we are going to do what you wish - also to try to get a little of the money back that was wasted.” [quoted in Harriet Atkinson’s The Festival Of Britain: A Land and its People]. (In the event, the gardens would linger on for a further twenty-five years, although in increasing disrepair due to minimal funding. They can be seen in an episode of The Prisoner entitled “The Girl Who Was Death”.) This may have been Churchill having a wicked little joke at the expense of Morrison (the government considered the Festival a prime example of Labour’s ‘squandermania’), but it reads more like the sour grunt of a resentful curmudgeon.
Five months later, the contractors were on site at the South Bank. In a twist for posterity, one mythical resting place of Skylon’s remains became conflated with that of the Euston Arch, in the River Lea. (Others believed that Skylon was toppled into the Thames and lies there still. In a 1994 television programme, Dan Cruickshank found no remnants of Skylon beyond a brass ring originally located underneath it, which is now on display at the Museum of London.) No, it does rather seem that the Tory government actively wanted rid of it as quickly as possible, and Churchill’s personal antipathy, as churlish and dogmatic as it may seem, certainly appears to have played a part in that.
BB will not be the first to point out how little Conservation is inherent in Conservatism.
The Fountain Restaurant was built in 1960 on the site of a riding school previously acquired by Chester Zoo. Its low, flat-roofed form and squared design, albeit carefully tapering outwards from the central water feature and gardens, mark it as a pleasing example of Fifties Modern architecture, of a piece with contemporaneous works such as the Pennine Tower at Forton Services on the M6 and some of BR’s rebuilt midsized stations of the period (e.g. Radcliffe Central).
Once the centrepiece of the zoo and the favoured picnic spot for many visitors, the writing was on the wall for it when the main entrance was relocated from the vicinity behind the building to the other side of the park, and today it can appear to the modern visitor as somewhat of a backwater on the way to the lions. Latterly, it lost the catering function (except for a snack kiosk at the rear) and was converted into a gift shop, probably when the Ark Restaurant (redeveloped as June’s Pavilion in 2011) opened near the location of the old entrance gates. Most likely the building’s layout and location, partway into the site, made it unsuited to a modern food facility. However, the vibrant flower displays that are a highlight of the postcard view above do seem to have become rather toned down now and there is a sense that the times have moved on from this unfairly neglected gem in the midst of the zoo - on each visit, I find myself wondering if the fountain and shop are fated to be the subject of the next ‘exciting’ redevelopment, which would be a pity. To be fair, Chester is amply supplied with appealing lunch spots and with the fountain now further from the entrance, you’re more likely to be ready for a break by the time you reach it - although you might equally have reached some other distant point on this extensive site by then. And of course, unless you have food with you, the nearest source of comestibles is not immediately apparent. If they’re not already poised to redraw this attractive corner of their estate, there’s a promising opportunity here for the Zoo’s management to refresh a key piece of its legacy and give a focal point to their heritage aspects.
[Disclaimer: Having moved away from the area, I haven’t visited the Zoo for a number of years now and it’s possible that matters have moved on from the state of affairs above, for better or worse.]
]]>