Ansible now contains a decent set of modules for managing virtual machines in VMware vSphere. As ever with Ansible, the key is not so much in knowing how to use these modules (which the docs explain fairly clearly) as in knowing how to organise the playbooks that call them. Here’s one example based on our own recent practice.
To use this, you should be running at least Ansible 2.7.5, as the vmware_guest module was broken in earlier 2.7 releases.
I assume you already have the prerequisites, including an account with
administrator privileges on your vCenter, a basic knowledge of how to
create a new VM in vSphere, and so forth. To enable us to create new VMs
for the systems we need to manage with our playbooks, first of
all we wrap the
vmware_guest module in a role. The role uses a combination of sensible default values, global variables for common settings, and per-host variables specific to the VM in question. For our
own purposes, we only need to be concerned with basic Linux VMs of a mostly
similar specification, so we don’t worry about customising the
configuration for different OS platforms.
For example, the role defaults might be:
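A sketch of what those role defaults might look like — the variable names and values here are illustrative assumptions, not a definitive layout:

```yaml
# roles/vm_create/defaults/main.yml
# Illustrative defaults; names are assumptions for this sketch.
vm_scsi_type: paravirtual   # standard VM SCSI controller
vm_firmware: bios           # firmware type
vm_disk_type: thin          # disk provisioning
vm_hw_version: 13           # virtual hardware version
vm_net_device: vmxnet3      # network device type
```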
This defines our standard VM SCSI controller, firmware, disk provisioning, hardware version and network device (all of these are compatible with CentOS, for example).
The global settings are defined in the group variables for ‘all’ hosts, and specify the local vCenter, site-specific names like the vSphere data centre and overall common settings for the VMware modules:
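Such group variables might look something like this — the hostnames and the environment lookup are assumptions for the sketch, with the lookup standing in for "the logged-in ID of the person running the playbook":

```yaml
# group_vars/all.yml
# Illustrative values; substitute your own vCenter and data centre names.
vcenter_hostname: vcenter.example.com
vcenter_datacenter: DC1
vcenter_username: "{{ lookup('env', 'USER') }}"   # current logged-in user
vcenter_validate_certs: true
```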
Here we authenticate to the vCenter using our central directory, so we use the logged-in ID of the person running the playbook as the VMware username. Alternatively, you can create a specific account with limited privileges in vSphere for Ansible to use.
Finally, we configure the VM details in the host variables file under
host_vars:
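A per-host sketch along these lines (all names, sizes and the cluster/datastore values are assumptions) defines the vmware_vms list the role loops over:

```yaml
# host_vars/web01.yml
# Illustrative VM spec; note the folder is prefixed with
# the data centre name followed by '/vm'.
vmware_vms:
  - name: web01
    folder: /DC1/vm/Linux
    cluster: Cluster1
    datastore: datastore1
    network: VM Network
    memory_mb: 4096
    num_cpus: 2
    disk_gb: 40
```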
(VM folder names are prefixed with the data centre name followed by ‘/vm’. Note that in practice with this structure, one can define several VMs together in a list - e.g. within the group variables - but this is not necessary. In most cases, it’s probably cleaner to separate the VM configs by individual host.)
In the top level playbook, we also need to request the password for the vCenter user (or fetch it from a secure vault if using a specific account for Ansible):
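A top-level playbook along these lines (the play and role names are assumptions) prompts for the password and disables fact gathering:

```yaml
# site.yml -- sketch only
- name: Provision VMs in vSphere
  hosts: all
  gather_facts: no          # target VMs may not exist yet
  vars_prompt:
    - name: vcenter_password
      prompt: "vCenter password for {{ vcenter_username }}"
      private: yes
  roles:
    - vm_create             # hypothetical role name
```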
We need to disable fact gathering as the hosts we’re creating may not exist yet so Ansible can’t connect to them.
Finally, we pull all these variables into a task defined within the role:
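The task might be sketched as follows, assuming the variable names used above (the guest_id is also an assumption, chosen to match a CentOS 7 guest):

```yaml
# roles/vm_create/tasks/main.yml -- sketch only
- name: Create VMs in vCenter
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: "{{ vcenter_validate_certs }}"
    datacenter: "{{ vcenter_datacenter }}"
    folder: "{{ item.folder }}"
    name: "{{ item.name }}"
    cluster: "{{ item.cluster }}"
    state: present
    guest_id: centos7_64Guest
    hardware:
      memory_mb: "{{ item.memory_mb }}"
      num_cpus: "{{ item.num_cpus }}"
      scsi: "{{ vm_scsi_type }}"
      version: "{{ vm_hw_version }}"
      boot_firmware: "{{ vm_firmware }}"
    disk:
      - size_gb: "{{ item.disk_gb }}"
        type: "{{ vm_disk_type }}"
        datastore: "{{ item.datastore }}"
    networks:
      - name: "{{ item.network }}"
        device_type: "{{ vm_net_device }}"
  delegate_to: localhost
  loop: "{{ vmware_vms }}"
```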
The key thing here is that the task is delegated to ‘localhost’, i.e. the
Ansible control node, and therefore the connection to the vCenter to
create the VM will occur from the host where Ansible is run. (You can use
a different host such as a dedicated vSphere management server but Ansible
must be able to connect to it and it must have the pyVmomi library
installed.) This task loops through the
vmware_vms list and creates each
VM defined there through the vCenter.
If you change the settings for any VM, Ansible will attempt to modify its configuration in vSphere if possible. For example, you can adjust the allocated memory in a running VM (assuming the guest OS supports it and hot-adding memory is enabled for the VM) but attempting to shrink a virtual disk returns an error.
Currently, due to Ansible bug #34105, vmware_guest
isn't fully idempotent if you're using distributed switches in your
vSphere networking configuration; the task will report ‘changed’ every
time it is run and you will see a “Reconfigure Virtual Machine” task
logged in the vCenter, even if no aspect of the VM has been altered.
(There’s a PR for this bug but it doesn’t appear to have been merged yet.)
If this concerns you, you can first run a
vmware_guest_find task to
search for the listed VMs in vCenter, register a variable and use the
result of that to drive the creation of any VMs that return ‘failed’ (see
my previous post on using multiple values in a registered variable).
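One way to sketch that workaround — the task file name is hypothetical, and this relies on vmware_guest_find failing for a VM that doesn't exist:

```yaml
# Sketch: only create VMs that vmware_guest_find couldn't locate.
- name: Search for existing VMs
  vmware_guest_find:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: "{{ vcenter_validate_certs }}"
    name: "{{ item.name }}"
  delegate_to: localhost
  loop: "{{ vmware_vms }}"
  register: vm_search
  ignore_errors: yes        # a missing VM makes the module fail

- name: Create only the missing VMs
  include_tasks: create_vm.yml   # hypothetical file wrapping vmware_guest
  loop: "{{ vm_search.results }}"
  when: item.failed
```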
Obviously, at this point you’d still need to power on the new VM and
install an OS on it. In fact, you’d probably instead deploy from a
pre-built template, using the
customization parameters of
vmware_guest to configure it. The
vmware_guest_powerstate module could
then be used to power it up and initialise it, followed by
vmware_guest_tools_wait to pause until it’s ready.
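That template-based flow could be sketched like this — the template, VM and domain names are all assumptions:

```yaml
# Sketch: deploy from a pre-built template, power on, wait for tools.
- name: Deploy VM from template
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ vcenter_datacenter }}"
    name: web01
    template: centos7-template
    cluster: Cluster1
    customization:
      hostname: web01
      domain: example.com
  delegate_to: localhost

- name: Power on the new VM
  vmware_guest_powerstate:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    name: web01
    state: powered-on
  delegate_to: localhost

- name: Wait for VMware Tools to report in
  vmware_guest_tools_wait:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    name: web01
  delegate_to: localhost
```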