* A vSphere environment and a user with the required access rights
* An internal web server to store files
* Web access to the Rocky Linux repositories
* An ISO image of Rocky Linux
* An Ansible environment
It is assumed that you have some knowledge of each product mentioned. If not, dig into that documentation before you begin.
Vagrant is not used here. It has been pointed out that Vagrant would provide an SSH key that is not self-signed; you can dig into that if you want, but it is not covered in this document.
Of course, you can adapt this how-to for other hypervisors.
Although we are using the minimal ISO image here, you could choose the DVD image (much bigger and perhaps too big) or the boot image (much smaller and perhaps too small). The choice is yours: it mainly affects the bandwidth needed during the installation, and therefore the provisioning time. We will discuss the impact of this default choice, and how to work around it, further on.

You can also choose not to convert the virtual machine into a template. In that case, you will use Packer to deploy each new VM, which is still quite feasible: an installation from scratch takes less than 10 minutes without any human interaction.
Packer is an open-source virtual machine imaging tool, released under the MPL 2.0 license and created by HashiCorp. It helps you automate the creation of virtual machine images, with a pre-configured operating system and installed software, from a single source configuration, in both cloud and on-premises virtualized environments.
With Packer, you can create images for a wide range of platforms, from public cloud providers to on-premises hypervisors such as VMware vSphere, which is the one used here.
In the following examples, the assumption is that you are on a Linux system.
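If Packer is not already installed on your Linux machine, you can download the binary from HashiCorp's release server. The version shown below is only an example; check the Packer download page for the current release:

```bash
# Download and install the Packer binary (adjust the version number to the current release)
curl -O https://releases.hashicorp.com/packer/1.7.4/packer_1.7.4_linux_amd64.zip
unzip packer_1.7.4_linux_amd64.zip
sudo mv packer /usr/local/bin/
packer --version
```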
As we will connect to a VMware vCenter Server to send our commands via Packer, we need to store our credentials outside the configuration files which we will create next.
Let us create a hidden file with our credentials in our home directory. This is a json file:
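For example, a file such as `~/.vsphere-secrets.json` (the file name and values here are only illustrative) could contain:

```json
{
    "vcenter_username": "packer@vsphere.local",
    "vcenter_password": "mysecurepassword"
}
```

This file can then be passed to Packer on the command line with its `-var-file` option, so that the credentials never appear in the configuration file itself.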
These credentials must be granted sufficient access to your vSphere environment.
Let us create a json file (in the future, the format of this file will change to HCL):
{"variables":{"version":"0.0.X","HTTP_IP":"fileserver.rockylinux.lan","HTTP_PATH":"/packer/rockylinux/8/ks.cfg"},"sensitive-variables":["vcenter_password"],"provisioners":[{"type":"shell","expect_disconnect":true,"execute_command":"bash '{{.Path}}'","script":"{{template_dir}}/scripts/requirements.sh"}],"builders":[{"type":"vsphere-iso","CPUs":2,"CPU_hot_plug":true,"RAM":2048,"RAM_hot_plug":true,"disk_controller_type":"pvscsi","guest_os_type":"centos8_64Guest","iso_paths":["[datasyno-contentlibrary-mylib] contentlib-a86ad29a-a43b-4717-97e6-593b8358801b/3a381c78-b9df-45a6-82e1-3c07c8187dbe/Rocky-8.4-x86_64-minimal_72cc0cc6-9d0f-4c68-9bcd-06385a506a5d.iso"],"network_adapters":[{"network_card":"vmxnet3","network":"net_infra"}],"storage":[{"disk_size":40000,"disk_thin_provisioned":true}],"boot_command":["<up><tab> text ip=192.168.1.11::192.168.1.254:255.255.255.0:template:ens192:none nameserver=192.168.1.254 inst.ks=http://{{ user `HTTP_IP` }}/{{ user `HTTP_PATH` }}<enter><wait><enter>"],"ssh_password":"mysecurepassword","ssh_username":"root","shutdown_command":"/sbin/halt -h -p","insecure_connection":"true","username":"{{ user `vcenter_username` }}","password":"{{ user `vcenter_password` }}","vcenter_server":"vsphere.rockylinux.lan","datacenter":"DC_NAME","datastore":"DS_NAME","vm_name":"template-rockylinux8-{{ user `version` }}","folder":"Templates/RockyLinux","cluster":"CLUSTER_NAME","host":"esx1.rockylinux.lan","notes":"Template RockyLinux version {{ user `version` }}","convert_to_template":true,"create_snapshot":false}]}
We will use the `version` variable later in the name of the template we will create. You can easily increment this value to suit your needs.
We will also need our booting virtual machine to access a ks.cfg (Kickstart) file.
A Kickstart file contains the answers to the questions asked during the installation process. This file passes all its contents to Anaconda (the installation process), which allows you to fully automate the creation of the template.
The author likes to store the ks.cfg file on an internal web server accessible from the template being built, but other possibilities exist that you may choose instead.
After the installation is finished, the VM will reboot. As soon as Packer detects an IP address (thanks to the VMware Tools), it will copy the requirements.sh script to the VM and execute it. This is a convenient way to clean up the VM after the installation process (remove SSH keys, clear the history, etc.) and to install some extra packages.
You will never again forget to enable CPU_hot_plug, because it is now applied automatically!
You can do more cool things with the disk, cpu, etc. You should refer to the documentation if you are interested in making other adjustments.
To start the installation, you need an ISO image of Rocky Linux. Here is an example of how to use an image located in a vSphere content library. You can, of course, store the ISO elsewhere. In the case of a vSphere content library, you have to get the full path to the ISO file on the server hosting the content library. Here the library is hosted on a Synology NAS, so the path is retrieved directly from the DSM file explorer.
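Here is the corresponding excerpt from the configuration above:

```json
"iso_paths": [
  "[datasyno-contentlibrary-mylib] contentlib-a86ad29a-a43b-4717-97e6-593b8358801b/3a381c78-b9df-45a6-82e1-3c07c8187dbe/Rocky-8.4-x86_64-minimal_72cc0cc6-9d0f-4c68-9bcd-06385a506a5d.iso"
],
```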
Then you have to provide the complete command to be entered during the installation process: the IP configuration and the path to the Kickstart answer file.
!!! note

    This example takes the most complex case: using a static IP. If you have a DHCP server available, the process will be much easier.
This is the most amusing part of the procedure: I'm sure you will go and admire the VMware console during the generation, just to see the automatic entry of the commands during the boot.
"boot_command":["<up><tab> text ip=192.168.1.11::192.168.1.254:255.255.255.0:template:ens192:none nameserver=192.168.1.254 inst.ks=http://{{ user `HTTP_IP` }}/{{ user `HTTP_PATH` }}<enter><wait><enter>"],
After the first reboot, Packer will connect to your server by SSH. You can use the root user, or another user with sudo rights, but in any case, this user must correspond to the user that is defined in your ks.cfg file.
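These are the corresponding settings in the configuration above:

```json
"ssh_password": "mysecurepassword",
"ssh_username": "root",
```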
At the end of the process, the VM must be stopped. It is a little bit more complicated with a non-root user, but it is well documented:
"shutdown_command":"/sbin/halt -h -p",
Next, we deal with the vSphere configuration. The only notable things here are the use of the variables defined at the beginning of the document in our home directory, as well as the insecure_connection option, because our vSphere uses a self-signed certificate (see note in Assumptions at the top of this document):
"insecure_connection":"true","username":"{{ user `vcenter_username` }}","password":"{{ user `vcenter_password` }}","vcenter_server":"vsphere.rockylinux.lan","datacenter":"DC_NAME","datastore":"DS_NAME","vm_name":"template-rockylinux8-{{ user `version` }}","folder":"Templates/RockyLinux","cluster":"CLUSTER_NAME","host":"esx1.rockylinux.lan","notes":"Template RockyLinux version {{ user `version` }}"
And finally, we will ask vSphere to convert our stopped VM to a template.
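This is controlled by the following options from the configuration above:

```json
"convert_to_template": true,
"create_snapshot": false
```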
At this stage, you could also elect to just use the VM as is (not converting it to a template). In this case, you can decide to take a snapshot instead:
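In that case, these two options would simply be reversed, for example:

```json
"convert_to_template": false,
"create_snapshot": true
```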
As noted above, we need to provide a kickstart response file that will be used by Anaconda.
Here's an example of that file:
```
# Use CD-ROM installation media
repo --name="AppStream" --baseurl="http://download.rockylinux.org/pub/rocky/8.4/AppStream/x86_64/os/"
cdrom
# Use text install
text
# Don't run the Setup Agent on first boot
firstboot --disabled
eula --agreed
ignoredisk --only-use=sda
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_US.UTF-8
# Network information
network --bootproto=static --device=ens192 --gateway=192.168.1.254 --ip=192.168.1.11 --nameserver=192.168.1.254,4.4.4.4 --netmask=255.255.255.0 --onboot=on --ipv6=auto --activate
# Root password
rootpw mysecurepassword
# System services
selinux --permissive
firewall --enabled
services --enabled="NetworkManager,sshd,chronyd"
# System timezone
timezone Europe/Paris --isUtc
# System bootloader configuration
bootloader --location=mbr --boot-drive=sda
# Partition clearing information
clearpart --all --initlabel --drives=sda
# Disk partitioning information
part /boot --fstype="xfs" --ondisk=sda --size=512
part pv.01 --fstype="lvmpv" --ondisk=sda --grow
volgroup vg_root --pesize=4096 pv.01
logvol /home --fstype="xfs" --size=5120 --name=lv_home --vgname=vg_root
logvol /var --fstype="xfs" --size=10240 --name=lv_var --vgname=vg_root
logvol / --fstype="xfs" --size=10240 --name=lv_root --vgname=vg_root
logvol swap --fstype="swap" --size=4092 --name=lv_swap --vgname=vg_root

skipx
reboot

%packages --ignoremissing --excludedocs
openssh-clients
curl
dnf-utils
drpm
net-tools
open-vm-tools
perl
perl-File-Temp
sudo
vim
wget
python3
# unnecessary firmware
-aic94xx-firmware
-atmel-firmware
-b43-openfwwf
-bfa-firmware
-ipw2100-firmware
-ipw2200-firmware
-ivtv-firmware
-iwl*-firmware
-libertas-usb8388-firmware
-ql*-firmware
-rt61pci-firmware
-rt73usb-firmware
-xorg-x11-drv-ati-firmware
-zd1211-firmware
-cockpit
-quota
-alsa-*
-fprintd-pam
-intltool
-microcode_ctl
%end

%addon com_redhat_kdump --disable
%end

%post
# Manage Ansible access
groupadd -g 1001 ansible
useradd -m -g 1001 -u 1001 ansible
mkdir /home/ansible/.ssh
echo -e "<---- PASTE YOUR PUBKEY HERE ---->" > /home/ansible/.ssh/authorized_keys
chown -R ansible:ansible /home/ansible/.ssh
chmod 700 /home/ansible/.ssh
chmod 600 /home/ansible/.ssh/authorized_keys
echo "ansible ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers.d/ansible
chmod 440 /etc/sudoers.d/ansible
systemctl enable vmtoolsd
systemctl start vmtoolsd
%end
```
As we have chosen to use the minimal ISO instead of the boot or DVD images, not all of the packages required for the installation will be available.
As Packer relies on the VMware Tools to detect the end of the installation, and the open-vm-tools package is only available in the AppStream repository, we have to tell the installation process to use both the CD-ROM and this remote repository as sources:
!!! "Note"
If you do not have access to the external repos, you can use either a mirror of the repo, a squid proxy, or the DVD.
```
# Use CD-ROM installation media
repo --name="AppStream" --baseurl="http://download.rockylinux.org/pub/rocky/8.4/AppStream/x86_64/os/"
cdrom
```
Let us jump to the network configuration, as once again, in this example we are not using a DHCP server:
```
# Network information
network --bootproto=static --device=ens192 --gateway=192.168.1.254 --ip=192.168.1.11 --nameserver=192.168.1.254,4.4.4.4 --netmask=255.255.255.0 --onboot=on --ipv6=auto --activate
```
Remember that we told Packer which user to use when connecting via SSH at the end of the installation. That user and password must match what is defined here:
```
# Root password
rootpw mysecurepassword
```
!!! warning

    You can use an insecure password here, as long as you make sure that this password will be changed immediately after the deployment of your VM, for example with Ansible.
Here is the selected partitioning scheme. Much more complex layouts are possible. You can define a partitioning scheme that suits your needs, adapted to the disk space defined in Packer, and that respects the security rules defined for your environment (dedicated partition for /tmp, etc.):
```
# System bootloader configuration
bootloader --location=mbr --boot-drive=sda
# Partition clearing information
clearpart --all --initlabel --drives=sda
# Disk partitioning information
part /boot --fstype="xfs" --ondisk=sda --size=512
part pv.01 --fstype="lvmpv" --ondisk=sda --grow
volgroup vg_root --pesize=4096 pv.01
logvol /home --fstype="xfs" --size=5120 --name=lv_home --vgname=vg_root
logvol /var --fstype="xfs" --size=10240 --name=lv_var --vgname=vg_root
logvol / --fstype="xfs" --size=10240 --name=lv_root --vgname=vg_root
logvol swap --fstype="swap" --size=4092 --name=lv_swap --vgname=vg_root
```
The next section concerns the packages that will be installed. A "best practice" is to limit the quantity of installed packages to only those you need, which limits the attack surface, especially in a server environment.
!!! note

    The author likes to limit the actions performed during the installation process and to defer installing what is needed to Packer's post-installation script. So, in this case, we install only the minimum required packages.
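Here are the packages added in the `%packages` section of the Kickstart file above:

```
%packages --ignoremissing --excludedocs
openssh-clients
curl
dnf-utils
drpm
net-tools
open-vm-tools
perl
perl-File-Temp
sudo
vim
wget
python3
```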
The openssh-clients package seems to be required for Packer to copy its scripts into the VM.
The open-vm-tools package is also needed by Packer to detect the end of the installation, which explains the addition of the AppStream repository. The perl and perl-File-Temp packages will also be required by the VMware Tools during the deployment phase, which is a shame because they pull in quite a few other dependencies. python3 (3.6) will also be required later for Ansible to work (if you will not use Ansible or Python, remove them!).
You can not only add packages but also remove them. Since we control the environment in which our hardware will run, we can remove any firmware that will be useless to us, as shown in the exclusions below.
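These are the exclusions from the same `%packages` section of the Kickstart file above:

```
# unnecessary firmware
-aic94xx-firmware
-atmel-firmware
-b43-openfwwf
-bfa-firmware
-ipw2100-firmware
-ipw2200-firmware
-ivtv-firmware
-iwl*-firmware
-libertas-usb8388-firmware
-ql*-firmware
-rt61pci-firmware
-rt73usb-firmware
-xorg-x11-drv-ati-firmware
-zd1211-firmware
-cockpit
-quota
-alsa-*
-fprintd-pam
-intltool
-microcode_ctl
%end
```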
The next part adds some users. In our case, it is useful to create an ansible user, with no password but with a public key. This allows all of our new VMs to be reachable from our Ansible server to run the post-install actions:
```
# Manage Ansible access
groupadd -g 1001 ansible
useradd -m -g 1001 -u 1001 ansible
mkdir /home/ansible/.ssh
echo -e "<---- PASTE YOUR PUBKEY HERE ---->" > /home/ansible/.ssh/authorized_keys
chown -R ansible:ansible /home/ansible/.ssh
chmod 700 /home/ansible/.ssh
chmod 600 /home/ansible/.ssh/authorized_keys
echo "ansible ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers.d/ansible
chmod 440 /etc/sudoers.d/ansible
```
Now we need to enable and start vmtoolsd (the process that manages open-vm-tools). vSphere will detect the IP address after the reboot of the VM.
```
systemctl enable vmtoolsd
systemctl start vmtoolsd
```
The installation process is finished and the VM will reboot.
Remember, we declared in Packer a provisioner, which in our case corresponds to a .sh script, to be stored in a subdirectory next to our json file.
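As a reminder, here is the corresponding excerpt from the json configuration above:

```json
"provisioners": [
  {
    "type": "shell",
    "expect_disconnect": true,
    "execute_command": "bash '{{.Path}}'",
    "script": "{{template_dir}}/scripts/requirements.sh"
  }
],
```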
There are different types of provisioners; we could also have used Ansible, for example. You are free to explore these possibilities.
This file can be completely changed, but this provides an example of what can be done with a script, in this case requirements.sh:
```bash
#!/bin/sh -eux

echo "Updating the system..."
dnf -y update

echo "Installing cloud-init..."
dnf -y install cloud-init

# see https://bugs.launchpad.net/cloud-init/+bug/1712680
# and https://kb.vmware.com/s/article/71264
# Virtual Machine customized with cloud-init is set to DHCP after reboot
echo "manual_cache_clean: True " > /etc/cloud/cloud.cfg.d/99-manual.cfg

echo "Disable NetworkManager-wait-online.service"
systemctl disable NetworkManager-wait-online.service

# cleanup current SSH keys so templated VMs get fresh key
rm -f /etc/ssh/ssh_host_*

# Avoid ~200 meg firmware package we don't need
# this cannot be done in the KS file so we do it here
echo "Removing extra firmware packages"
dnf -y remove linux-firmware
dnf -y autoremove

echo "Remove previous kernels that were preserved for rollbacks"
dnf -y remove -y $(dnf repoquery --installonly --latest-limit=-1 -q)
dnf -y clean all --enablerepo=\*

echo "truncate any logs that have built up during the install"
find /var/log -type f -exec truncate --size=0 {} \;

echo "remove the install log"
rm -f /root/anaconda-ks.cfg /root/original-ks.cfg

echo "remove the contents of /tmp and /var/tmp"
rm -rf /tmp/* /var/tmp/*

echo "Force a new random seed to be generated"
rm -f /var/lib/systemd/random-seed

echo "Wipe netplan machine-id (DUID) so machines get unique ID generated on boot"
truncate -s 0 /etc/machine-id

echo "Clear the history so our install commands aren't there"
rm -f /root/.wget-hsts
export HISTSIZE=0
```
Some explanations are necessary:
echo"Installing cloud-init..."
dnf-yinstallcloud-init
# see https://bugs.launchpad.net/cloud-init/+bug/1712680# and https://kb.vmware.com/s/article/71264# Virtual Machine customized with cloud-init is set to DHCP after rebootecho"manual_cache_clean: True">/etc/cloud/cloud.cfg.d/99-manual.cfg
Since vSphere now uses cloud-init, via the VMware Tools, to configure the network of a centos8 guest machine, it must be installed. If you do nothing more, the configuration will be applied at the first reboot and everything will be fine. But on the next reboot, cloud-init will not receive any new information from vSphere. With no information about what to do, cloud-init will then reconfigure the VM's network interface to use DHCP, and you will lose your static configuration.
As this is not the behavior we want, we need to specify to cloud-init not to delete its cache automatically, and therefore to reuse the configuration information it received during its first reboot and each reboot after that.
For this, we create the file `/etc/cloud/cloud.cfg.d/99-manual.cfg` containing the `manual_cache_clean: True` directive.
!!! note

    This implies that if you need to re-apply a network configuration via the vSphere guest customizations (which, in normal use, should be quite rare), you will have to delete the cloud-init cache yourself.
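If you ever need to do that, the cache can be cleared from inside the deployed VM, for example with:

```bash
# Remove the cloud-init cache (and its logs) so the guest customization
# provided by vSphere is applied again on the next boot
sudo cloud-init clean --logs
```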
The rest of the script is commented and does not require more details.
You can check the Bento project to get more ideas of what can be done in this part of the automation process.
You can quickly go to vSphere and admire the work.
You will see the machine being created, started, and if you launch a console, you will see the automatic entry of commands and the installation process.
At the end of the creation, you will find your template ready to use in vSphere.
You can store sensitive data in `./vars/credentials.yml`, which you will obviously have encrypted beforehand with ansible-vault (especially if you use git for your work). Since everything uses variables, you can easily adapt it to your needs.
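For example, the file can be encrypted (and later edited in place) with ansible-vault:

```bash
# Encrypt the credentials file before committing it anywhere
ansible-vault encrypt vars/credentials.yml

# Edit it later without leaving a decrypted copy on disk
ansible-vault edit vars/credentials.yml
```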
If you do not use something like Rundeck or AWX, you can launch the deployment with a command line similar to this one:
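For example, something along these lines, where the playbook name and the extra variables are only illustrative:

```bash
# Prompt for the vault password and pass the parameters of the VM to deploy
ansible-playbook deploy_vm.yml --ask-vault-pass -e "vm_name=myvm01 vm_ip=192.168.1.50"
```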
It is at this point that you can launch the final configuration of your virtual machine using Ansible. Do not forget to change the root password, secure SSH, register the new VM in your monitoring tool and in your IT inventory, etc.
As we have seen, there are now fully automated DevOps solutions to create and deploy VMs.
At the same time, this represents an undeniable saving of time, especially in cloud or data center environments. It also facilitates standards compliance across all of the computers in the company (servers and workstations), and eases the maintenance and evolution of templates.
For a detailed project that also covers the deployment of Rocky Linux and other operating systems using the latest in vSphere, Packer, and the Packer Plugin for vSphere, please visit this project.
Author: Antoine Le Morvan
Contributors: Steven Spencer, Ryan Johnson, Pedro Garcia, Ganna Zhyrnova