Proxmox part 5: KVM and Cloud-Init
This post introduces a shell script to create KVM virtual machine templates on Proxmox.
KVM?
According to Wikipedia:
Kernel-based Virtual Machine (KVM) is a virtualization module in the
Linux kernel that allows the kernel to function as a hypervisor.
With KVM you can create hardware-accelerated virtual machines. Unlike a container, a virtual machine boots its own virtual hardware (CPU, memory, disk, etc.). Each KVM virtual machine runs its own kernel and is isolated from the host operating system.
The main advantages of a virtual machine are greater isolation and the ability to run any operating system, whereas a container is limited to running under the exact same Linux kernel as the host.
Proxmox supports both KVM virtual machines and LXC containers. Containers were covered in part 4. This post will cover building KVM templates.
Cloud-Init?
Another advantage of KVM is the ability to use cloud images: with cloud-init you can customize the username and SSH keys, and run custom scripts to install additional software. Cloud-init handles all of this configuration on the first boot of the VM.
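To make that concrete, here is a minimal, hypothetical cloud-config user data file (not the one this script generates; the hostname, username, key, and command are placeholders):

```yaml
#cloud-config
fqdn: myvm
users:
  - name: fred
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... fred@example.com
runcmd:
  - echo "configured on first boot" >> /var/log/first-boot.log
```

cloud-init reads this file on the VM's first boot, creates the user with the given SSH key, and runs each runcmd entry once.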
Install the script
Log in to your Proxmox server as the root user via SSH.
Download the script:
wget https://raw.githubusercontent.com/EnigmaCurry/blog.rymcg.tech/master/src/proxmox/proxmox_kvm.sh
Read all of the comments, and then edit the variables at the top of the script to change any defaults you desire. You can also override the configuration defaults from your parent shell environment, as shown in the examples below.
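The override mechanism is the standard shell default-value expansion: every setting in the script is declared with a `${VAR:-default}` pattern, so a value from the parent environment takes precedence. A standalone sketch (not the script itself):

```shell
#!/bin/bash
# Every setting in the script is declared like this, so a value exported
# in the parent shell wins over the built-in default:
STORAGE=${STORAGE:-local-lvm}
TEMPLATE_ID=${TEMPLATE_ID:-9001}
echo "${STORAGE} ${TEMPLATE_ID}"
```

For example, prefixing a run with STORAGE=local-zfs overrides the storage pool without editing the file.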
Make the script executable:
chmod a+x proxmox_kvm.sh
Warnings for alternative Proxmox storage backends (NFS)
This script is set up by default for the local-lvm storage pool. If that's what you want, skip this section. You can also use local-zfs, by setting STORAGE=local-zfs. NFS storage can be used, with caveats. Other filesystems like Ceph or Gluster have not been tested. Finally, the Proxmox OS drive (local) should never be used for storing VMs. If you want to use anything other than local-lvm, you must change the STORAGE variable, as shown in all examples.
You can store KVM templates on any storage pool that is tagged for the Disk Image content type (by default, only local-lvm is set this way). If you have added an NFS storage backend (and tagged it for the Disk Image content type), you may encounter this error when creating the final VM template (with qm template {TEMPLATE_ID}):
## Error you may see if using NFS or another alternative storage backend:
/usr/bin/chattr: Operation not supported while reading flags on /mnt/pve/{STORAGE}/images/{TEMPLATE_ID}/base-{TEMPLATE_ID}-disk-0.raw
This is because NFS does not support immutable files (chattr +i), but that is not especially important as long as Proxmox is the only client of this storage pool, so this error may be safely ignored.
The examples below assume that you are using STORAGE=local-lvm, but you may change this to any other compatible storage pool name. If you do change the default STORAGE, please note that the DISK parameter might need slight tweaking as well, as shown in the script:
## Depending on the storage backend, the DISK path may differ slightly:
if [ "${STORAGE_TYPE}" == 'nfs' ]; then
    # nfs path:
    DISK="${STORAGE}:${TEMPLATE_ID}/vm-${TEMPLATE_ID}-disk-0.raw"
elif [ "${STORAGE_TYPE}" == 'local' ]; then
    # lvm path:
    DISK="${STORAGE}:vm-${TEMPLATE_ID}-disk-0"
else
    echo "only 'local' (lvm) or 'nfs' storage backends are supported at this time"
    exit 1
fi
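For example, with TEMPLATE_ID=9000 the two branches produce the following DISK values (a standalone sketch of the same string construction; the NFS pool name is hypothetical):

```shell
#!/bin/bash
# Standalone sketch: the two DISK naming schemes the script hardcodes.
TEMPLATE_ID=9000

STORAGE=mynfs   # hypothetical NFS storage pool name
# nfs backend: the disk is a raw file stored under the pool's images/ directory
NFS_DISK="${STORAGE}:${TEMPLATE_ID}/vm-${TEMPLATE_ID}-disk-0.raw"

STORAGE=local-lvm
# lvm backend: the disk is a logical volume, with no path or file extension
LVM_DISK="${STORAGE}:vm-${TEMPLATE_ID}-disk-0"

echo "${NFS_DISK}"   # mynfs:9000/vm-9000-disk-0.raw
echo "${LVM_DISK}"   # local-lvm:vm-9000-disk-0
```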
Be sure to set STORAGE_TYPE to local if you're using the local-lvm backend, or to nfs if you're using the NFS backend. If you're using any other storage backend, you may need to tweak the DISK parameter and alter this if statement accordingly. I don't know why the naming differs between storage backends (if you do, please file an issue), but I do know that it's very annoying. I don't have a better solution than to hardcode the path differences into an if statement and to document the issue here.
Creating KVM templates
You can create templates for every operating system you wish to run. In order to follow along with this blog series, you should create all of the following templates with the same TEMPLATE_ID shown, as these templates will be used in subsequent posts (you'll need at least the ones for Arch Linux (9000), Debian (9001), and Docker (9998)).
Arch Linux
DISTRO=arch TEMPLATE_ID=9000 STORAGE_TYPE=local STORAGE=local-lvm ./proxmox_kvm.sh template
Debian (12; bookworm)
DISTRO=debian TEMPLATE_ID=9001 STORAGE_TYPE=local STORAGE=local-lvm ./proxmox_kvm.sh template
Ubuntu (jammy; 22.04 LTS)
DISTRO=ubuntu TEMPLATE_ID=9002 STORAGE_TYPE=local STORAGE=local-lvm ./proxmox_kvm.sh template
Fedora (41)
DISTRO=fedora TEMPLATE_ID=9003 STORAGE_TYPE=local STORAGE=local-lvm ./proxmox_kvm.sh template
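To build the four Linux templates above in one go, a small wrapper loop works (a hypothetical convenience, shown here as a dry run that only echoes each command; remove the echo to execute for real, with proxmox_kvm.sh in the current directory):

```shell
#!/bin/bash
# Hypothetical batch wrapper: map each distro to its TEMPLATE_ID from this post.
declare -A TEMPLATES=([arch]=9000 [debian]=9001 [ubuntu]=9002 [fedora]=9003)

for DISTRO in arch debian ubuntu fedora; do
    # Dry run: echoes the command instead of running it.
    echo "DISTRO=${DISTRO} TEMPLATE_ID=${TEMPLATES[$DISTRO]} STORAGE_TYPE=local STORAGE=local-lvm ./proxmox_kvm.sh template"
done
```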
Docker
You can install Docker on any of the supported distributions. Pass the INSTALL_DOCKER=yes variable to attach a small install script to the VM so that it automatically installs Docker on first boot, via cloud-init:
VM_HOSTNAME=docker \
DISTRO=debian \
TEMPLATE_ID=9998 \
INSTALL_DOCKER=yes \
STORAGE_TYPE=local \
STORAGE=local-lvm \
./proxmox_kvm.sh template
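Under the hood, INSTALL_DOCKER=yes simply appends one more entry to the runcmd list in the generated cloud-init user data. The generation step can be sketched in isolation (a temp file stands in for the real snippets path):

```shell
#!/bin/bash
# Standalone sketch of how the script assembles the runcmd: section.
USER_DATA_RUNCMD=("apt-get update" "apt-get install -y qemu-guest-agent")
INSTALL_DOCKER=yes

if [[ "${INSTALL_DOCKER}" == "yes" ]]; then
    # INSTALL_DOCKER=yes just appends one more first-boot command:
    USER_DATA_RUNCMD+=("sh -c 'curl -sSL https://get.docker.com | sh'")
fi

USER_DATA=$(mktemp)
echo "runcmd:" > "${USER_DATA}"
for cmd in "${USER_DATA_RUNCMD[@]}"; do
    echo "  - ${cmd}" >> "${USER_DATA}"
done
cat "${USER_DATA}"
```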
FreeBSD (13)
FreeBSD does not allow root login, so you must choose an alternate VM_USER:
DISTRO=freebsd TEMPLATE_ID=9004 STORAGE=local-lvm VM_USER=fred ./proxmox_kvm.sh template
Any other cloud image
You can use any other generic cloud image directly by setting
IMAGE_URL
. For example, this script knows nothing about OpenBSD, but
you can find a third party cloud image from this
website, and so you can use their image
with this script:
DISTRO=OpenBSD \
TEMPLATE_ID=9999 \
VM_USER=fred \
STORAGE=local-lvm \
IMAGE_URL=https://object-storage.public.mtl1.vexxhost.net/swift/v1/1dbafeefbd4f4c80864414a441e72dd2/bsd-cloud-image.org/images/openbsd/7.0/2021-12-11/openbsd-7.0.qcow2 \
./proxmox_kvm.sh template
Creating new virtual machines by cloning these templates
This script uses a custom cloud-init user data template that is copied to /var/lib/vz/snippets/vm-${VM_ID}-user-data.yaml, which means that you cannot use the Proxmox GUI to edit cloud-init data. Instead, this script encapsulates that logic for you, and makes it easy to clone the template:
TEMPLATE_ID=9000 \
VM_ID=100 \
VM_HOSTNAME=my_arch \
./proxmox_kvm.sh clone
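The hostname customization during a clone is just a sed substitution on the copied user data file. In isolation (temp files stand in for the real /var/lib/vz/snippets/ paths, and the file contents are abbreviated):

```shell
#!/bin/bash
# Standalone sketch of the clone-time fqdn rewrite.
TEMPLATE_USER_DATA=$(mktemp)
printf '#cloud-config\nfqdn: arch\nssh_pwauth: false\n' > "${TEMPLATE_USER_DATA}"

# Copy the template's user data for the new VM, then rewrite the fqdn line:
VM_HOSTNAME=my_arch
USER_DATA=$(mktemp)
cp "${TEMPLATE_USER_DATA}" "${USER_DATA}"
sed -i "s/^fqdn:.*/fqdn: ${VM_HOSTNAME}/" "${USER_DATA}"
cat "${USER_DATA}"
```

The template's own user data file is left untouched, so it can be cloned again with a different hostname.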
Start the VM whenever you’re ready:
qm start 100
cloud-init will run the first time the VM boots. This installs the QEMU guest agent, which may take a few minutes.
Wait a bit for the boot to finish, then find out what the IP address is:
VM_ID=100 ./proxmox_kvm.sh get_ip
The script
#!/bin/bash
## Create Proxmox KVM templates from cloud images
## See https://blog.rymcg.tech/blog/proxmox/05-kvm-templates/
## Specify DISTRO and the latest image will be discovered automatically:
DISTRO=${DISTRO:-arch}
## Alternatively, specify IMAGE_URL to the full URL of the cloud image:
#IMAGE_URL=https://mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2
## To configure DISK path correctly, set STORAGE_TYPE to "nfs" or "local"
## (no other storage backends supported at this time)
STORAGE_TYPE=${STORAGE_TYPE:-local}
## The ID of the storage to create the disk in
STORAGE=${STORAGE:-local-lvm}
## Set these variables to configure the virtual machine:
## (All variables can be overridden from the parent environment)
TEMPLATE_ID=${TEMPLATE_ID:-9001}
VM_ID=${VM_ID:-100}
VM_HOSTNAME=${VM_HOSTNAME:-$(echo ${DISTRO} | cut -d- -f1)}
VM_USER=${VM_USER:-root}
VM_PASSWORD=${VM_PASSWORD:-""}
PUBLIC_PORTS_TCP=${PUBLIC_PORTS_TCP:-22,80,443}
PUBLIC_PORTS_UDP=${PUBLIC_PORTS_UDP}
## Point to the local authorized_keys file to copy into VM:
SSH_KEYS=${SSH_KEYS:-${HOME}/.ssh/authorized_keys}
# VM CPUs:
NUM_CORES=${NUM_CORES:-1}
# VM RAM in MB:
MEMORY=${MEMORY:-2048}
# VM swap size in MB:
SWAP_SIZE=${SWAP_SIZE:-${MEMORY}}
# VM root filesystem size in GB:
FILESYSTEM_SIZE=${FILESYSTEM_SIZE:-50}
INSTALL_DOCKER=${INSTALL_DOCKER:-no}
START_ON_BOOT=${START_ON_BOOT:-1}
## Depending on the storage backend, the DISK path may differ slightly:
if [ "${STORAGE_TYPE}" == 'nfs' ]; then
    # nfs path:
    DISK="${STORAGE}:${TEMPLATE_ID}/vm-${TEMPLATE_ID}-disk-0.raw"
elif [ "${STORAGE_TYPE}" == 'local' ]; then
    # lvm path:
    DISK="${STORAGE}:vm-${TEMPLATE_ID}-disk-0"
else
    echo "only 'local' (lvm) or 'nfs' storage backends are supported at this time"
    exit 1
fi
PUBLIC_BRIDGE=${PUBLIC_BRIDGE:-vmbr0}
SNIPPETS_DIR=${SNIPPETS_DIR:-/var/lib/vz/snippets}
_confirm() {
    set +x
    test ${YES:-no} == "yes" && return 0
    default=$1; prompt=$2; question=${3:-". Proceed?"}
    if [[ $default == "y" || $default == "yes" ]]; then
        dflt="Y/n"
    else
        dflt="y/N"
    fi
    read -p "${prompt}${question} (${dflt}): " answer
    answer=${answer:-${default}}
    if [[ ${answer,,} == "y" || ${answer,,} == "yes" ]]; then
        return 0
    else
        echo "Exiting."
        [[ "$0" = "$BASH_SOURCE" ]] && exit 1 || return 1
    fi
}
template() {
    set -e
    USER_DATA_RUNCMD=()
    (set -x; qm create ${TEMPLATE_ID})
    if [[ -v IMAGE_URL ]]; then
        _template_from_url ${IMAGE_URL}
    else
        if [[ ${DISTRO} == "arch" ]] || [[ ${DISTRO} == "archlinux" ]]; then
            _template_from_url https://mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2
            USER_DATA_RUNCMD+=("rm -rf /etc/pacman.d/gnupg"
                "pacman-key --init"
                "pacman-key --populate archlinux"
                "pacman -Syu --noconfirm"
                "pacman -S --noconfirm qemu-guest-agent"
                "systemctl start qemu-guest-agent"
                "sed -i -e 's/^#\?GRUB_TERMINAL_INPUT=.*/GRUB_TERMINAL_INPUT=\"console serial\"/' -e 's/^#\?GRUB_TERMINAL_OUTPUT=.*/GRUB_TERMINAL_OUTPUT=\"console serial\"/' -e 's/^#\?GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT=\"rootflags=compress-force=zstd console=tty0 console=ttyS0,115200\"/' /etc/default/grub"
                "sh -c \"echo 'GRUB_SERIAL_COMMAND=\\\"serial --unit=0 --speed=115200\\\"' >> /etc/default/grub\""
                "grub-mkconfig -o /boot/grub/grub.cfg"
            )
        elif [[ ${DISTRO} == "debian" ]] || [[ ${DISTRO} == "bookworm" ]]; then
            _template_from_url https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2
            USER_DATA_RUNCMD+=("apt-get update"
                "apt-get install -y qemu-guest-agent"
                "systemctl start qemu-guest-agent"
            )
        elif [[ ${DISTRO} == "bullseye" ]]; then
            _template_from_url https://cloud.debian.org/images/cloud/bullseye/latest/debian-11-genericcloud-amd64.qcow2
            USER_DATA_RUNCMD+=("apt-get update"
                "apt-get install -y qemu-guest-agent"
                "systemctl start qemu-guest-agent"
            )
        elif [[ ${DISTRO} == "ubuntu" ]] || [[ ${DISTRO} == "jammy" ]]; then
            _template_from_url https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
            USER_DATA_RUNCMD+=("apt-get update"
                "apt-get install -y qemu-guest-agent"
                "systemctl start qemu-guest-agent"
            )
        elif [[ ${DISTRO} == "focal" ]]; then
            _template_from_url https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
            USER_DATA_RUNCMD+=("apt-get update"
                "apt-get install -y qemu-guest-agent"
                "systemctl start qemu-guest-agent"
            )
        elif [[ ${DISTRO} == "fedora" ]] || [[ ${DISTRO} == "fedora-41" ]]; then
            _template_from_url https://download.fedoraproject.org/pub/fedora/linux/releases/41/Cloud/x86_64/images/Fedora-Cloud-Base-Generic-41-1.4.x86_64.qcow2
            USER_DATA_RUNCMD+=("sh -c \"echo PasswordAuthentication no > /etc/ssh/sshd_config.d/00-no-passwords.conf\""
                "systemctl restart sshd"
            )
        elif [[ ${DISTRO} == "fedora-40" ]]; then
            _template_from_url https://download.fedoraproject.org/pub/fedora/linux/releases/40/Cloud/x86_64/images/Fedora-Cloud-Base-Generic.x86_64-40-1.14.qcow2
            USER_DATA_RUNCMD+=("sh -c \"echo PasswordAuthentication no > /etc/ssh/sshd_config.d/00-no-passwords.conf\""
                "systemctl restart sshd"
            )
        elif [[ ${DISTRO} == "freebsd" ]] || [[ ${DISTRO} == "freebsd-13" ]]; then
            if [[ ${VM_USER} == "root" ]]; then
                echo "For FreeBSD, VM_USER cannot be root. Use another username."
                qm destroy ${TEMPLATE_ID}
                exit 1
            fi
            # There's a lot more images to try here: https://bsd-cloud-image.org/
            _template_from_url https://object-storage.public.mtl1.vexxhost.net/swift/v1/1dbafeefbd4f4c80864414a441e72dd2/bsd-cloud-image.org/images/freebsd/13.2/2023-04-21/zfs/freebsd-13.2-zfs-2023-04-21.qcow2
        else
            echo "DISTRO '${DISTRO}' is not supported by this script yet."
            exit 1
        fi
    fi
    (
        set -ex
        qm set "${TEMPLATE_ID}" \
           --name "${VM_HOSTNAME}" \
           --sockets "${NUM_CORES}" \
           --memory "${MEMORY}" \
           --net0 "virtio,bridge=${PUBLIC_BRIDGE}" \
           --scsihw virtio-scsi-pci \
           --scsi0 "${DISK}" \
           --ide2 ${STORAGE}:cloudinit \
           --sshkey "${SSH_KEYS}" \
           --ipconfig0 ip=dhcp \
           --boot c \
           --bootdisk scsi0 \
           --serial0 socket \
           --vga serial0 \
           --agent 1
        pvesh set /nodes/${HOSTNAME}/qemu/${TEMPLATE_ID}/firewall/options --enable 1
        pvesh create /nodes/${HOSTNAME}/qemu/${TEMPLATE_ID}/firewall/rules \
              --action ACCEPT --type in --macro ping --enable 1
        IFS=',' read -ra PORTS <<< "${PUBLIC_PORTS_TCP}"
        for PORT in "${PORTS[@]}"; do
            pvesh create /nodes/${HOSTNAME}/qemu/${TEMPLATE_ID}/firewall/rules --action ACCEPT --type in --proto tcp --dport "${PORT}" --enable 1
        done
        IFS=',' read -ra UDP_PORTS <<< "${PUBLIC_PORTS_UDP}"
        for PORT in "${UDP_PORTS[@]}"; do
            pvesh create /nodes/${HOSTNAME}/qemu/${TEMPLATE_ID}/firewall/rules --action ACCEPT --type in --proto udp --dport "${PORT}" --enable 1
        done
        ## Generate cloud-init User Data script:
        if [[ "${INSTALL_DOCKER}" == "yes" ]]; then
            ## Attach the Docker install script as Cloud-Init User Data so
            ## that it is installed automatically on first boot:
            USER_DATA_RUNCMD+=("sh -c 'curl -sSL https://get.docker.com | sh'")
        fi
        mkdir -p ${SNIPPETS_DIR}
        USER_DATA=${SNIPPETS_DIR}/vm-template-${TEMPLATE_ID}-user-data.yaml
        cat <<EOF > ${USER_DATA}
#cloud-config
fqdn: ${VM_HOSTNAME}
ssh_pwauth: false
users:
  - name: ${VM_USER}
    gecos: ${VM_USER}
    groups: docker
    ssh_authorized_keys:
$(cat ${SSH_KEYS} | grep -E "^ssh" | xargs -iXX echo "      - XX")
runcmd:
EOF
        for cmd in "${USER_DATA_RUNCMD[@]}"; do
            echo "  - ${cmd}" >> ${USER_DATA}
        done
        qm set "${TEMPLATE_ID}" --cicustom "user=local:snippets/vm-template-${TEMPLATE_ID}-user-data.yaml"
        ## Resize filesystem and turn into a template:
        qm resize "${TEMPLATE_ID}" scsi0 "+${FILESYSTEM_SIZE}G"
        ## chattr +i will fail on NFS but don't worry about it:
        qm template "${TEMPLATE_ID}"
    )
}
clone() {
    set -e
    qm clone "${TEMPLATE_ID}" "${VM_ID}" --full 0
    USER_DATA=vm-${VM_ID}-user-data.yaml
    cp ${SNIPPETS_DIR}/vm-template-${TEMPLATE_ID}-user-data.yaml ${SNIPPETS_DIR}/${USER_DATA}
    sed -i "s/^fqdn:.*/fqdn: ${VM_HOSTNAME}/" ${SNIPPETS_DIR}/${USER_DATA}
    if [[ -n "${VM_PASSWORD}" ]]; then
        cat <<EOF >> ${SNIPPETS_DIR}/${USER_DATA}
chpasswd:
  expire: false
  list:
    - ${VM_USER}:${VM_PASSWORD}
EOF
    fi
    qm set "${VM_ID}" \
       --name "${VM_HOSTNAME}" \
       --sockets "${NUM_CORES}" \
       --memory "${MEMORY}" \
       --onboot "${START_ON_BOOT}" \
       --cicustom "user=local:snippets/${USER_DATA}"
    #qm snapshot "${VM_ID}" init
    echo "Cloned VM ${VM_ID} from template ${TEMPLATE_ID}. To start it, run:"
    echo "  qm start ${VM_ID}"
}
get_ip() {
    set -eo pipefail
    ## Get the IP address through the guest agent
    if ! command -v jq >/dev/null; then apt install -y jq; fi
    pvesh get nodes/${HOSTNAME}/qemu/${VM_ID}/agent/network-get-interfaces --output-format=json | jq -r '.result[] | select(.name | test("eth0")) | ."ip-addresses"[] | select(."ip-address-type" | test("ipv4")) | ."ip-address"'
}
_template_from_url() {
    set -e
    IMAGE_URL=$1
    IMAGE=${IMAGE_URL##*/}
    TMP=/tmp/kvm-images
    mkdir -p ${TMP}
    cd ${TMP}
    test -f ${IMAGE} || wget ${IMAGE_URL}
    qm importdisk ${TEMPLATE_ID} ${IMAGE} ${STORAGE}
}
if [[ $# == 0 ]]; then
    echo "# Documentation: https://blog.rymcg.tech/blog/proxmox/05-kvm-templates/"
    echo "Commands:"
    echo "  template"
    echo "  clone"
    echo "  get_ip"
    exit 1
elif [[ $# -gt 1 ]]; then
    shift
    echo "Invalid arguments: $@"
    exit 1
else
    "$@"
fi
You can discuss this blog on Matrix (Element): #blog-rymcg-tech:enigmacurry.com
This blog is copyright EnigmaCurry and dual-licensed CC-BY-SA and MIT. The source is on github: enigmacurry/blog.rymcg.tech and PRs are welcome. ❤️