Proxmox part 9: Virtual Private Cloud (VPC)
In part 2, we set up NAT bridges where the Proxmox host itself performs IP masquerading for VMs on private networks. This is simple and effective, but it means that every VM on a NAT bridge has a direct path to the internet through the host kernel. There is no way to inspect, filter, or control that egress traffic at the VM level.
In this post, we will create a Virtual Private Cloud (VPC): an isolated network where VMs have no direct internet access. The only path to the outside world is through a dedicated router VM that you fully control. This is similar to how AWS VPCs work: you create an isolated network, attach a NAT gateway (our router VM), and only traffic that passes through the gateway can reach the internet.
This is useful for:
- Security sandboxing — run untrusted workloads on a network where you control all egress
- Testing — simulate a production network topology with a router, firewall, and isolated clients
- Multi-tenant isolation — give each tenant their own VPC with a dedicated router
Unlike part 6 (which builds a full home LAN router with physical NIC passthrough), this setup is purely virtual. No special hardware is required — just a standard Proxmox installation.
Architecture
```
                 Internet
                     |
                 [vmbr0]
                /        \
    Proxmox Host          Router VM
  (management only)       net0: vmbr0  (internet)
   10.99.0.2/24           net1: vmbr99 (VPC gateway)
        |                      |
        +------[vmbr99]--------+   <-- VPC Bridge (no host NAT)
                     |
                 Client VM
          net0: vmbr99 (VPC only)
```
The Proxmox host has a management IP on the VPC bridge (10.99.0.2)
so you can SSH to VMs for administration, but the host does not
perform any masquerading or IP forwarding for the VPC network. The
only way a client VM can reach the internet is through the router VM.
Prerequisites
- Proxmox installed (part 1)
- SSH access to the Proxmox host as `root`
Download the script
Connect to the Proxmox host via SSH and download the proxmox_vpc.sh
script:
```bash
wget https://raw.githubusercontent.com/EnigmaCurry/blog.rymcg.tech/master/src/proxmox/proxmox_vpc.sh
chmod +x proxmox_vpc.sh
```
Running the script without arguments shows the available commands and current configuration:
```bash
./proxmox_vpc.sh
```
Configuration
All settings are controlled by environment variables with sensible
defaults. You can see the full list and their current values by
running ./proxmox_vpc.sh with no arguments. To override settings,
export them before running commands:
```bash
## VPC Bridge:
export VPC_BRIDGE=vmbr99
export VPC_HOST_CIDR=10.99.0.2/24

## Router VM:
export ROUTER_VM_ID=200
export ROUTER_HOSTNAME=router
export ROUTER_DISK_SIZE=32G
export ROUTER_MEMORY=2048
export ROUTER_CORES=1
export PUBLIC_BRIDGE=vmbr0

## Client VM:
export CLIENT_VM_ID=201
export CLIENT_HOSTNAME=client
export CLIENT_DISK_SIZE=32G
export CLIENT_MEMORY=2048
export CLIENT_CORES=1

## Storage:
export STORAGE=local-lvm
```
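The script resolves each setting with bash's `${VAR:-default}` expansion, so an exported value always wins over the built-in default. A minimal sketch of that pattern, using `VPC_BRIDGE` as the example:

```shell
#!/usr/bin/env bash
# How the script's defaults work: ${VAR:-default} keeps any value you
# exported, and falls back to the default otherwise.
unset VPC_BRIDGE
VPC_BRIDGE=${VPC_BRIDGE:-vmbr99}
echo "$VPC_BRIDGE"   # prints vmbr99 (the default)

export VPC_BRIDGE=vmbr42
VPC_BRIDGE=${VPC_BRIDGE:-vmbr99}
echo "$VPC_BRIDGE"   # prints vmbr42 (your override wins)
```

This is why you can override settings per-invocation, as shown later with `CLIENT_VM_ID=202 ./proxmox_vpc.sh create_vm`.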
Create the VPC
Create the private bridge:
```bash
./proxmox_vpc.sh create_vpc
```
This creates a Linux bridge (`vmbr99` by default) with:

- `bridge_ports none` — not connected to any physical interface
- A management IP for the Proxmox host (`10.99.0.2/24`)
- No masquerade rules — the host will not NAT traffic for this bridge
- No `ip_forward` — the host will not route traffic between this bridge and `vmbr0`
This is the key difference from part 2’s NAT bridges. The VPC bridge is just a Layer 2 switch. It connects VMs to each other, but provides no path to the internet on its own.
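For reference, the stanza that `create_vpc` adds to `/etc/network/interfaces` on the host looks approximately like this (the exact output may vary by Proxmox version):

```
auto vmbr99
iface vmbr99 inet static
        address 10.99.0.2/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
#VPC private bridge - no NAT
```

Note what is absent: unlike part 2's NAT bridges, there are no `post-up` masquerade or forwarding rules attached to this interface.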
Create the router VM
The router VM has one foot in each network: vmbr0 for internet
access and the VPC bridge for the private side.
```bash
./proxmox_vpc.sh create_router
```
This creates a VM with:

- net0 on `vmbr0` (internet-facing)
- net1 on `vmbr99` (VPC private side)
- A blank disk (no OS installed)
Load an OS ISO
The VM is created with an empty CD/DVD drive. In the Proxmox GUI:
- Select VM 200 (router)
- Go to Hardware → double-click the CD/DVD Drive
- Select an ISO image (any Linux distribution will work)
- Go to Options → Boot Order and ensure the CD/DVD drive is first for the initial install
Install nifty-filter on the router VM
nifty-filter is an
immutable NixOS router distribution that provides everything the
router VM needs: nftables firewall, DHCP server, DNS server, and
network routing. It runs on a read-only root filesystem with
configuration stored on a read-write /var partition.
Build the ISO
On a machine with Nix installed, clone and build the nifty-filter ISO:
```bash
git clone https://github.com/EnigmaCurry/nifty-filter.git
cd nifty-filter
nix build .#iso
```
Upload the resulting ISO to the Proxmox host’s ISO storage (or use the Proxmox GUI to upload it).
Install nifty-filter
- Load the nifty-filter ISO into the router VM’s CD/DVD drive
- Start the VM and open the console
- Log in with the default credentials: `admin` / `nifty`
- Run the interactive installer: `nifty-install`
The installer will prompt you for:
- Hostname — e.g., `router`
- Disk — select the virtual disk to install to
- WAN interface — the upstream interface (net0, connected to `vmbr0`)
- LAN interface — the VPC interface (net1, connected to the VPC bridge)
- Subnet configuration — use `10.99.0.1/24` for the VPC side
- DNS servers — upstream DNS resolvers
After installation, reboot the VM. nifty-filter will automatically configure IP forwarding, nftables masquerade, DHCP, and DNS for the VPC network.
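Under the hood, a NAT router like this comes down to two things: IP forwarding enabled in the kernel, and a masquerade rule on the WAN interface. nifty-filter generates its own configuration, but conceptually it amounts to something like the following sketch (the interface name `wan0` is illustrative, not what nifty-filter actually names it):

```
# sysctl: allow the kernel to forward packets between the two interfaces
net.ipv4.ip_forward = 1

# nftables: rewrite the source address of VPC traffic leaving the WAN side
table ip nat {
    chain postrouting {
        type nat hook postrouting priority srcnat;
        oifname "wan0" masquerade
    }
}
```

These are the same two pieces that are deliberately missing from the Proxmox host, which is what keeps the VPC isolated.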
Configure nifty-filter
After the initial install, you can reconfigure at any time:
```bash
# Run this inside the router VM:
nifty-config
```
Or edit the configuration files directly in `/var/nifty-filter/`:

- `router.env` — firewall rules and interface configuration
- `dhcp.env` — DHCP pool settings and DNS configuration
Create a client VM
Back on the Proxmox host, create a client VM that is isolated on the VPC:
```bash
./proxmox_vpc.sh create_vm
```
This creates a VM with:

- net0 on `vmbr99` — the only network interface, connected exclusively to the VPC bridge
- A blank disk (no OS installed)
Load an OS ISO into the CD/DVD drive in the Proxmox GUI and install the OS, just as you did for the router. The nifty-filter DHCP server will automatically assign an IP address and configure the default gateway, so the client should be able to use DHCP with no additional configuration.
Creating additional client VMs
To create more VMs on the same VPC, override the VM ID and hostname:
```bash
CLIENT_VM_ID=202 CLIENT_HOSTNAME=client2 ./proxmox_vpc.sh create_vm
CLIENT_VM_ID=203 CLIENT_HOSTNAME=client3 ./proxmox_vpc.sh create_vm
```
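For a larger batch, you can generate the per-VM overrides in a loop. This sketch only prints the commands so you can review them before running anything:

```shell
#!/usr/bin/env bash
# Build the create_vm commands for clients 2..4, deriving each VM ID
# from the client number. Printing first acts as a dry run.
cmds=()
for i in 2 3 4; do
    cmds+=("CLIENT_VM_ID=$((200 + i)) CLIENT_HOSTNAME=client$i ./proxmox_vpc.sh create_vm")
done
printf '%s\n' "${cmds[@]}"
# To actually run them: for c in "${cmds[@]}"; do eval "$c"; done
```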
Testing
Once both VMs are running, verify the setup from inside the client VM:
```bash
# Run this inside the client VM:

## Test connectivity to the router:
ping -c 3 10.99.0.1

## Test internet access through the router:
ping -c 3 1.1.1.1

## Test DNS resolution:
ping -c 3 google.com
```
Verify isolation
The client VM should not be able to reach the Proxmox host’s
management network directly. The host’s vmbr0 address is on a
different network, and since there is no masquerade or ip_forward on
the host for the VPC bridge, the client’s only path out is through
the router VM.
You can verify this by checking the routing table on the client:
```bash
# Run this inside the client VM:
ip route
```
The default route should point to 10.99.0.1 (the router VM), not
to the Proxmox host.
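If you want to script this check, you can parse the gateway out of the `ip route` output. The sketch below uses a hard-coded sample (the client's actual address will come from DHCP); on a real client, replace the here-string with `routes=$(ip route)`:

```shell
#!/usr/bin/env bash
# Extract the default gateway from (sample) `ip route` output and
# confirm it is the router VM, not the Proxmox host.
routes='default via 10.99.0.1 dev eth0
10.99.0.0/24 dev eth0 proto kernel scope link src 10.99.0.50'
gw=$(awk '/^default/ {print $3}' <<< "$routes")
echo "default gateway: $gw"   # prints: default gateway: 10.99.0.1
```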
The script
```bash
#!/bin/bash
## Create a Virtual Private Cloud (VPC) on Proxmox
## See https://blog.rymcg.tech/blog/proxmox/09-vpc/

## VPC Bridge configuration:
VPC_BRIDGE=${VPC_BRIDGE:-vmbr99}
VPC_HOST_CIDR=${VPC_HOST_CIDR:-10.99.0.2/24}
## Router VM configuration:
ROUTER_VM_ID=${ROUTER_VM_ID:-200}
ROUTER_HOSTNAME=${ROUTER_HOSTNAME:-router}
ROUTER_DISK_SIZE=${ROUTER_DISK_SIZE:-32G}
ROUTER_MEMORY=${ROUTER_MEMORY:-2048}
ROUTER_CORES=${ROUTER_CORES:-1}
PUBLIC_BRIDGE=${PUBLIC_BRIDGE:-vmbr0}
## Client VM configuration:
CLIENT_VM_ID=${CLIENT_VM_ID:-201}
CLIENT_HOSTNAME=${CLIENT_HOSTNAME:-client}
CLIENT_DISK_SIZE=${CLIENT_DISK_SIZE:-32G}
CLIENT_MEMORY=${CLIENT_MEMORY:-2048}
CLIENT_CORES=${CLIENT_CORES:-1}
## Shared configuration:
STORAGE=${STORAGE:-local-lvm}

_confirm() {
    set +x
    test "${YES:-no}" == "yes" && return 0
    default=$1; prompt=$2; question=${3:-". Proceed?"}
    if [[ $default == "y" || $default == "yes" ]]; then
        dflt="Y/n"
    else
        dflt="y/N"
    fi
    read -p "${prompt}${question} (${dflt}): " answer
    answer=${answer:-${default}}
    if [[ ${answer,,} == "y" || ${answer,,} == "yes" ]]; then
        return 0
    else
        echo "Canceled."
        return 1
    fi
}

create_vpc() {
    set -e
    echo "Creating VPC bridge ${VPC_BRIDGE} ..."
    ## Check if bridge already exists:
    if pvesh get /nodes/${HOSTNAME}/network/${VPC_BRIDGE} >/dev/null 2>&1; then
        echo "Bridge ${VPC_BRIDGE} already exists."
        return 0
    fi
    pvesh create /nodes/${HOSTNAME}/network \
        --iface ${VPC_BRIDGE} \
        --type bridge \
        --cidr ${VPC_HOST_CIDR} \
        --autostart 1 \
        --comments "VPC private bridge - no NAT"
    ## Apply the pending network configuration:
    pvesh set /nodes/${HOSTNAME}/network
    echo
    echo "Created VPC bridge: ${VPC_BRIDGE}"
    echo "  Host management IP: ${VPC_HOST_CIDR}"
    echo "  No masquerade, no ip_forward — routing is handled by the router VM."
}

create_router() {
    set -e
    echo "Creating router VM ${ROUTER_VM_ID} (${ROUTER_HOSTNAME}) ..."
    qm create ${ROUTER_VM_ID} \
        --name "${ROUTER_HOSTNAME}" \
        --cores ${ROUTER_CORES} \
        --memory ${ROUTER_MEMORY} \
        --bios ovmf \
        --efidisk0 ${STORAGE}:0,efitype=4m,pre-enrolled-keys=0 \
        --machine q35 \
        --net0 "virtio,bridge=${PUBLIC_BRIDGE}" \
        --net1 "virtio,bridge=${VPC_BRIDGE}" \
        --scsihw virtio-scsi-pci \
        --ide2 none,media=cdrom \
        --onboot 1
    ## Allocate a blank disk:
    pvesh create /nodes/${HOSTNAME}/storage/${STORAGE}/content \
        --vmid ${ROUTER_VM_ID} \
        --filename vm-${ROUTER_VM_ID}-disk-0 \
        --size ${ROUTER_DISK_SIZE} \
        --format raw
    qm set ${ROUTER_VM_ID} --scsi0 ${STORAGE}:vm-${ROUTER_VM_ID}-disk-0
    qm set ${ROUTER_VM_ID} --boot order=scsi0
    echo
    echo "Created router VM: ${ROUTER_VM_ID} (${ROUTER_HOSTNAME})"
    echo "  net0: ${PUBLIC_BRIDGE} (internet-facing)"
    echo "  net1: ${VPC_BRIDGE} (VPC private side)"
    echo "  Disk: ${ROUTER_DISK_SIZE} on ${STORAGE}"
    echo
    echo "Next steps:"
    echo "  1. Load an OS ISO into the CD/DVD drive via the Proxmox GUI"
    echo "  2. Start the VM and install the OS"
    echo "  3. Configure NAT/masquerade inside the router (see blog post)"
}

create_vm() {
    set -e
    echo "Creating client VM ${CLIENT_VM_ID} (${CLIENT_HOSTNAME}) ..."
    qm create ${CLIENT_VM_ID} \
        --name "${CLIENT_HOSTNAME}" \
        --cores ${CLIENT_CORES} \
        --memory ${CLIENT_MEMORY} \
        --bios ovmf \
        --efidisk0 ${STORAGE}:0,efitype=4m,pre-enrolled-keys=0 \
        --machine q35 \
        --net0 "virtio,bridge=${VPC_BRIDGE}" \
        --scsihw virtio-scsi-pci \
        --ide2 none,media=cdrom \
        --onboot 1
    ## Allocate a blank disk:
    pvesh create /nodes/${HOSTNAME}/storage/${STORAGE}/content \
        --vmid ${CLIENT_VM_ID} \
        --filename vm-${CLIENT_VM_ID}-disk-0 \
        --size ${CLIENT_DISK_SIZE} \
        --format raw
    qm set ${CLIENT_VM_ID} --scsi0 ${STORAGE}:vm-${CLIENT_VM_ID}-disk-0
    qm set ${CLIENT_VM_ID} --boot order=scsi0
    echo
    echo "Created client VM: ${CLIENT_VM_ID} (${CLIENT_HOSTNAME})"
    echo "  net0: ${VPC_BRIDGE} (VPC only — isolated from internet)"
    echo "  Disk: ${CLIENT_DISK_SIZE} on ${STORAGE}"
    echo
    echo "Next steps:"
    echo "  1. Attach an OS ISO via the Proxmox GUI (Hardware > CD/DVD Drive)"
    echo "  2. Start the VM and install the OS"
    echo "  3. Set the default gateway to the router's VPC IP address"
}

create_all() {
    create_vpc
    echo
    create_router
    echo
    create_vm
    echo
    echo "=== VPC setup complete ==="
    echo "Attach OS ISOs to both VMs via the Proxmox GUI, then start them."
}

status() {
    echo "=== VPC Bridge ==="
    if pvesh get /nodes/${HOSTNAME}/network/${VPC_BRIDGE} >/dev/null 2>&1; then
        pvesh get /nodes/${HOSTNAME}/network/${VPC_BRIDGE} --output-format=yaml 2>/dev/null
    else
        echo "Bridge ${VPC_BRIDGE} does not exist."
    fi
    echo
    echo "=== Router VM (${ROUTER_VM_ID}) ==="
    if qm status ${ROUTER_VM_ID} >/dev/null 2>&1; then
        qm status ${ROUTER_VM_ID}
        qm config ${ROUTER_VM_ID} | grep -E "^(name|net[0-9]|scsi[0-9]|memory|cores):"
    else
        echo "VM ${ROUTER_VM_ID} does not exist."
    fi
    echo
    echo "=== Client VM (${CLIENT_VM_ID}) ==="
    if qm status ${CLIENT_VM_ID} >/dev/null 2>&1; then
        qm status ${CLIENT_VM_ID}
        qm config ${CLIENT_VM_ID} | grep -E "^(name|net[0-9]|scsi[0-9]|memory|cores):"
    else
        echo "VM ${CLIENT_VM_ID} does not exist."
    fi
}

destroy() {
    echo "This will destroy the following resources:"
    echo "  - VM ${ROUTER_VM_ID} (${ROUTER_HOSTNAME})"
    echo "  - VM ${CLIENT_VM_ID} (${CLIENT_HOSTNAME})"
    echo "  - Bridge ${VPC_BRIDGE}"
    echo
    _confirm no "Are you sure you want to destroy the VPC" "?" || return 1
    echo "Destroying VPC ..."
    for VM_ID in ${ROUTER_VM_ID} ${CLIENT_VM_ID}; do
        if qm status ${VM_ID} >/dev/null 2>&1; then
            qm stop ${VM_ID} 2>/dev/null || true
            qm destroy ${VM_ID} --purge
            echo "Destroyed VM ${VM_ID}"
        else
            echo "VM ${VM_ID} does not exist, skipping."
        fi
    done
    if pvesh get /nodes/${HOSTNAME}/network/${VPC_BRIDGE} >/dev/null 2>&1; then
        pvesh delete /nodes/${HOSTNAME}/network/${VPC_BRIDGE}
        pvesh set /nodes/${HOSTNAME}/network
        echo "Destroyed bridge ${VPC_BRIDGE}"
    else
        echo "Bridge ${VPC_BRIDGE} does not exist, skipping."
    fi
    echo
    echo "VPC destroyed."
}

if [[ $# == 0 ]]; then
    echo "# Documentation: https://blog.rymcg.tech/blog/proxmox/09-vpc/"
    echo
    echo "Usage: $0 <command>"
    echo
    echo "Commands:"
    echo "  create_vpc     Create the VPC private bridge"
    echo "  create_router  Create the router VM (two NICs)"
    echo "  create_vm      Create a client VM (VPC only)"
    echo "  create_all     Create VPC bridge + router + client"
    echo "  status         Show VPC status"
    echo "  destroy        Tear down VPC, router, and client"
    echo
    echo "Environment variables (current values):"
    echo "  VPC_BRIDGE=${VPC_BRIDGE} VPC_HOST_CIDR=${VPC_HOST_CIDR}"
    echo "  PUBLIC_BRIDGE=${PUBLIC_BRIDGE}"
    echo "  ROUTER_VM_ID=${ROUTER_VM_ID} ROUTER_HOSTNAME=${ROUTER_HOSTNAME}"
    echo "  ROUTER_DISK_SIZE=${ROUTER_DISK_SIZE} ROUTER_MEMORY=${ROUTER_MEMORY}"
    echo "  CLIENT_VM_ID=${CLIENT_VM_ID} CLIENT_HOSTNAME=${CLIENT_HOSTNAME}"
    echo "  CLIENT_DISK_SIZE=${CLIENT_DISK_SIZE} CLIENT_MEMORY=${CLIENT_MEMORY}"
    echo "  STORAGE=${STORAGE}"
    exit 1
elif [[ $# -gt 1 ]]; then
    shift
    echo "Invalid arguments: $@"
    exit 1
else
    "$@"
fi
```
Cleanup
To tear down the entire VPC (both VMs and the bridge):
```bash
./proxmox_vpc.sh destroy
```
This will stop and destroy both VMs, then remove the VPC bridge. You will be prompted for confirmation before any resources are deleted.
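The `_confirm` helper in the script honors a `YES` environment variable, so `YES=yes ./proxmox_vpc.sh destroy` tears everything down without prompting (useful in automation). A reduced sketch of that code path:

```shell
#!/usr/bin/env bash
# Reduced version of the script's _confirm helper: when YES=yes, it
# returns success immediately instead of prompting interactively.
_confirm() {
    test "${YES:-no}" == "yes" && return 0
    echo "would prompt interactively"
    return 1
}

YES=yes _confirm && result="confirmed"
echo "$result"   # prints: confirmed
```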
You can discuss this blog on Matrix (Element): #blog-rymcg-tech:enigmacurry.com
This blog is copyright EnigmaCurry and dual-licensed CC-BY-SA and MIT. The source is on github: enigmacurry/blog.rymcg.tech and PRs are welcome. ❤️