Encrypting Libvirt VM Storage with LUKS

I recently needed to encrypt a dedicated disk used for VM storage on a libvirt/virt-manager setup. While my system disk was already encrypted, the VM storage disk was sitting there unencrypted, which felt like a security gap worth closing.

The Setup

I had a typical libvirt configuration with an LVM-based storage pool on a dedicated NVMe disk. The storage pool was configured as:

Physical Disk -> LVM PV -> VG "vms" -> LV (VM disks)

The goal was to add LUKS encryption while maintaining the LVM setup and ensuring automatic unlock at boot.
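
Before touching anything, it's worth confirming what the current stack actually looks like. The checks I find useful (assuming the pool and volume group are both named "vms", as above):

# Show the block device stack (disk -> PV -> VG -> LVs)
lsblk /dev/nvme0n1

# Inspect each LVM layer
sudo pvs
sudo vgs
sudo lvs vms

# Confirm how libvirt sees the pool
virsh pool-info vms
virsh vol-list vms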

Challenges

Unlike encrypting a system disk during installation, encrypting an existing VM storage disk requires:

  1. Migrating existing VMs off the disk
  2. Destroying and recreating the LVM stack
  3. Configuring automatic unlock without manual intervention
  4. Ensuring the encryption key is securely backed up

Implementation Approach

The strategy was to use LVM-on-LUKS, where the entire physical disk is encrypted before LVM sees it:

Physical Disk -> LUKS -> LVM PV -> VG "vms" -> LV (VM disks)

This provides full-disk encryption with a single unlock operation, rather than encrypting individual VM volumes.

Step-by-Step Process

1. Backup Existing VMs

First, I migrated the VM disks to temporary storage on the encrypted root filesystem:

# Create backup directory on encrypted root
mkdir -p ~/vm-backup

# Shutdown VM
virsh shutdown my-vm

# Backup the logical volume
sudo dd if=/dev/vms/my-vm-disk of=~/vm-backup/my-vm-disk.img bs=4M status=progress
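
I also saved the domain XML next to the disk image. It's not strictly needed here, since the LV names come back unchanged after the rebuild, but it's cheap insurance:

# Save the VM definition alongside the disk image (optional)
virsh dumpxml my-vm > ~/vm-backup/my-vm.xml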

2. Dismantle LVM Stack

# Remove logical volumes
sudo lvremove /dev/vms/my-vm-disk

# Remove volume group
sudo vgremove vms

# Remove physical volume
sudo pvremove /dev/nvme0n1

3. Encrypt the Disk

Generate a random key file for automatic unlocking:

# Create secure directory for keys
sudo mkdir -p /etc/cryptkeys
sudo chmod 700 /etc/cryptkeys

# Generate random key
sudo dd if=/dev/urandom of=/etc/cryptkeys/vms.key bs=512 count=8
sudo chmod 000 /etc/cryptkeys/vms.key

# Encrypt the disk with the key file (--batch-mode skips the interactive confirmation)
sudo cryptsetup luksFormat /dev/nvme0n1 /etc/cryptkeys/vms.key --batch-mode

# Open the encrypted device
sudo cryptsetup open /dev/nvme0n1 vms_crypt --key-file /etc/cryptkeys/vms.key
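
Before layering LVM back on top, a quick sanity check that the header looks right and the key file occupies a keyslot:

# Inspect the LUKS header: version, cipher, and populated keyslots
sudo cryptsetup luksDump /dev/nvme0n1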

4. Recreate LVM on Encrypted Device

# Create physical volume on encrypted mapper
sudo pvcreate /dev/mapper/vms_crypt

# Create volume group
sudo vgcreate vms /dev/mapper/vms_crypt

# Create logical volume for VM
sudo lvcreate -L 200G -n my-vm-disk vms

5. Restore VM Data

# Restore the VM disk from backup
sudo dd if=~/vm-backup/my-vm-disk.img of=/dev/vms/my-vm-disk bs=4M status=progress

# Refresh libvirt storage pool
virsh pool-refresh vms
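
To confirm the restore was bit-for-bit, I compared checksums. The LV may be larger than the image, so only the first image-sized chunk of the LV gets hashed; a sketch using the backup from step 1:

# Checksum the backup image
sha256sum ~/vm-backup/my-vm-disk.img

# Checksum the same number of bytes from the restored LV
sudo head -c "$(stat -c%s ~/vm-backup/my-vm-disk.img)" /dev/vms/my-vm-disk | sha256sum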

6. Configure Automatic Unlock

Add the encrypted device to /etc/crypttab for automatic unlock at boot:

# Get the Disk UUID
sudo blkid /dev/nvme0n1

# Add to /etc/crypttab (use the UUID from blkid output)
echo "vms_crypt UUID=<luks-uuid> /etc/cryptkeys/vms.key luks" | sudo tee -a /etc/crypttab

Update initramfs to include the new crypttab configuration:

sudo dracut --force
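
The entry can be sanity-checked without a reboot: systemd's crypttab generator produces one unit per line, so after a daemon-reload you can confirm the unit exists (it will show as manually configured here, since the device was opened by hand):

# Regenerate units from the updated crypttab
sudo systemctl daemon-reload

# Confirm the generator produced an unlock unit for the entry
systemctl cat systemd-cryptsetup@vms_crypt.service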

The Boot Sequence

Understanding when each unlock happens during boot is important:

  1. Initramfs stage: Root filesystem is unlocked (requires passphrase)
  2. Switch root: System mounts the encrypted root partition
  3. Systemd starts: Reads /etc/crypttab from the now-accessible root
  4. Auto-unlock: Systemd reads /etc/cryptkeys/vms.key and unlocks the VM disk
  5. LVM activation: Volume group becomes available
  6. VMs ready: Libvirt can start VMs with autostart enabled

Since the VM disk's key file is stored on the already-encrypted root filesystem, it's only accessible after you've unlocked the root partition at boot, providing security at rest while enabling automation.

Backing Up the Encryption Key

The encryption key needs to be backed up securely. I used 1Password CLI for this:

# Upload key to 1Password
sudo cat /etc/cryptkeys/vms.key | op document create \
  --title "$(hostname) - LUKS Key - vms_crypt" \
  --vault <vault-id> \
  --file-name vms.key

I also created a recovery document with:

  • Disk UUID for device identification
  • Device mapper name
  • LVM volume group details
  • Step-by-step recovery instructions
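
Most of that document can be generated straight from the live system; something along these lines (the exact fields are a matter of taste):

# Gather recovery details into a single text file
{
  echo "Host: $(hostname)"
  echo "LUKS UUID: $(sudo blkid -o value -s UUID /dev/nvme0n1)"
  echo "Mapper name: vms_crypt"
  echo "Volume group details:"
  sudo vgdisplay vms
} > recovery-notes.txt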

Recovery Process

If the key is lost or the system needs recovery:

# Download key from 1Password
op document get <item-id> --vault <vault-id> > vms.key

# Restore key file
sudo mkdir -p /etc/cryptkeys
sudo mv vms.key /etc/cryptkeys/vms.key
sudo chmod 000 /etc/cryptkeys/vms.key

# Unlock device
sudo cryptsetup open /dev/nvme0n1 vms_crypt --key-file /etc/cryptkeys/vms.key

# Activate LVM
sudo vgchange -ay vms

# VMs are now accessible

Important Considerations

Device Naming Consistency

Avoid using device names like /dev/nvme0n1 in documentation or backup labels, as these can change between boots depending on device enumeration order. Instead, use:

  • Disk UUID (from blkid)
  • Mapper names (/dev/mapper/vms_crypt)
  • Volume group names (/dev/vms)
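
When recovery day comes and enumeration may have shuffled the device names, the documented UUID resolves back to whatever node the disk landed on:

# Resolve the documented UUID to the current device node
readlink -f /dev/disk/by-uuid/<luks-uuid>

# Or skip the lookup and open by UUID directly
sudo cryptsetup open UUID=<luks-uuid> vms_crypt --key-file /etc/cryptkeys/vms.key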

Security Model

This approach provides:

  • Encryption at rest: All VM data is encrypted when the system is powered off
  • Automatic boot: No second passphrase needed after unlocking root
  • Compromise boundary: If the root filesystem is compromised while the system is running, the VM disk key is accessible too

This is a pragmatic balance for systems where you control physical access but want protection against disk theft.

Verification

After reboot, verify the setup:

# Check encrypted device is auto-unlocked
lsblk | grep vms_crypt

# Verify LVM stack
sudo pvs
sudo vgs
sudo lvs

# Check systemd service
systemctl status systemd-cryptsetup@vms_crypt.service

# View boot dependency chain
systemd-analyze critical-chain systemd-cryptsetup@vms_crypt.service

Conclusion

Encrypting existing VM storage with LUKS provides full-disk encryption while maintaining the flexibility of LVM. The key file approach enables automatic unlocking at boot while keeping the security properties intact: the key is protected by the already-encrypted root filesystem.

The migration process is straightforward: backup, rebuild with encryption, restore. The main gotcha is ensuring you have enough temporary space for the backup and properly updating the initramfs to include the new crypttab entry.

For production systems, I'd recommend testing the recovery process at least once to ensure you can actually restore from your backed-up key file. There's nothing worse than having a backup you've never tested.
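
A dry run doesn't have to touch the running setup: cryptsetup can verify a key against the header without creating a mapping. A minimal test, pulling the key fresh from 1Password:

# Fetch the backed-up key and verify it opens a keyslot (no mapping is created)
op document get <item-id> --vault <vault-id> > /tmp/vms-test.key
sudo cryptsetup open --test-passphrase /dev/nvme0n1 --key-file /tmp/vms-test.key && echo "key OK"

# Clean up the temporary copy
shred -u /tmp/vms-test.key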