Local Simulation vs Bare Metal

This document explains how the Multipass-based local cluster differs from a bare-metal deployment.

Differences at a glance

| Feature | Local (Multipass VM) | Bare Metal (Production) |
|---|---|---|
| Kernel | VM kernel, virtualized | Dedicated kernel on hardware |
| Init system | Native systemd (PID 1) | Native systemd (PID 1) |
| File system | Virtual disk image | Native filesystem |
| Networking | Virtual NICs, NAT or bridge | Physical NICs |
| Gateway API | Tailscale in VM | Tailscale on host |

Kernel and modules

Multipass VMs run their own kernels, so module loading and sysctl tuning happen inside the VM. Bare metal applies the same steps directly on hardware.
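The module and sysctl steps are the standard kubeadm prerequisites, and they look the same whether run inside the VM or directly on hardware. A sketch of those steps (the module names and sysctl keys are the generic kubeadm/CNI set, not anything specific to this repo):

```shell
# Load the kernel modules kubeadm and most CNIs expect,
# and make them persist across reboots.
cat <<'EOF' | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Enable bridged traffic to pass through iptables and IP forwarding.
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```

On a Multipass setup these commands execute inside the guest; on bare metal they touch the machine's own kernel, which is why sysctl mistakes are more consequential there.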

Networking model

Multipass uses virtual networking. This is closer to real host networking than container networking is, but it still differs from physical NICs in throughput, routing behavior, and latency characteristics.
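One way to see the difference is to compare the interfaces the VM sees with what the host reports for it (the VM name `k8s-node` below is a placeholder):

```shell
# Inside the VM: typically one virtual NIC with a NAT'd address
# handed out by the hypervisor's DHCP server.
ip -br addr show

# On the host: Multipass reports the VM's address and state.
multipass info k8s-node
```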

Storage behavior

Local clusters use VM disk images. Bare metal uses the host filesystem and persistent storage systems like Longhorn.
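Locally, the virtual disk's size is fixed when the VM is launched, whereas on bare metal the physical block devices are what Longhorn consumes. An illustrative comparison (names and sizes are examples, not this repo's defaults):

```shell
# Local: the root disk is a virtual image sized at launch.
multipass launch --name k8s-node --cpus 2 --memory 4G --disk 40G

# Bare metal: inspect the real block devices that back
# the filesystem and any Longhorn volumes.
lsblk
```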

How close is it to real hardware?

The Multipass workflow exercises the same kubeadm flow, systemd services, container runtime configuration, kernel modules, and CNI behavior as a physical node. It is a strong approximation for validating playbooks and cluster bootstrap logic.

What it does not test

  • Physical NIC throughput, offload behavior, and switch topology
  • Firmware, BIOS, and power management quirks
  • Disk controller performance and SMART behavior
  • Tailscale exit node performance on real uplinks

Moving from Multipass to bare metal

The migration path keeps the same Ansible roles and GitOps layout and switches only the host inventory and hardware assumptions.

Prepare the bare metal host

Follow Prerequisites and System Preparation on the real machine.

Switch inventory

Update ansible/inventory/hosts.yaml with the bare metal host IPs and user, then run the provisioning playbook again.
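A minimal shape for that inventory might look like the following (the host name, IP, and user are placeholders, and the actual group layout in this repo may differ):

```yaml
# ansible/inventory/hosts.yaml -- illustrative structure only.
all:
  hosts:
    k8s-node-1:
      ansible_host: 192.168.1.50   # bare metal IP (placeholder)
      ansible_user: ubuntu         # SSH user on the machine (placeholder)
```

With the inventory updated, re-running the provisioning playbook with `ansible-playbook -i ansible/inventory/hosts.yaml <playbook>` applies the same roles to the new host.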

Bootstrap the cluster

Follow Kubernetes, Cilium CNI, and ArgoCD and GitOps.