Proxmox VE Hardware Requirements Guide
Proxmox VE requires a 64-bit x86 CPU with hardware virtualization support, at least 2 GB of RAM for the Proxmox OS and services, and one network interface. That is enough to boot the installer and run a test VM.
It is not enough for production.
How much more depends on several factors. Your choice of storage backend (ZFS, Ceph, or hardware RAID) affects both RAM and controller requirements. So does the number of virtual machines you plan to run. Clustering for high availability and GPU or PCI passthrough each add their own requirements.
This guide breaks down the hardware requirements for each scenario. We cover the official minimums and walk through production recommendations for CPU, RAM, storage, and networking.
#Minimum hardware requirements for Proxmox
Proxmox VE can run on modest hardware for testing and evaluation. The official minimum specifications are as follows:
| Component | Minimum requirement |
|---|---|
| CPU | 64-bit Intel or AMD processor with Intel VT-x or AMD-V support |
| RAM | 2 GB for the OS and Proxmox services, plus additional memory for each guest |
| Storage | One hard drive or SSD (no minimum size specified) |
| Network | One Ethernet NIC |
| Browser | A current version of Firefox, Chrome, Edge, or Safari for the web interface |
The CPU must support hardware virtualization at the BIOS/UEFI level. Without Intel VT-x or AMD-V enabled, KVM will not function.
A separate feature, Intel VT-d or AMD-Vi, is required only if you plan to pass physical devices (such as GPUs or network cards) directly to virtual machines. It is not needed for standard virtualization.
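If you are working with an existing Linux machine and want to confirm these flags before installing, you can read them straight from the kernel. A minimal sketch, assuming a Linux host with /proc mounted:

```python
# Check /proc/cpuinfo for hardware virtualization support.
# vmx = Intel VT-x, svm = AMD-V; if neither appears, the feature is
# missing or disabled in BIOS/UEFI.
with open("/proc/cpuinfo") as f:
    flags = {word for line in f if line.startswith("flags") for word in line.split()}

if "vmx" in flags:
    print("Intel VT-x available")
elif "svm" in flags:
    print("AMD-V available")
else:
    print("No hardware virtualization flags found")
```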
These specifications are enough to install the hypervisor, create a few lightweight VMs or containers, and explore the web interface. They are not intended for production use.
The minimums also leave out several things that production systems require. There is no storage redundancy, no spare RAM for ZFS or Ceph overhead, no network failover, and no capacity to absorb a node failure in a cluster.
If you would rather skip the hardware planning, Cherry Servers offers preconfigured bare metal servers ready for Proxmox out of the box.
#Recommended hardware requirements for Proxmox
Production Proxmox hardware is not just about adding more RAM or CPU cores. The wrong storage controller can make ZFS unusable. Consumer SSDs without power-loss protection can lose data during a power failure. A single network interface becomes a single point of failure the moment you add a second node.
Getting these decisions wrong costs more than simply undersizing, because fixing them often means replacing hardware rather than upgrading it.
#CPU requirements
Your CPU choice affects VM density, overcommit headroom, and whether features like PCI passthrough and ECC memory are available.
Server-grade processors like AMD EPYC and Intel Xeon are built for sustained, multi-VM workloads. They offer higher core counts, more PCIe lanes for storage and network expansion, and support for large amounts of ECC memory.
Consumer CPUs like AMD Ryzen and Intel Core work well for smaller deployments, but they support fewer memory channels and offer limited PCIe expansion.
When sizing CPU cores, keep two things in mind. First, the CPU can be safely overcommitted. Most VMs do not use all their allocated cores at once, so assigning more total vCPUs than the host has physical cores is normal.
A practical starting point is 1-2 physical cores for every 4-8 vCPUs, depending on how CPU-intensive your workloads are.
Second, reserve roughly 20% of the host's CPU capacity for hypervisor overhead. This covers I/O emulation, network processing, and Proxmox services.
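To make those two rules concrete, here is a rough capacity sketch. The 4:1 vCPU-to-core ratio sits in the middle of the range above and is illustrative, not a hard rule:

```python
# Estimate how many vCPUs a host can comfortably serve.
def max_vcpus(physical_cores: int, ratio: float = 4.0, reserve: float = 0.20) -> int:
    usable_cores = physical_cores * (1 - reserve)  # keep ~20% for the hypervisor
    return int(usable_cores * ratio)

print(max_vcpus(16))  # 16-core host -> roughly 51 vCPUs at a 4:1 ratio
```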
Between the two main server CPU families, the choice comes down to density versus per-core speed:
- AMD EPYC offers more cores per socket, more PCIe lanes (up to 128), and higher memory bandwidth. It is the stronger choice for VM-dense environments.
- Intel Xeon offers competitive single-thread performance and Intel-specific accelerators such as AMX and QAT. It suits workloads that depend on per-core speed.
One important caution for smaller deployments: avoid Intel 13th and 14th-generation desktop CPUs (Core i7/i9 K-series) for always-on Proxmox servers.
Intel confirmed a hardware defect in these chips that causes permanent degradation under sustained load. Microcode updates slow the damage, but cannot reverse it. Intel 12th gen and Xeon E-series processors remain safe alternatives.
#RAM requirements
RAM is the easiest resource to get wrong in Proxmox, and the most costly. Unlike CPU, memory cannot be safely overcommitted. When a host runs out of physical RAM, the Linux OOM killer terminates processes to free memory, and those processes are often running VMs. There is no warning.
The total RAM a Proxmox host needs follows a simple formula (sketched in code after the list):
- Proxmox base: 2-4 GB for the OS and services.
- Guest allocations: the sum of RAM assigned to all VMs and containers.
- ZFS ARC overhead: approximately 1 GB per TB of managed storage, if using ZFS.
- Ceph daemon overhead: approximately 4 GB per OSD, plus 2 GB each for MON and MGR daemons, if running Ceph.
- HA failover buffer: enough free RAM on surviving nodes to absorb a failed node's VMs, if clustering.
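As a sketch, with the per-component constants from the list above treated as starting points rather than exact values:

```python
# Total host RAM in GB. guest_gb is the sum of all VM/container allocations.
def total_ram_gb(guest_gb: int, zfs_tb: float = 0, ceph_osds: int = 0,
                 base_gb: int = 4) -> float:
    zfs_arc = zfs_tb * 1                             # ~1 GB ARC per TB of ZFS storage
    ceph = ceph_osds * 4 + (4 if ceph_osds else 0)   # 4 GB per OSD + MON/MGR (2 GB each)
    return base_gb + guest_gb + zfs_arc + ceph

# 10 VMs at 4 GB each on 10 TB of ZFS:
print(total_ram_gb(guest_gb=40, zfs_tb=10))  # 54 -> provision 64 GB
```

The HA failover buffer is sized separately, since it depends on how many nodes survive a failure (see the cluster section below).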
ZFS needs special attention when sizing RAM. Its read cache, called the Adaptive Replacement Cache (ARC), is aggressive with memory by design. On older Proxmox installations, ARC can claim up to 50% of host RAM. Newer versions automatically cap it lower, but the safe move is to set an explicit limit yourself.
If you do not, ARC and your VMs end up fighting for the same memory. When both lose, disk I/O performance drops sharply.
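Setting the cap comes down to one module parameter, zfs_arc_max. A minimal sketch, assuming root access; the 8 GiB value is an example, and the paths are the standard OpenZFS locations on Linux:

```python
# Cap ZFS ARC at 8 GiB: persist the module option, then apply it live.
arc_max = 8 * 1024**3  # bytes

# Persist across reboots.
with open("/etc/modprobe.d/zfs.conf", "w") as f:
    f.write(f"options zfs zfs_arc_max={arc_max}\n")

# Apply immediately, without rebooting.
with open("/sys/module/zfs/parameters/zfs_arc_max", "w") as f:
    f.write(str(arc_max))
```

If the host boots from ZFS, regenerate the initramfs afterwards so the limit also applies early in boot.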
Ceph adds significant memory overhead on top of this. Each OSD daemon uses about 4 GB by default, and that number climbs during recovery or rebalancing. If you are running Ceph and VMs on the same nodes, RAM usage can add up quickly.
Proxmox includes two memory optimization features:
- KSM (Kernel Same-page Merging) deduplicates identical memory pages across VMs.
- Memory ballooning lets VMs return unused memory to the host.
Both have limits. KSM increases CPU load and creates security concerns in multi-tenant setups. Ballooning breaks with PCI passthrough and conflicts with ZFS ARC.
Neither replaces proper memory sizing.
For production, use ECC (Error-Correcting Code) memory. A single undetected bit flip in RAM can silently corrupt data on disk. ECC catches and corrects these errors before they cause damage.
For homelabs where data loss is not catastrophic, non-ECC works fine.
DDR generation depends on your CPU platform, not on Proxmox. AMD’s current platforms require DDR5. Intel supports both DDR4 and DDR5, depending on the chipset and motherboard. If you are buying new hardware, DDR5 is the default path.
#Storage requirements
Storage decisions are the hardest to reverse. Using the wrong controller makes ZFS and Ceph unusable, and switching drive types after deployment usually requires rebuilding from scratch.
OS storage
For the Proxmox operating system itself, a mirrored pair of SSDs is the standard production setup. The Proxmox installer supports ZFS mirroring out of the box, so no extra configuration is required. Each drive only needs 128-256 GB. That is enough for the Proxmox installation, container templates, ISOs, and logs.
Use enterprise-grade SSDs with power-loss protection (PLP). PLP ensures that data held in the drive's volatile write cache gets flushed to NAND during a power failure.
Without it, a sudden outage can corrupt data that the drive reported as written. The performance gap is significant as well. Enterprise SSDs with PLP can deliver tens of times more sync write IOPS than consumer drives without it.
VM and container storage
For VM and container storage, the performance gap between drive types is wide:
- NVMe SSDs: 500K-1.5M random 4K IOPS. Suited for databases, high-density VM hosting, and Ceph OSDs.
- SATA SSDs: 80K-100K random read IOPS. Good for general-purpose VMs and containers.
- HDDs: 75-400 random IOPS. Only suitable for backups and cold storage. Never use these as primary VM storage in production.
Storage controller compatibility
ZFS and Ceph both need direct access to individual disks. A hardware RAID controller hides disks behind virtual volumes, which breaks this requirement. ZFS can no longer checksum individual blocks or self-heal from redundant copies.
Ceph has the same problem. It cannot manage its own replication when a RAID controller sits between it and the physical drives.
The solution is a Host Bus Adapter (HBA) in IT (Initiator Target) mode, which passes raw disks straight to the operating system without any RAID abstraction. The most common choices are:
- LSI/Broadcom SAS 9300 series: PCIe 3.0, widely available, often found as Dell H330 or HBA330 rebrands for under $50 used.
- LSI/Broadcom SAS 9400 series: PCIe 3.1, supports SAS, SATA, and NVMe (U.2/U.3) on the same controller.
If you plan to use hardware RAID instead of ZFS or Ceph, that path still works. Proxmox supports ext4 and XFS on hardware RAID volumes. Just make sure the controller has a battery-backed write cache (BBU) for data safety during power events.
ZFS and Ceph disk layouts
ZFS mirror vdevs deliver the best IOPS for VM workloads. Reads scale across all disks and resilvers complete fast. RAIDZ2 offers better capacity efficiency but delivers roughly single-disk write IOPS regardless of vdev width.
A SLOG device (a small, high-endurance NVMe or Intel Optane drive) improves synchronous write performance for database-heavy workloads.
Ceph works differently. Use dedicated disks per OSD with uniform models and sizes within a pool. The slowest disk in the cluster defines performance. When using SATA SSDs or HDDs as OSD data disks, placing the BlueStore WAL/DB on a separate NVMe accelerates metadata operations.
#Network requirements
A single NIC is enough to run Proxmox, but it is also a single point of failure. Production systems require at least 2 NICs, and clustered setups benefit from separating different traffic types onto dedicated interfaces or VLANs.
A Proxmox cluster handles four distinct traffic types:
- Management: web UI, SSH, and API access. 1 GbE is sufficient.
- VM traffic: network I/O from your virtual machines and containers.
- Storage: Ceph replication, NFS/iSCSI mounts, or ZFS replication between nodes.
- Corosync: cluster heartbeat messages between nodes.
If storage or VM traffic saturates a shared link, those heartbeat packets get delayed. The cluster reads that delay as a node failure, which can cause quorum loss or fencing depending on the HA configuration. A dedicated Corosync network, even just a direct cable between nodes, prevents this.
Speed requirements tie directly to your storage backend. Management traffic runs fine on 1 GbE. Ceph with SATA SSDs needs 10 GbE at a minimum. A single NVMe drive can saturate a 10 GbE link on its own, so NVMe-backed Ceph clusters should use 25 GbE or faster.
For NICs, stick with Intel (I350, X550, X710, E810) or Mellanox/NVIDIA ConnectX adapters. Both have mature, well-maintained drivers in the Linux kernel. Consumer Realtek NICs work for home labs but tend to drop packets or stall under sustained production loads.
Bonding gives you NIC failover. Use 802.3ad (LACP) if your switch supports it, or active-backup if it does not.
One exception: do not run Corosync over bonded interfaces. Corosync natively supports up to 8 redundant links and handles its own failover.
#Proxmox hardware requirements by use case
The right hardware depends on what you are building. The six scenarios below cover the most common Proxmox deployment types, from a home lab mini PC to a multi-node Ceph cluster.
#Home lab or test server
This tier covers hobbyists, learners, and anyone evaluating Proxmox before deploying it in production. Data loss here is inconvenient, not catastrophic.
Consumer hardware works fine at this scale. A mini PC, a retired desktop, or an entry-level rack server can all handle the job. Mini PCs with dual NICs and an NVMe slot are a popular choice in the homelab community.
Sizing targets:
- CPU: 2-4 cores
- RAM: 16-32 GB
- Storage: one 256 GB-1 TB NVMe SSD
- Network: single 1 GbE NIC
This is enough to run 2-5 lightweight VMs or 5-10 containers. ECC memory and enterprise SSDs with PLP are not required at this tier. Regular backups are still important, though, since you have no storage redundancy on a single disk.
One thing to watch: if you plan to use ZFS, budget toward the higher end of the RAM range. 16 GB gets tight once ZFS ARC starts claiming its share alongside even a few VMs.
#Small business single-node server
This is for small teams running production workloads on a single dedicated server. No clustering, no automatic failover. Typically, 5-15 VMs handling things like web applications, databases, file sharing, or internal tools.
Sizing targets:
- CPU: 8-16 cores
- RAM: 32-64 GB ECC (more if using ZFS)
- Storage: 2x 500 GB-2 TB NVMe SSDs in a ZFS mirror, enterprise-grade with PLP
- Network: redundant 1 GbE NICs
Here is how the RAM math works in practice. Start with 4 GB for Proxmox itself. Add 10 VMs at 4 GB each, which is another 40 GB. If you are running ZFS on 10 TB of storage, add 10 GB for ARC. The total comes to roughly 54 GB; round up to 64 GB.
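The same math as a quick check:

```python
# Worked example, in GB: Proxmox base, 10 VMs x 4 GB, ARC for 10 TB of ZFS.
base, guests, zfs_arc = 4, 10 * 4, 10
print(base + guests + zfs_arc)  # 54 -> round up to 64 GB
```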
Without clustering, a hardware failure means downtime. Your safety net is backups and, if you have a second machine available, scheduled replication. Test your restores regularly.
If sourcing and racking your own hardware is not practical, a bare metal dedicated server with Proxmox support is a viable alternative. Look for providers that offer enterprise SSDs, ECC memory, and remote management (IPMI/iKVM) so you can reinstall or troubleshoot without physical access.
#ZFS-based Proxmox host
If you have already decided on ZFS, the main sizing challenge is memory. ZFS performs best when it has enough RAM for both its read cache and your VMs. If ARC is starved, read performance drops. If VMs are starved, the OOM killer steps in.
The official formula is 2 GB base plus 1 GB per TB of managed storage. But that is a floor. In practice, allocating 25-50% of total host RAM to ARC gives better results for VM-heavy workloads. The exact split depends on whether you prioritize cache hit rates or VM density. Set zfs_arc_max explicitly rather than relying on defaults.
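One way to pick a value is to derive it from physical RAM. A sketch, assuming a Linux host; the 25% fraction is illustrative:

```python
# Compute a zfs_arc_max candidate as a fraction of physical RAM.
def arc_max_bytes(fraction: float = 0.25) -> int:
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                total_kib = int(line.split()[1])
                return int(total_kib * 1024 * fraction)
    raise RuntimeError("MemTotal not found in /proc/meminfo")

print(arc_max_bytes())  # ~16 GiB on a 64 GB host at the 25% default
```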
A quick layout comparison for VM storage:
- Four 1 TB disks as two mirror vdevs: 2 TB usable, strong IOPS, fast resilver times. Reads scale across all disks, making this the preferred layout for VM workloads.
- The same four disks in RAIDZ2: the same 2 TB usable, but write IOPS stay close to single-disk performance. Better suited for bulk storage where capacity matters more than speed.
For synchronous write workloads like databases or NFS, a SLOG device improves performance significantly. This is a small, high-endurance NVMe or Intel Optane drive with PLP, typically 16-64 GB. If your workloads are mostly asynchronous, you do not need one.
Your storage controller must be an HBA in IT mode. Hardware RAID controllers are incompatible with ZFS.
ECC memory is strongly recommended at this tier, since ZFS cannot detect or correct errors that happen in RAM before data is written to disk.
#Proxmox cluster node
Proxmox clustering requires a minimum of three nodes. This is because Proxmox uses Corosync for consensus, and a majority of nodes must agree for the cluster to function. If one node goes down in a three-node cluster, the remaining two still form a majority.
A two-node cluster is possible with a QDevice, a lightweight external voter that runs on something as small as a Raspberry Pi. But three nodes remain the standard recommendation.
The key sizing constraint for clustered nodes is failover headroom. Each node should run at no more than 65-70% RAM utilization. That way, if one node fails, the surviving nodes have enough free memory to take over its VMs.
For example, consider 30 VMs across 3 nodes, each VM averaging 4 GB of RAM. That is 10 VMs per node, which comes to 40 GB for guests plus 4 GB for Proxmox overhead. At a 70% utilization target, each node needs roughly 64 GB total to leave enough room for failover.
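The same headroom calculation as a sketch, using this section's 70% target and 4 GB of per-node overhead as assumptions:

```python
import math

# Per-node RAM so surviving nodes can absorb a failed node's VMs.
def node_ram_gb(total_vms: int, nodes: int, vm_gb: int = 4,
                base_gb: int = 4, target: float = 0.70) -> int:
    per_node_load = (total_vms / nodes) * vm_gb + base_gb
    return math.ceil(per_node_load / target)

print(node_ram_gb(total_vms=30, nodes=3))  # 63 -> provision 64 GB per node
```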
Shared storage is required for live migration and high availability. Your options include Ceph (covered in the next section), NFS, iSCSI, or ZFS replication between nodes.
Corosync needs a dedicated link separate from storage and VM traffic.
Keep hardware identical or near-identical across nodes. Mismatched CPU vendors between nodes can cause live migration failures.
#Hyper-converged Ceph node
Running Ceph and VMs on the same nodes saves hardware but tightens every resource constraint. CPU, RAM, storage, and networking all need to be sized for both storage and compute duties.
Three nodes are the minimum for Ceph's default triple replication. But with only three nodes, a single failure leaves the cluster degraded. There is no tolerance for a second failure before recovery completes. Five or more nodes are the safer choice for production.
RAM is the tightest constraint. Here is how it adds up on a single node running 6 OSDs and 10 VMs:
| Component | RAM required |
|---|---|
| Proxmox base | 2 GB |
| Ceph MON daemon | 2 GB |
| Ceph MGR daemon | 2 GB |
| 6 OSDs at 4 GB each | 24 GB |
| 10 VMs at 4 GB each | 40 GB |
| Total | 70 GB |
Round that to 80-96 GB per node at minimum. Nodes with more OSDs or larger VMs will need 128-256 GB.
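The table generalizes to a simple sum; the daemon sizes below are the defaults cited above:

```python
# Per-node RAM for a hyper-converged Ceph node, in GB.
def hci_node_ram_gb(osds: int, vms: int, vm_gb: int = 4,
                    base: int = 2, mon: int = 2, mgr: int = 2,
                    osd_gb: int = 4) -> int:
    return base + mon + mgr + osds * osd_gb + vms * vm_gb

print(hci_node_ram_gb(osds=6, vms=10))  # 70 -> provision 80-96 GB
```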
For disks, plan at least 4 OSDs per node using uniform models and sizes. Keep Ceph OSD disks separate from the Proxmox boot drives. If your OSD data disks are SATA SSDs or HDDs, place the BlueStore WAL/DB on a dedicated NVMe. This accelerates metadata operations and write acknowledgments.
Hyper-converged networking has the heaviest requirements of any deployment type. Ceph requires at least 10 GbE, and NVMe-backed OSDs should use 25 GbE.
Separate Ceph public and cluster traffic onto different physical networks, and keep management and Corosync on their own links. This typically means four NICs or dual-port 10/25 GbE cards per node.
For CPU, Ceph OSD daemons benefit from higher base clock speeds rather than just core count. Budget 1-2 additional cores per OSD on top of your VM allocations.
#GPU or PCI passthrough server
GPU and PCI passthrough have the most specific hardware requirements of any Proxmox deployment type. The wrong motherboard, boot mode, or CPU feature set can make passthrough impossible.
The CPU must support VT-d (Intel) or AMD-Vi, and the feature must be enabled in BIOS/UEFI. These are separate from the base VT-x/AMD-V flags required for standard virtualization.
Each device you want to pass through must sit in its own IOMMU group. Server platforms (Xeon, EPYC) generally have clean group isolation. On the consumer side, AMD X570 and X670 boards offer the best separation, while Intel consumer chipsets vary widely. Check IOMMU groups before buying a motherboard.
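If you already have the hardware, you can enumerate the groups from sysfs. A minimal sketch, assuming the IOMMU is enabled in firmware and the kernel (the directory is empty otherwise):

```python
import os

# List each IOMMU group and the PCI devices it contains.
base = "/sys/kernel/iommu_groups"
for group in sorted(os.listdir(base), key=int):
    devices = os.listdir(os.path.join(base, group, "devices"))
    print(f"group {group}: {', '.join(sorted(devices))}")
```

A device can only be passed through cleanly if everything else in its group can move to the VM with it.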
The host must boot in UEFI mode with CSM (Compatibility Support Module) disabled. Legacy BIOS initializes GPUs during POST, which prevents them from being re-initialized for passthrough.
If you pass through your only discrete GPU, the host loses all display output. A CPU with integrated graphics avoids this problem by keeping the host accessible through the web UI and local console.
All memory allocated to a passthrough VM is pinned and cannot be shared or reclaimed. Factor this into your total host RAM.
Each passed-through device consumes PCIe lanes. AMD EPYC offers up to 128 lanes. Consumer platforms have far fewer and run out quickly with more than one GPU.
Both NVIDIA and AMD consumer GPUs work with Proxmox passthrough. AMD GPUs generally pass through with less friction. One exception: the RX 5000 series has a known reset bug that prevents the GPU from reinitializing after a VM shuts down. RX 6000 and 7000 series cards are unaffected.
#Common mistakes to avoid when sizing a Proxmox server
Some hardware decisions are easy to get wrong and expensive to fix. The following often cause problems in production Proxmox environments.
- Consumer SSDs without PLP: A power failure can destroy data in the drive's volatile write cache. With ZFS or Ceph, this often means rebuilding the entire pool or OSD from backup.
- ZFS or Ceph on a hardware RAID controller: Both require direct disk access to function correctly. You often do not discover the problem until the data is already on the drives. Fixing it means replacing the controller and rebuilding storage from scratch.
- Memory overcommit: CPU overcommit is safe. Memory overcommit is not. When physical RAM runs out, the Linux OOM killer terminates VMs without warning.
- No failover capacity in a cluster: Running nodes at 90%+ RAM utilization defeats the purpose of high availability. When a node fails, the surviving nodes need enough free memory to take over its VMs. If they do not have it, HA cannot do its job.
- Single NIC in a clustered setup: Corosync traffic shares bandwidth with storage and VM traffic. When the link gets congested, heartbeat packets get delayed. The cluster interprets that as a node failure, potentially triggering fencing or quorum loss.
#Conclusion
Across all deployment types, three hardware decisions matter most: enterprise SSDs with power-loss protection for data safety, ECC RAM for long-term reliability (especially with ZFS), and an HBA in IT mode if you are running ZFS or Ceph. Getting these right from the start is far cheaper than replacing hardware after data is already on the drives.
If you would rather skip the hardware sourcing and go straight to deploying, Cherry Servers offers bare metal servers for virtualization with AMD EPYC and Ryzen CPUs, enterprise-grade storage, and Proxmox installation support. All plans include hourly billing for testing configurations, 24/7 technical support from real engineers, and up to 100 TB of free monthly egress traffic.
#FAQs
How much RAM do I need to run 10 virtual machines on Proxmox?
It depends on what each VM requires. As a rough example: 4 GB for Proxmox itself, 10 VMs at 4 GB each, and 10 GB for ZFS ARC on 10 TB of storage. That comes to about 54 GB. Round up to 64 GB for headroom.
Can Proxmox run on consumer hardware?
Yes. Consumer CPUs like AMD Ryzen and Intel Core support the virtualization features Proxmox needs. Mini PCs, desktops, and tower servers all work for home labs and small deployments. The main limitations are fewer PCIe lanes, limited ECC support on some boards, and less expansion room.
Does Proxmox require ECC RAM?
No. Proxmox runs fine on non-ECC memory. But for production systems, ECC is strongly recommended. It detects and corrects single-bit memory errors that can silently corrupt data. This matters most with ZFS, which checksums data on disk but cannot catch errors that occur in RAM before a write.