Installing Proxmox on a Dell PowerEdge VRTX
The Dell PowerEdge VRTX is a compelling piece of hardware for a homelab cluster. Four M630 blade slots, a shared 25-drive backplane, a shared PCIe fabric, and enough compute density to run a serious private cloud in a 5U chassis. It also has one of the most frustrating storage controller situations I have encountered in years of running Linux on enterprise iron.
This post documents what actually happens when you try to install Proxmox VE on M630 blades connected to the VRTX shared PERC8 storage controller, and how to get past it.
The Hardware Context
The VRTX chassis exposes storage to blades through a Shared PERC 8 controller. From the blade's perspective, this looks like a standard SAS controller accessible over the internal PCIe fabric. The driver responsible for talking to it is megaraid_sas.
The problem is that newer kernels — including the kernel shipped with current Proxmox VE ISO images — have a version of megaraid_sas that has a known interaction problem with the PERC8 firmware revisions Dell shipped on VRTX units. The result is not a clean error. It is a driver that loads, enumerates the controller, and then hangs during I/O operations. The installer sees disks but cannot write to them reliably. Sometimes it gets further and the system panics on first boot.
Dell has not updated VRTX firmware in years. The chassis is officially end of life. The kernel keeps moving. The intersection is your problem.
What the Failure Looks Like
If you boot the Proxmox installer and the storage controller is not behaving, you will see one of two things. Either the disk selection screen shows your RAID volumes but installation fails partway through with an I/O error, or the installer completes, the system reboots, and you get a kernel panic immediately after the initramfs hands off to the main system.
The kernel log will contain something along these lines:
megaraid_sas: FW in FAULT state, Fault code: 0x...
megaraid_sas: waiting for hw_init to complete
megaraid_sas: failed to reset adapter
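To check whether this is what you are hitting, grep the ring buffer for that signature. A quick sketch, run from the installer's debug shell or over SSH:

```shell
# Count megaraid_sas fault/reset messages in the kernel ring buffer.
# dmesg may need root; without it the check simply finds zero matches.
faults=$(dmesg 2>/dev/null | grep -ciE 'megaraid_sas.*(fault|reset|hw_init)' || true)
if [ "$faults" -gt 0 ]; then
    echo "PERC8 fault signature present ($faults messages)"
else
    echo "no megaraid_sas fault messages logged"
fi
```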
The Actual Fix
There are two approaches that work. The first is a kernel parameter. The second is a module parameter passed to megaraid_sas at load time.
Kernel parameter approach:
At the Proxmox installer GRUB prompt, press e to edit the boot entry and append the following to the linux line:
pci=nomsi
This disables MSI interrupts for all PCI devices. It is blunt but effective. The PERC8 on VRTX has known MSI handling issues with newer Linux interrupt handling. Disabling MSI forces the controller to use legacy INTx interrupts, which are slower but functional.
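If you go the pci=nomsi route and want it to survive into the installed system, append it to GRUB_CMDLINE_LINUX in /etc/default/grub and run update-grub. A sketch of the edit, demonstrated on a sample line rather than the live file (back the real file up before editing in place):

```shell
# Append pci=nomsi inside the quoted GRUB_CMDLINE_LINUX value.
# Shown against a sample line; point sed at /etc/default/grub for real.
line='GRUB_CMDLINE_LINUX="quiet"'
printf '%s\n' "$line" | sed 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 pci=nomsi"/'
# prints: GRUB_CMDLINE_LINUX="quiet pci=nomsi"
```

After editing the real file, run update-grub so the change lands in the generated boot entries.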
Module parameter approach:
The more surgical fix is to pass msix_disable=1 to the megaraid_sas module. You can do this at the installer boot prompt the same way, appending:
megaraid_sas.msix_disable=1
After installation, make this permanent by adding the module parameter to /etc/modprobe.d/megaraid_sas.conf:
options megaraid_sas msix_disable=1
Then regenerate the initramfs:
update-initramfs -u -k all
Without regenerating the initramfs, the parameter will not take effect at the early boot stage when the controller is first initialised, and you will still panic on reboot.
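Once the system is back up, you can confirm the parameter actually reached the driver: megaraid_sas exposes its load-time parameters read-only under sysfs. A quick check (the node only exists on a machine where the module is loaded):

```shell
# Read the msix_disable value the driver was actually loaded with.
param=/sys/module/megaraid_sas/parameters/msix_disable
if [ -r "$param" ]; then
    echo "msix_disable=$(cat "$param")"   # expect 1 after the fix
else
    echo "megaraid_sas is not loaded on this machine"
fi
```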
PXE Installation Adds Another Layer
If you are trying to PXE-boot the Proxmox installer rather than using a USB stick — which is the right approach when you have multiple blades and do not want to physically touch each one — you need to pass the module parameter through your PXE boot configuration.
In an iPXE or PXELINUX setup, the append line needs to carry the parameter through to the kernel:
append initrd=proxmox-ve.img megaraid_sas.msix_disable=1 ...
The blade will boot the installer over the network with the correct driver behaviour from the start.
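For the iPXE case, a fuller boot script might look like the following. This is a sketch: the URLs and filenames are placeholders for wherever you extracted the kernel and initrd from the Proxmox VE ISO.

```
#!ipxe
# Hypothetical paths -- point these at your own boot server and at the
# kernel/initrd files unpacked from the Proxmox VE ISO.
kernel http://boot.example.lan/proxmox/linux26 megaraid_sas.msix_disable=1
initrd http://boot.example.lan/proxmox/initrd.img
boot
```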
VRTX-Specific BIOS Settings
Before any of this matters, verify two things in the blade BIOS and the chassis CMC.
First, the shared storage must be assigned to the blade in the chassis management console. Go to the CMC web interface, navigate to Storage and confirm that the RAID volumes you created on the shared PERC are allocated to the blade slot you are installing on. If they are not assigned, the blade will not see them at all regardless of driver state.
Second, in the blade BIOS, ensure the boot order places the local storage device above PXE if you are doing a local install. The VRTX fabric can present multiple boot options that are not obvious from the BIOS screen labels.
After Installation
Once Proxmox is running with the module parameter in place, behaviour is stable. The PERC8 on VRTX performs adequately for a homelab cluster. You are not going to saturate the shared backplane with four M630 blades running home workloads.
The shared storage architecture does mean that all blades share I/O bandwidth through the same controller. Plan your Proxmox storage accordingly. For anything latency-sensitive, use local NVMe in the blade if your M630 configuration includes the NVMe backplane option. Use the shared VRTX storage for bulk VM disks, ISOs, and backups.
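In Proxmox terms, that split might look like this in /etc/pve/storage.cfg. A sketch with assumed names: vrtx-shared is an LVM volume group carved from the shared PERC, local-nvme an LVM-thin pool on blade-local NVMe.

```
# Shared PERC8 volume group, visible to every blade in the cluster.
lvm: vrtx-shared
        vgname vrtx_vg
        content images,rootdir
        shared 1

# Blade-local NVMe thin pool for latency-sensitive guests.
lvmthin: local-nvme
        vgname nvme_vg
        thinpool data
        content images,rootdir
```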
The cluster itself runs fine once the driver issue is out of the way. Three M630 blades in a Proxmox cluster with shared storage and a dedicated management VLAN make a legitimate private cloud on a home or small lab budget, assuming you picked up the hardware at the prices enterprise gear fetches when it falls off lease.