HomeLab Equipment


November 9, 2025 | Homelab, VCF, VMware

Overview

Greetings! I was recently asked what equipment is in my lab and what it can do. When I mentioned it to Aaron, he offered to host this on his website (thanks!), so here is a breakdown of the equipment in my newly acquired lab and its intended uses.

Overall, this environment will be used to gain experience with VMware virtualization technologies, specifically VMware Cloud Foundation (VCF) and the Modern Private Cloud (MPC).

Servers

My general logic was to build something structurally similar to customer environments I might encounter: enterprise-level equipment, but not the latest and greatest. For me, that meant servers recently out of support, but not too far out of support, at a reasonable price. Enter the Dell R640.

Note: the Intel Xeon 61xx-series processors are deprecated in ESXi 8 and not supported in ESX 9, though this can be overridden at install time.
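As a sketch of how that override works (an unsupported, community-documented installer option; verify against the release notes for your ESXi version):

```shell
# At the ESXi installer boot screen, press Shift+O to edit the boot options,
# then append the (unsupported) legacy-CPU flag to whatever is already shown:
#
#   <existing boot options> allowLegacyCPU=true
#
# For a scripted install, the same option can be added to the kernelopt line
# in boot.cfg. Lab use only; an unsupported CPU stays unsupported.
```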

Having decided that, I then had to decide where to buy them, with eBay and Amazon the immediate candidates. eBay won out, as it lets sellers provide drop-downs to customize the specifications, giving more flexibility in capability and pricing.

All four nodes are also using all-NVMe drives to allow for vSAN ESA. This also leaves the option to test memory tiering in vSphere 8 or VCF 9 if I reserve a drive per host for that purpose.

Note: these drives are not on the Hardware Compatibility List (HCL), so you will need to override the deployment for them to work.
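Before overriding anything, it's worth confirming that ESXi at least enumerates the drives. From the ESXi shell (assuming SSH or the local shell is enabled), something like:

```shell
# List all storage devices and filter for NVMe:
esxcli storage core device list | grep -i nvme

# Quick summary of which disks vSAN considers eligible
# (vdq is a handy, though lightly documented, built-in utility):
vdq -q
```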

Initially I went with three servers so that I didn't need to do nested virtualization, but then moved to four, since one of my goals is an apples-to-apples deployment, which for VCF 5.x means four hosts. This can of course be overridden to use fewer hosts, as can the three-node NSX Manager requirement. Either way, I did not want the complications and additional variables of a nested lab.
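For reference, the commonly cited (and unsupported) homelab tweak for the four-host minimum is editing the bringup properties on the Cloud Builder appliance before deployment. The property name below comes from community guides, so verify it against your VCF build:

```shell
# On the VCF 5.x Cloud Builder appliance (unsupported lab tweak;
# property name per community guides, confirm it exists in your build):
sudo sed -i 's/bringup.mgmt.cluster.minimum.size=4/bringup.mgmt.cluster.minimum.size=3/' \
  /etc/vmware/vcf/bringup/application.properties

# Restart the bringup service so the change takes effect:
sudo systemctl restart vcf-bringup
```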

Add-In Card

As purchased, these servers came with 2x 1GbE and 2x 10GbE NICs. I wanted 25GbE for my backbone, so I also bought one expansion card per server from eBay: the Dell Broadcom 57414 dual-port 25GbE SFP PCIe card.

Networking

Here I wanted to meet the MPC requirement of 25GbE networking for the backbone. My first choice was Cisco, but that gets pricey at this speed; second was Arista, with its near-Cisco CLI. In my case that led to a two-switch setup: an Arista 1GbE switch with 10GbE uplinks, and a Dell 25GbE SFP switch for the backbone. Overkill for a home lab, but in line with intended architectures today for those running MPC. Both were purchased from eBay.

Arista switch for 1GbE network elements

Dell 25GbE switch for backbone data

Router\Firewall

You need to protect your environment, and this made a good excuse to go beyond the standard ISP router's firewall.

In this case I did not want to go too far down the rabbit hole, so I went simple, possibly a bit too simple: a Firewalla Gold Pro, purchased from Firewalla's website.

This acts as both the lab's router and firewall behind the ISP's modem, though it does limit anything transiting the firewall to 10GbE.
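One way to see that bottleneck, and to confirm traffic on the 25GbE backbone is not transiting the firewall, is a quick iperf3 test between two lab endpoints. The address below is a placeholder for your own lab addressing:

```shell
# On one endpoint, start an iperf3 server (iperf3 installed on both ends):
iperf3 -s

# From a second endpoint, run a 10-second throughput test against it
# (192.168.10.11 is a hypothetical lab address):
iperf3 -c 192.168.10.11 -t 10
```

Traffic staying on the backbone should report near line rate; anything routed through the Firewalla should top out around 10GbE.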

The overall intent with this environment is to provide the flexibility to run anything from a basic vSphere 7/8 environment all the way up to a full VCF 9 deployment with VCF Automation, and anything in between.

Persistent Node

Now since building out various configurations of the environment is destructive, I have a single persistent ESXi 8 node.

This is for servers\services that I want to survive tear down and rebuild of the lab environment, such as Active Directory\LDAP, Certificate Services, Jump Box, etc.

For this I am using a Minisforum MS-A2 node with 96GB of RAM, a 2TB NVMe SSD, and a 4TB NVMe SSD.

Cabling

To connect all of the 25GbE links, I opted for DAC cables from FS.com, which lets you customize the connector coding on each end for better compatibility with specific switch and NIC vendors.
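Once the DACs are in, a quick sanity check from the ESXi shell confirms each port links at the expected speed (the vmnic numbering below will vary per host):

```shell
# List all NICs with their driver, link state, and negotiated speed:
esxcli network nic list

# Detailed link info for a single 25GbE port (name varies per host):
esxcli network nic get -n vmnic4
```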

For 10GbE, I went to Amazon and got some generic DAC cables.

For the 1GbE connections I also opted for generic Cat 8 ethernet cables from Amazon.

Power

Power matters, and with this being an expensive investment, I opted for one UPS per server, with (at least currently) one power supply plugged into the battery-backed outlets and one into the surge-protected outlets. These were bought off Amazon.

Console Cable

You will need a console cable to connect to the switches. Most laptops no longer have the old-style serial connectors, so I went with a USB-to-console cable; just make sure you have a USB-A port or an adapter for it. This was bought off Amazon.
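Once the cable is plugged in, connecting from a Linux or macOS laptop is typically just a terminal program pointed at the serial device. The device name and settings below are common defaults for this class of gear, so check your switch's documentation:

```shell
# Typical settings for Dell/Arista console ports are 115200 baud, 8N1.
# On Linux the cable usually shows up as /dev/ttyUSB0;
# on macOS, as /dev/cu.usbserial-<something>.
screen /dev/ttyUSB0 115200
```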

Rack

Lastly, I needed something to store the equipment in. Back to eBay I went.

Misc

Now, to interface with it all, I use a pre-existing laptop and a monitor\mouse\keyboard; nothing fancy is needed. Just be mindful of what type of video cable your server needs. In the case of the R640, that is still VGA.

Result

Overall, not the prettiest setup, but here is what the “finished” product looks like.

Conclusion

I hope this helps anyone considering a HomeLab setup. Beyond the ability to demo\test VCF, this can also run additional capabilities such as photo and movie servers, or home automation like Home Assistant. The sky's the limit.

