ESXi on a low cost, low power, small footprint, decent memory Haswell i5

Excuse me, do these effectively hide my thunder?

2013-11-29    »   guide, software, vmware

I needed a modest virtualisation lab for running a fair number of virtual machines for work. These are constrained to work only on VMware’s ESX virtualisation platform, regrettably, or I’d happily bimble along with virt-manager and kvm.

In addition to a stack of these VMs for work, I'll relocate all my server functionality into the box, which means I'm looking at an almost-always-on requirement, so power consumption becomes particularly important.

Because my work VMs have relatively low CPU requirements compared to memory, I need something that gets past the common limitation (today, at least) of smaller/home ESXi builds: 16GB of RAM. I'm happy with an arbitrary cap of 32GB, however, as I find the CPU:memory ratio starts to lean the other way on a single-CPU machine with 32GB, at which point I'd rather just run a second machine. That would also satisfy my interest in playing with physical network gear in between, rather than just the virtualised networking you can do within ESXi.
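To put rough numbers on that ratio, here's a back-of-envelope sketch. The per-VM sizing and the overcommit ratio are my assumptions for illustration, not VMware guidance:

```python
# Rough VM-capacity estimate for a 4-core, 32GB ESXi host.
# All per-VM figures and ratios below are assumptions for illustration.

host_ram_gb = 32
hypervisor_overhead_gb = 2   # assumed ESXi + management overhead
ram_per_vm_gb = 2            # assumed average small work VM

host_cores = 4
vcpus_per_vm = 2
cpu_overcommit = 8           # assumed vCPU:pCPU ratio, generous for near-idle VMs

vms_by_ram = (host_ram_gb - hypervisor_overhead_gb) // ram_per_vm_gb
vms_by_cpu = host_cores * cpu_overcommit // vcpus_per_vm

print(f"RAM-limited: {vms_by_ram} VMs, CPU-limited: {vms_by_cpu} VMs")
```

Under these assumptions the two constraints roughly balance at 32GB (15 vs 16 VMs), whereas at 16GB memory runs out long before CPU does.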

The choices

Speaking to colleagues and browsing the Interwebs suggests myriad permutations of home-grown ESXi builds.

AMD seems to be the more popular choice, simply because they tend to offer more cores, or at least a better core:cost ratio. There are some low-power options in the AMD camp, but a couple of posts ( ESXi Home Lab with Intel Haswell CPU and Haswell low power whitebox for ESXi and Hyper-V ) extolling the power (NPI) of Haswell were influential in my decision.

I looked at HP Microservers, which can be had for small numbers of dollars, even here in Australia. The big limitation with those, however, is memory: the previous model shipped with 8GB and could be wrangled to hold 16GB, and the new models max out at 16GB, though rumours suggest some people have cranked them up to 32GB. By the time you spec the box out properly, it becomes a high-cost option, with the usual array of proprietary hardware and tiny-case challenges.

At the back of my mind, I'm also trying to future-proof my network. There's a tiny chance that I may build another similar machine (smaller memory, bigger case) to run as my NAS, and, as above, perhaps a second ESXi box. Either way I'd want to match the hardware specs across all machines, so having plenty of on-board SATA and other I/O ports is an advantage, and the cost of these features on a mobo is quite low compared to the cost of memory.

The components

So here’s what I ended up using.

Prices are in AUD, current as at 2013-11.

$109   Case - Silverstone SG02W-F-USB3 (white) including freight ($24)
$109   PSU - Silverstone ST60F-P (modular, 80+, 600W)
$102   Motherboard - Gigabyte GA-B85M-D3H LGA 1150
$235   CPU - Intel i5 4570S
$327   RAM - G.SKILL RipjawsX 32GB (4x 8GB) DDR3 1600MHz 
 $39   NIC - Intel PRO/1000
$921   Total

2014-10 Addendum (see details below) – I subsequently added a 4-port SATA controller (HP 433906-001, aka an LSI SAS3041E) so I could run a pair of low-power 2TB Seagate ‘green’ hard disks in RAID1. This added A$70 for the controller plus whatever the drives are worth (I had a couple lying around).

The rationale

A truly tiny case was out of the question - too limiting on motherboards, CPU + coolers, the option to stick in cards (multiple Intel NICs are on the horizon), and the option to put in some local storage (a NAS is some months away) - and I had no great personal motivation to keep the box really small. Relatively small is good enough. Front-accessible USB3, a couple of internal 3.5" bays, and a 5.25" bay for a DVD drive are all handy. Actually, a big criterion was that the case would fit in our cube-based furniture. :)

I looked at the Seasonic 600W Bronze (also an 80+) but availability was a bit tight. Again, I wanted something that could be re-purposed later, that could drop down to a trickle power draw when needed, and, of course, was as quiet as can be. A modular PSU is important when working with small cases, especially absent any powered PCIe cards and with only a couple of hard drives - you can simply leave the unneeded cables out.

![Modular medusa][img-modular-medusa]
A smaller PSU wouldn’t have as many cables, natch, but non-modular PSUs are a nightmare to work with in a small case.


Various blogs mentioned above indicated that the Z87 chipset was ESXi compatible, but perhaps overkill, which is what led me to look at the B85M range. The D3H (watch out - there are a few permutations of those letters, and some of them have only two DDR sockets) is compatible, cheaper, and more than sufficient for an ESXi machine, as well as being a good candidate for the subsequent NAS box.

The Intel i5 4570S was selected because of that trailing S there – it indicates that it’s a low-power version. The 4570S is 65W TDP, compared to the vanilla 4570 which is 84W. This comes back to the requirement of very low power draw, given the machine is going to be almost always on, and almost always close to idle. Obviously it was also chosen based on being relatively speedy (2.9 - 3.6GHz), having a decent number of cores / threads, and supporting 32GB RAM.
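To get a feel for what "almost always on, almost always idle" costs, here's a rough estimate. The idle wattage and electricity tariff are my assumptions, not measurements of this build:

```python
# Back-of-envelope running cost for an almost-always-on box.
# Both figures below are assumptions for illustration, not measurements.

idle_watts = 35            # assumed near-idle draw for a build like this
tariff_aud_per_kwh = 0.25  # assumed Australian residential tariff, circa 2013
hours_per_year = 24 * 365

kwh_per_year = idle_watts * hours_per_year / 1000
cost_per_year = kwh_per_year * tariff_aud_per_kwh
print(f"{kwh_per_year:.0f} kWh/year, about A${cost_per_year:.0f}/year")
```

Every sustained 10W saved is worth roughly A$20 a year at that tariff, which is why the S-series part and quiet, efficient components pay off on a 24/7 machine.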

RAM was chosen partly on the basis that 1600MHz is good enough, and after that on price. I’m unwilling to pay a lot more for tiny increases in RAM timings, basically. I’d originally set out with an expectation of building an ECC-based box, so while a Haswell build turns out to be a cheaper path, RAM is still the single most expensive component, making up more than a third of the total cost.
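The "more than a third" figure checks out against the price table above:

```python
# Component cost shares, using the AUD prices from the table above.
prices_aud = {
    "case": 109, "psu": 109, "motherboard": 102,
    "cpu": 235, "ram": 327, "nic": 39,
}
total = sum(prices_aud.values())
ram_share = prices_aud["ram"] / total
print(f"Total: A${total}, RAM share: {ram_share:.1%}")
```

RAM lands at about 35.5% of the A$921 total, with the CPU a distant second at around a quarter.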

There are various patches you can squeeze into ESXi to get it talking to on-board (typically Realtek) NICs, but it’s just better to start with something a bit more robust, especially if you’re looking forward to a NAS attached via a decent switch, and specifically the benefits of jumbo frames. Intel Pro GbE NICs are pretty much the default for home-built ESXi boxes, though there’s some confusion to be enjoyed working out the difference between the various CT, GT, MT and other variants. I suspect I’ll look at a dual-port card later, though those are quite the jump in price.
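The jumbo-frame benefit comes down to fixed per-frame overhead being amortised over a bigger payload. A quick sketch, using standard Ethernet and TCP/IPv4 header sizes (no options, and ignoring per-packet CPU cost, which is the other big win):

```python
# Goodput efficiency of Ethernet at different MTUs, assuming plain
# TCP/IPv4 with no header options.  Sizes are from the standards:
# preamble+SFD (8) + Ethernet header (14) + FCS (4) + inter-frame gap (12).

wire_overhead = 8 + 14 + 4 + 12  # fixed bytes spent per frame on the wire
l3_l4_headers = 20 + 20          # IPv4 header + TCP header

def goodput_efficiency(mtu):
    payload = mtu - l3_l4_headers
    return payload / (mtu + wire_overhead)

print(f"MTU 1500: {goodput_efficiency(1500):.1%}")
print(f"MTU 9000: {goodput_efficiency(9000):.1%}")
```

The jump is from roughly 94.9% to 99.1% of line rate - modest on paper, but the six-fold drop in frames (and interrupts) per second is what really helps an iSCSI NAS.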

For storage, my long term plan is to build or buy an iSCSI NAS. I’ll likely build another box just like this one, perhaps with 4x 4TB drives, and maybe running the NAS within an ESXi virtual machine (with pass-through, of course). Two birds, and all that. Very much up in the air. In any case, right now I have two drives in this machine – a 120GB 2.5" old laptop drive (regrettable performance but lower power usage) and a 2TB WD Green drive (ditto) for the interim. I didn’t count either in the pricing, and I assume that anyone building their own ESXi box has a couple of old drives sitting around in any case (again, NPI).

The build

Anyone who has ever built their own PC knows the joy of a decent case, and preferably a decent, large, thoughtfully designed case.

Wobbly small cases are a nightmare.

There are plenty of reviews of the Silverstone SG02 case. It’s not good for silencing noisy components, as there’s just too much ventilation on the thing. But as always, buying silent components in the first place renders this concern moot.

Having said that, the SG02 is a bit different to most cases I’ve worked on. My current desktop case, the Fractal Define R4, is a beautifully designed large case, but even it starts to get annoying to work with when running lots of SATA power & data cables. In any case, the instruction manual for the SG02 is succinct and useful, and while it was slightly off-putting that they didn’t have mounts for the mobo risers at the far end (where you’re pushing in RAM and SATA cables), the build process went smoothly.

I’d suggest, if you don’t have your own preferences, and you’ve got a similar array of hardware, the following:

  1. pull the case apart as per the instructions - lid, then fan rail
  2. mount the hard drive(s) in the 3.5" bays - unscrew, slide out, attach drives, slide back in, screw
  3. insert the CPU + cooler onto the mobo - do this on a big sheet of anti-static foam
  4. insert the RAM onto the mobo - ditto
  5. screw in the risers in the case to match the placements on the mobo
  6. push on the case backing plate (for the mobo)
  7. mount the mobo into the case
  8. wire up the USB, Audio, power, reset case wires, and try to keep them tidy
  9. wire up the SATA data cables
  10. work out which cables you actually need on your power supply, then drop it into place
  11. run power to the mobo (you’ll probably have to lift the PSU up to do this)
  12. mount the cdrom(s) in the 5.25" bay
  13. run power to the SATA drives
  14. install the NIC
  15. fan rail and lid back on
![Case with just the motherboard][img-case-just-mobo]
Motherboard installed, four sticks of RAM in place, and in the bottom right corner you can see the two hard drives. The 2.5" drive sits on top: with the fan placement at the far side of the case, airflow around the 3.5" drive is better with it in the lower of the two 3.5" slots.


![Case with almost everything][img-case-almost-everything]
And in this picture you can see the box almost completed. The SATA data cable to the DVD was a bit of a fiddle - the PSU had to be lifted slightly, briefly, to get access. And the NIC wasn’t installed yet, either, of course.


Take great care that none of the cables are in any risk of blocking the CPU fan.

Job done.

The configuration (or rather a vague promise)

Sometime soon I’ll run through the ESXi configuration in more depth, but a couple of things I noted with the BIOS.

Out of the box the RAM was set to 1333MHz. Easy to bump it back up to 1600MHz, of course, but a bit strange. (Set XMP - Extreme Memory Profile - to Profile 1, then the System Memory Multiplier to 16; 16 times the 100MHz base clock gives 1600MHz.)

Execute Disable / No Execute (the NX/XD bit) – this must be ENABLED, or the ESXi installer will refuse to proceed

The ESXi installation was otherwise quite straightforward - it picked up both drives, happily installing to the 120GB 2.5" drive. I intend to keep my base (installable) images on this drive, and install VMs to the larger 2TB drive, which will make it easier to remove later when I relocate images across to a NAS.

2014-01 Addendum – SATA RAID controller – I’m likely going to maintain a handful of small but essential hosts running on this box, so I looked around for a cheap ESX-compatible RAID controller. ESXi does NOT support software RAID, and the Gigabyte GA-B85M-D3H motherboard certainly doesn’t have any RAID support built in (even if it did, most consumer mobos only provide software RAID, which is almost always crap). In any case, for A$70 I picked up a second-hand HP 4-port PCI-E x4 SAS/SATA RAID controller. HP call it a model 433906-001, but they only seem to sell it as part of a system. It’s actually a re-badged LSI SAS3041E controller, and LSI are generally a safe bet with VMware ESX systems; this model, certainly, gets plenty of nods online for its suitability in small ESXi boxes. With the addition of the second drive I’ll also put in a couple of 80mm fans – there’s space for two of those at the top of the box, on the removable rail that runs the length from front to back.

![In its new home][img-case-in-situ]
And here’s the box in its new home. Rinky dinky cube furniture happens to be 335mm across. No one knows why. The DVD drive will be removed – it’s not really needed after the installation, plus aesthetics, plus tiny power draw on idle.