October 5, 2015

571 words 3 mins read

Project “Reaper” – For all your ESXi server on-the-go needs

Typically, when one talks about building virtual infrastructure, the first thoughts go to big boxes loaded with as much redundancy as possible, slotted into a nice home at a datacenter. That is a great approach; however, I was tasked with something slightly different. I needed to build ESXi servers that were small and cost-effective. Most importantly, they were going to be shipped to and run in remote offices without IT oversight.

Four Reaper units during assembly.
The end requirements were for virtual machine boxes that would be used for monitoring remote offices and deploying lightweight services. They would be shipped internationally, so the server needed to be less than 20” deep, lightweight, and not break the bank. The results were known internally as “Project Reaper” and used the following hardware:

The SuperServer 5018D-MF platform provides a sturdy backbone for this build while working within a very restricted footprint. However, the most astute readers will have already noticed that something critical is missing from this build: there's no redundancy. It was very difficult to find full hardware redundancy in a package that fit our specs, so we opted to ship these servers out in pairs. Each one would run a duplicate set of VMs to provide redundancy for key services. Should one machine fail in any way, spares at HQ (already prepped) would be shipped out to replace the failed unit.


All 14 units shortly after completion of the build
VMware experts will also take this chance to point out that vMotion could eliminate the need to maintain duplicate VMs. However, that's a paid feature which would have cost more than $1,500 per server, more than we paid for the hardware itself. It is, of course, something that can always be added later. Getting the hardware in place is the most difficult and time-consuming portion; the software is easy to change after that.


Our “Reaper” units were packaged up in pairs and deployed to five offices in three countries on two continents. There were 14 servers in the initial batch (which I made by hand). Assembling each one only took about 20 minutes at most. The biggest time sink in the entire project was getting enough quantity of certain components, such as the SuperDOM. In fact, if I was going to replicate this hardware again, I would seriously consider ditching the DOM for another SSD. Otherwise it was a solid build that has yet to fail.