The 4400 (watts)

About six months ago (approx. February of '18) I decided to pursue a project to make it so I could run Linux and Windows programs on the same system without issues... or at least as smoothly as possible. I knew of quite a few ways I could go about this: Wine, VMs with GPU passthrough, Bash on Windows, Cygwin, ... . However, I really didn't know which way I wanted to go. I eventually came to the conclusion that the majority of the Linux work I do can be run from a CLI, and what can't be I could probably get to run through Bash on Windows / the Windows Subsystem for Linux and an X server. To make a long story short, I was wrong, and that solution was royally awful and I hated it. But I learned a whole hell of a lot in the process.

I knew I needed a Linux server to run alongside my main Windows system, and I was looking to get a real server because I wanted something I could learn enterprise-level skills on as well as comfortably host services with minimal downtime. I shopped around a bit, and I just happened to stumble into what looked to me, someone who normally buys consumer gear, to be incredible: a 64-thread (4x 8C/16T CPUs), 64GB ECC DDR3 system for about $400, plus two 10GbE cards and the cable to connect the systems for another $50. To me this sounded like the most glorious overkill and like it could be a whole lot of fun, so after nowhere near enough consideration I pulled the trigger. I did leave one interesting spec unsaid though: it has four 1100W power supplies. Now, I'm not dumb. I knew two of those were redundant, bringing us down to 2200W max, and given the system was only running 64GB of RAM and not powering any insane PCIe gear in the back (except for the 10GbE network card, but meh), some napkin math told me there was basically no way it would be drawing more than about 600W. Not too bad, right? I mean, I have a 750W PSU in my desktop; 600W can't be a big deal?

Yeah. This is why getting a server when you still live with your parents can be a bad idea. I didn't pay the power bill, and I had no idea of the 'true cost' of power. Screw the responsibility; with great power comes great bills.
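For a rough sense of scale, here's that napkin math carried through to a bill. This is a sketch: the $0.12/kWh figure is an assumed ballpark residential rate, not what my parents actually paid, so swap in your own utility's number.

```python
# Napkin math: what ~600W of around-the-clock draw costs per month.
watts = 600               # estimated worst-case draw from above
rate_per_kwh = 0.12       # USD per kWh -- assumed ballpark rate
hours_per_month = 24 * 30

kwh_per_month = watts / 1000 * hours_per_month   # 432 kWh
monthly_cost = kwh_per_month * rate_per_kwh      # ~$51.84
print(f"{kwh_per_month:.0f} kWh/month -> ${monthly_cost:.2f}/month")
```

Call it up to roughly $50 a month if the box really sat near 600W, and even idling at half that isn't free.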

It was this one spec that got me, so not even two months into owning this behemoth I was already looking into selling it. Thankfully I did, along with the two 10GbE cards, and at a nice profit at that, but that's not the point of this post. The point is what I did learn. Well, if it wasn't clear already, I learned that power costs money (and that using that much power legitimately heats up a room), but I also learned a lot of interesting things about 10GbE networking, running a server, and the benefits and costs of buying used enterprise gear, so here goes.

Benefits/costs of buying used enterprise gear

The first thing of note here is my god, are the prices ripe. I got the 10GbE cards for what looks to be a steal, and the deal on that server, even with that much RAM and CPU horsepower, while good wasn't necessarily the deal of a lifetime. Server gear is a lot less expensive than you'd think, and works really well. The main gotchas appear to be noise, power consumption, and configuration.

The noise point is probably the largest problem. With the R910, the noise was actually totally tolerable to be around; the PowerVault I bought to go with it, however... not so much. Four 80mm(?) jet engines in the back drove me nuts. Thankfully I solved that problem by replacing them with three 140mm fans in the front, but that's a story for another post. I've also owned a PowerEdge 1950 server, and it was just as loud, and I couldn't do anything about that one.

Power consumption is a topic I think I already stressed enough above, but it's worse than that. I actually kept the PowerVault MD1000 and H700 card so I could add more storage to my desktop, but the PowerVault draws about 75W even when it's not doing anything... that's more than the vast majority of computers, and that's just for storage. What I'm saying is, the power consumption of this stuff can add up real fast. Be smart.

Finally, configuration: I went in circles trying to lower the consumption on the R910 to no avail, and the BIOS had a lot of power options. Actually getting 10GbE throughput out of the 10GbE card was a nightmare in some respects, and dealing with a mixture of SAS and SATA drives in the PowerVault got a bit interesting too. I'm not saying any of these should stop you; just be prepared to put time into learning how to do the thing. Furthermore, the enterprise-y-ness of this gear really showed here. For example, having the H700 in my desktop makes it take about 10x as long to boot as a result of the card's added BIOS.

All of that aside, consumer 10GbE cards are still well over $100, there's no other good way I could have added as many drives to my desktop as I did with the PowerVault (to clarify, I moved the vault to my desktop after deciding I'd sell the server), and assuming I could afford the power, there's no way I could get that much compute with consumer gear without selling my soul.

Running the server

Ironically, I didn't get all that much out of this. I've been using Linux as my daily driver for 5+ years now, in a way where my system is half PC and half server, so I already knew most of how to server. I'll keep this section brief.

Server remote management is a thing: iDRAC. Servers typically have the worst™ integrated GPUs, to the point where running even i3wm is painful. Just don't. Servers have weird specs: the R910 had 25W max power PCIe slots. Why? Dell if I know. Servers start up so slowly you may as well grab a sandwich while they think. Linux runs really well though, and I was able to basically use it just like every other Linux system. This isn't really unexpected, just nice to know the difference in gear didn't matter; in fact, the gear was actually better supported in Linux than Windows for things like the 10GbE card and the H700 storage card.

10GbE networking

Because this isn't supposed to be a guide on how to use or set up 10GbE cards or networks, I'm not going to get into the specifics, but what I will say is that when it works, it's incredible. Getting it to work wasn't easy. I think consumer gear would behave differently, to the point of plug-and-play, but this wasn't like that at all. Even getting the 10GbE card to run at 10GbE on the Windows/client side was like pulling teeth. Ironically, if the client was booted into Linux, all was well and speeds were great with almost no setup. I'm by no means implying this experience is universal (though it is probably true more often than not), but I do think that if you're going to be running this gear, running a server OS (I'm considering even desktop installs of Linux to be servers in this sense) will make hardware and network config much, much easier.

Also worth noting: 10GbE networks, by their nature, do some things differently and benefit from advanced setup, be it changing the MTU or optimizing for a single port vs. all of them; it's just a lot of playing with settings. I think it has its uses and can open a lot of cool doors, but ultimately you should carefully weigh whether the cost of the consumer gear is worth it to avoid the headache.
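For a flavor of what verifying throughput looks like: the standard tool is iperf, but the core idea is small enough to sketch. Below is a bare-bones TCP blast test in Python; the port number, chunk size, and total transfer size are arbitrary assumptions of mine, and for real measurements you should reach for iperf instead.

```python
# tcp_blast.py -- a minimal throughput sanity check between two hosts.
# Run with no arguments on the receiver, then `python tcp_blast.py <host>`
# on the sender. The numbers below are arbitrary picks, not tuned values.
import socket
import sys
import time

CHUNK = 1 << 20            # send/receive 1 MiB at a time
TOTAL = 5 * (1 << 30)      # push 5 GiB total
PORT = 5201                # arbitrary port (iperf3's default, for familiarity)

def serve() -> None:
    """Receiver: accept one connection and discard everything that arrives."""
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

def blast(host: str) -> None:
    """Sender: time how long TOTAL bytes take to push over the wire."""
    buf = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as sock:
        start = time.monotonic()
        sent = 0
        while sent < TOTAL:
            sock.sendall(buf)
            sent += CHUNK
        elapsed = time.monotonic() - start
    print(f"{sent * 8 / 1e9 / elapsed:.2f} Gbit/s over {elapsed:.1f}s")

if __name__ == "__main__":
    if len(sys.argv) > 1:
        blast(sys.argv[1])
    else:
        serve()
```

On a healthy 10GbE link a test like this should land somewhere near line rate; if it tops out around 1 Gbit/s, something in the chain (driver, switch, negotiation, MTU) has quietly fallen back.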

Final thoughts

The R910 was like 150 lbs, and was royally awful to move up and down stairs. Don't. Just. Don't.