BYTE23 NETWORKS

History of the home-grown "data center"

Current state of the rack

It all started when my friends and I got interested in UNIX and programming at school during the dot-com bubble. Back in the day we worked on our own little server to host e-mail, IRC, source code and websites. Stefan, Christian and I were active on the Darksystem IRC Network and the associated bulletin boards. We were driven by curiosity, wanted to learn how the internet works, and enjoyed the like-minded (hacker) community. While Stefan got his hands on some decommissioned Sun SPARCstations, I had the opportunity to run a (Pentium I) server with an uplink sponsored by my dad. Much time has passed since then, but the fascination with servers and networks has remained.

When Katharina and I moved into our new house (2015), I made room in the basement for a 19" server rack. Primarily, I wanted to be able to house the network equipment for the home LAN, but I also wanted a proper space to run my homelab and a NAS for backups.

The rack
The first iteration of the rack, still featuring the old Netgear switch, and one of the APU boards as a firewall...

Chapter 1 - the tale of two Suns

As a huge fan of Sun Microsystems, I decided to buy the most recent servers of the brand that I could come across on eBay. So I purchased a Sun Fire X4140 and a Sun Fire X4150. The child in me was completely amazed and fascinated by the noise of the machines and the endless possibilities of a combined ~36 cores, 128 GB of RAM and a bunch of SAS disks.

However, the grown-up in me recognized that (a) those beasts were way too loud, even for my closed basement setup, and (b) way too power-hungry and costly. So after a year I ripped them out of the rack, as the cost/benefit calculation didn't make any sense...

Two Sun Fire X41xx servers
RIP little Sun beasts...

Chapter 2 - let's go green

The situation changed when we added the solar power plant to our house in order to cover (most of) our daily energy needs. With panels able to produce around 10 kW at peak on a sunny day and a battery storing another 10 kWh, it occurred to me that the only thing missing to operate a small data center was an internet connection with a static IP (subnet). So I ordered a business connection from my local ISP that provided me with a public IPv4 address as well as a /56 IPv6 prefix.

A house with solar panels on the roof
10 kWp Solar Panels on the roof of our house

From my earlier experiment with the old Sun servers it was clear to me that I wanted to invest in modern CPUs, in order to keep energy consumption to a minimum while still having enough power and features to support workloads such as Kubernetes in my lab.

Chapter 3 - artisanal handcrafted servers

Easter 2018, Stefan, Christian and I assembled the first of the servers. I opted for a low-energy Intel Xeon v5 quad-core CPU, a Supermicro case, 64 GB of ECC RAM, and a bunch of SSDs as well as 2.5" spinners.

Parts of a server
Parts for the first server

The firewall for the "data center" uplink was already there. Earlier I had purchased two (well, really three) PC Engines APU boards, each featuring a quad-core AMD G-Series SoC with 4 GB RAM and three Gigabit Intel i210 ports - a perfect fit for an OPNsense firewall appliance. They are mounted in a dual enclosure that fits two boards in a single 1U 19" rack case.

APU board in 19-inch case
APU boards in a dual case running OPNsense

At first these systems ran only my homelab and a file server. VMs on the server hosted the Atlassian suite for my own development and learning, plus some self-organization around the house. On the RAID I stored my MacBook backups via netatalk. Later, more workloads from friends and family were added.
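For the curious: the Time Machine share via netatalk boils down to a few lines of configuration. A minimal sketch, assuming netatalk 3 and a placeholder share name, path and size limit (not my exact setup):

    ; afp.conf - minimal Time Machine share via netatalk 3
    [Time Machine]
      path = /data/timemachine
      time machine = yes
      vol size limit = 1000000   ; cap the backup volume at ~1 TB (value in MiB)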

Server rack 2018
Second iteration of the server rack, shortly before installing the second firewall

The new server ran FreeBSD with an encrypted software RAID-Z2 across 8x 2 TB spinners. To bump the performance a little, it also had an M.2 NVMe acting as an L2ARC cache for ZFS. At some point the whole thing crashed with an out-of-memory error and destroyed my encrypted ZFS pool. Going forward, a lot had to change. Having been an IT professional for 10+ years, I knew that my operational setup was far from great and that I could do better.
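For reference, the pool was built roughly along these lines. This is a hedged sketch rather than my exact commands: the device names (ada0..ada7 for the spinners, nvd0 for the NVMe) are assumptions, and since FreeBSD had no native ZFS encryption back then, each disk goes through GELI first:

    # Encrypt each spinner with GELI (shown for one disk, repeat for ada1..ada7)
    geli init -e AES-XTS -l 256 -s 4096 /dev/ada0
    geli attach /dev/ada0

    # Build the RAID-Z2 pool on the encrypted providers, add the NVMe as L2ARC cache
    zpool create tank raidz2 ada0.eli ada1.eli ada2.eli ada3.eli \
        ada4.eli ada5.eli ada6.eli ada7.eli
    zpool add tank cache nvd0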

Chapter 4 - more servers please

Easter 2019, Stefan, Christian and I built the second server, based on a Supermicro Atom board with 32 GB of RAM, a quad-port Intel NIC, IPMI, and enough performance on paper to run a software RAID. Spoiler: it didn't, though...

Server rack 2019
Third iteration of the server rack, featuring the Xeon and Atom server

I switched both systems to Gentoo and used KVM for virtualization. The VMs had local storage on a RAID10 of SSDs. The Xeon server ran very well, and in fact I still like to think back on that Gentoo setup. The Atom file server, however, was not performing well.
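The SSD array for the VM storage was plain Linux software RAID. A minimal sketch with mdadm, assuming four placeholder SSDs and the default libvirt image path (not my actual layout):

    # Create a RAID10 array from four SSDs and put the VM images on it
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.xfs /dev/md0
    mount /dev/md0 /var/lib/libvirt/images
    # Persist the array layout so it assembles on boot (Gentoo keeps this in /etc/mdadm.conf)
    mdadm --detail --scan >> /etc/mdadm.conf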

Software RAID-6 on Gentoo.

It seemed as if that CPU simply wasn't up to the load. During Time Machine backups, AFP connections were frequently dropped, I/O was hanging, and transfer rates were slow. After a lot of tuning I decided to replace the file server and, at the same time, put a more capable CPU in the virtualization host, which was still running the low-energy v5 chip. The workloads had also increased: Volker was now running a wiki for his work as a teacher, and other friends were asking for Confluence and Jira spaces for personal projects. So some extra compute power was appreciated.

Chapter 5 - the red hat

Around the same time my employer bought a "little" company called Red Hat ;-) and I was excited to get my hands on some of the products. To learn quickly, I decided to remodel my homelab with Red Hat upstream projects such as CentOS (RHEL), Foreman (Satellite), oVirt (RHEV), FreeIPA (Red Hat IdM) and more. Additionally, being a software architect, I wanted to run OKD (OpenShift). As a major improvement for resiliency, I decided to introduce a separate file server and finally get rid of the Netgear switch, which had been annoying me by not implementing 802.3ad correctly.

So Easter 2020, Stefan and I rebuilt the servers. The virtualization host got a new Xeon v6 CPU, and the older Xeon v5 went into the file server. The new file server also got a Broadcom LSI MegaRAID hardware RAID card. Finally, the Netgear switch was replaced by an HPE Aruba 2530-48G, which I bought used and which sadly has a rather ugly crack in the logo piece - but well, I don't care too much. We also attached a large USB 3 drive to the file server to mirror the data RAID as a low-fi backup.
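With a switch that speaks 802.3ad properly, the server side of the link aggregation is simple. A hedged sketch of the Linux half on CentOS; the interface names (eno1/eno2) are assumptions, and the switch needs a matching LACP trunk on its ports:

    # Create an LACP (802.3ad) bond and enslave two NICs to it
    nmcli con add type bond ifname bond0 con-name bond0 bond.options "mode=802.3ad,miimon=100"
    nmcli con add type ethernet ifname eno1 con-name bond0-port1 master bond0
    nmcli con add type ethernet ifname eno2 con-name bond0-port2 master bond0
    nmcli con up bond0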

Server rack 2020
Fourth iteration of the server rack, installing CentOS on the Xeons

The VMs are now powered by oVirt, running on local storage on a RAID10 of SSDs. The VMs are exported nightly as OVA images to the storage server via NFS and are then mirrored again to USB. On the VM layer I set up a small cloud: FreeIPA acts as the Kerberos controller for single sign-on at the OS level, as well as the PKI for all my TLS certificates. Foreman manages the VMs and the software repositories, and I can provision new VMs through Foreman fully automatically on the oVirt hypervisor. One of the APU boards was configured as an internal DNS and DHCP server, which is managed by Foreman as well. Of course, all the RAIDs on both servers are encrypted with AES-256.
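The backup chain itself is nothing fancy. A hedged sketch of the nightly mirror step from the NFS export share to the USB drive; paths, script name and schedule are placeholders, not my exact setup:

    # /etc/cron.d/ova-mirror on the storage server: run the mirror script at 03:00
    #   0 3 * * * root /usr/local/sbin/ova-mirror.sh

    #!/bin/bash
    # ova-mirror.sh - copy the nightly OVA exports from the NFS share to the USB drive
    set -euo pipefail
    EXPORT_DIR=/srv/nfs/ova-exports    # where the hypervisor drops its OVA exports
    USB_MOUNT=/mnt/usb-backup          # the large USB drive attached to the file server
    rsync -a --delete "${EXPORT_DIR}/" "${USB_MOUNT}/ova-exports/"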

Additionally, I moved my mail server from Host Europe (which has had too many data security incidents for my taste in recent years) onto a VM. Knowing that I will have downtimes due to the limitations of my setup, I set up a backup MX host on a VPS. Being so thoughtful with my own gear, I didn't want to put this host on AWS or another big cloud, but rather support an eco-friendly hoster. Having a soft spot for Switzerland, I decided to go with the folks at ungleich.ch, who operate a carbon-neutral data center in old industrial buildings, powered by a hydroelectric plant. Just like me, they work with low-density servers and thus don't need any active cooling, which further saves CO2.
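The backup MX on the VPS is just a Postfix instance that accepts and queues mail for my domain while the primary is unreachable. A hedged sketch of the relevant main.cf settings, with a placeholder domain:

    # main.cf on the backup MX
    relay_domains = example.org
    # Hold queued mail for up to a week while the primary is down
    maximal_queue_lifetime = 7d
    # Only queue mail for recipients that actually exist, to avoid backscatter
    relay_recipient_maps = hash:/etc/postfix/relay_recipients

In DNS, the VPS simply gets an MX record with a higher preference value than the primary (e.g. MX 20 vs. MX 10), so senders only fall back to it when the primary does not answer.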

Current hardware

Servers are named after Space Shuttles; networking is provided by an HPE Aruba 2530-48G.

gateway

PC Engines APU, quad-core AMD, 4 GB RAM
OPNsense router, firewall, intrusion detection & prevention
Uplink by Telekom DeutschlandLAN with Digitalisierungs-Box in modem-only mode.

discovery

Intel Xeon E3-1240 v6, 64 GB ECC RAM,
2x Intel SSD RAID1, 4x SanDisk SSD RAID10, Corsair M.2 NVMe
CentOS 7, oVirt, Foreman, FreeIPA, Bitbucket, Jira, Confluence, Postfix, Mediawiki, ...

challenger

Intel Xeon E3-1240L v5, 32 GB ECC RAM,
Broadcom MegaRAID SAS 9361-8i, 8x 2TB Seagate Barracuda RAID5, Corsair M.2 NVMe
Seagate 10TB USB Backup drive
CentOS 8, AFP, NFS

endeavour

PC Engines APU, quad-core AMD, 4 GB RAM
CentOS 7, Foreman Proxy, DNS, DHCP

columbia

VPS at ungleich.ch
CentOS 8, Postfix