Starting a homelab - Part 01
What actually is a homelab, though?
Different people have different definitions of what a homelab is, but in a nutshell, it is any environment in your home network that allows you to test new technologies and decide whether you want to use them 'in production'.
Having a homelab has been extremely helpful for me: it has allowed me to learn a lot about technologies that I hadn't worked with before, and it has also allowed me to test stuff that I need for work before we actually put it in production. It has also allowed me to separate my work/business stuff from my private stuff by utilizing virtualization - I'll touch on that a bit later. This will be quite a long text, and I will do my best to explain the (in my opinion) main elements of building a homelab.
How to start?
For starters, you will need some computer stuff to begin with. At minimum, you will need a computer, and depending on what you want to achieve, you will need a computer with specific hardware components. What do I mean by that? Let's say that your goal is to have a virtualization station - to run multiple virtual machines on one hardware host. In that case, in my opinion, your main driver when purchasing/assembling such a machine should be RAM. It's always RAM! I currently have 5 hypervisor hosts, and on each of them the component that runs up against its limits first is always RAM. Unless you are running CPU-heavy stuff on your VMs (media server, encoding, etc.), it is very unlikely that the CPU will limit you in regular day-to-day use. Most hypervisors today are actually very good at managing the available CPU power, and the CPU is usually not a bottleneck in a homelab. Once again, I emphasize - this is only the case if you're like me - the only major components in my homelab that are CPU hungry are:
- my docker server - because I run my Plex instance in docker - and mind you, if it's not transcoding anything (direct stream), the CPU usage is very low anyway
- my pfSense VM - because I have 1 Gbit/s symmetrical Internet at home, I run my backup station from home; for some reason that heavily utilizes the CPU on pfSense, and even when I granted it more vCPUs, usage was still running high. I'm not bothered by this because I'm switching to a hardware firewall in the near future anyway
And that's about it. I'm not doing anything else that's CPU hungry (like Folding@Home or similar), so my CPU needs are minimal. Sometimes when I'm booting new stuff the CPU usage jumps high, then goes back down once the VM stabilizes and, if you're running ESXi, VMware Tools starts doing its magic.
In case you are wondering, this is my CPU usage on my docker VM over a 24-hour period:
If you want to build your own homelab server and use as little power as you can, check out this video by Techno Tim. He's very knowledgeable about this stuff and the video goes into detail on how to build a powerful yet low-power hypervisor station. Even though the video is Proxmox-centered, you will most likely be able to use the server with any other hypervisor as well. Please do keep in mind, though, that ESXi (by VMware) is a lot pickier about what hardware it will be gracious enough to run on. I don't have a lot of experience with Hyper-V, but in general, if a machine can run Windows Server, it can run Hyper-V.
If you're building your own server, one thing to keep in mind is to buy a CPU that supports hardware virtualization - with Intel, the technology you are looking for is called Intel VT-x, while with AMD you will be looking for AMD-V (and AMD's IOMMU, AMD-Vi, if you want device passthrough). If your CPU doesn't support any of these technologies, well, then you may have to look for another guide. You can read more about hardware-assisted virtualization here.
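If you're on Linux and want a quick sanity check before committing to a CPU, here is a minimal sketch that looks for the relevant CPU flags in /proc/cpuinfo (vmx for Intel VT-x, svm for AMD-V). It's Linux-only and purely a convenience - the same information shows up in the output of lscpu:

```python
#!/usr/bin/env python3
"""Check whether the CPU advertises hardware virtualization support.

Reads /proc/cpuinfo (Linux) and looks for the 'vmx' flag (Intel VT-x)
or the 'svm' flag (AMD-V).
"""

def virtualization_flag():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "Intel VT-x (vmx)"
                if "svm" in flags:
                    return "AMD-V (svm)"
                return None  # flags line found, but neither is present
    return None

if __name__ == "__main__":
    support = virtualization_flag()
    if support:
        print(f"Hardware virtualization supported: {support}")
    else:
        print("No vmx/svm flag found - check BIOS/UEFI settings or pick another CPU.")
```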
Another good option is to buy a "real server" - and by that I mean something that was used in corporate environments but has been written off and is sold on eBay or by companies like ServerMonkey, Xbyte and similar. Fun fact (or not that fun, I don't know :D) - I actually got my first server from a company called METservers in Texas and had it shipped all the way to Toronto, Canada.
Please do keep in mind that since these servers were built for datacentre environments, they do come with a few caveats - unless you're buying latest-gen servers, they will usually have some older tech in them (CPUs, RAM speeds, older and used HDDs, etc.), but that shouldn't be that big of an issue. They will still run extremely fast (especially if you're using something like SAS drives) and the best thing is that you can get some of the upgrades quite cheap - RAM, I'm talking about RAM - and since this is enterprise equipment, you don't have to worry about running out of slots to put it in. The CPUs may not be the newest, but if you're running a dual-CPU server, it's very unlikely that you will ever max them out.
This is all beautiful, but I haven't mentioned a major caveat yet - and here it is - in most cases, they are VERY LOUD! My first enterprise server was a Cisco UCS server and I love that beast. But it is freakin' loud. And there's basically no way to make it silent. I'm not even talking about the startup sequence, when most enterprise gear is extremely loud - I'm talking about a running server that's close to idle. So, if you live in an apartment/condo - stay away from Cisco UCS stuff, unless you have a dedicated room for it. I had it in my home office and I couldn't handle it, so it's no longer there :) I recently purchased a Dell R630 server and I run it in my home office. I've set it up so that it runs the fans at 17% (I almost can't hear it) and only when the CPU reaches a certain temperature does it kick the fans back up to a higher speed. If you have to think about noise - consider buying a Dell R630!
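In case you're curious how that's done: the iDRAC in these servers speaks IPMI, and the fan duty cycle can be driven with raw ipmitool commands. Below is a minimal sketch of the idea; the iDRAC IP and credentials are placeholders, and the 0x30 0x30 raw byte sequences are community-documented for PowerEdge R-series servers rather than an official Dell API, so test carefully before trusting your cooling to something like this:

```python
#!/usr/bin/env python3
"""Sketch: hold Dell PowerEdge fans at a fixed duty cycle and fall back
to automatic control when the CPU gets hot.

Assumes ipmitool is installed and the iDRAC address/credentials below
(placeholders) are yours. The raw byte sequences are community-documented
for PowerEdge R-series (R630 included), NOT an official Dell API.
"""
import re
import subprocess
import time

IDRAC = ["ipmitool", "-I", "lanplus", "-H", "192.168.1.120",  # placeholder iDRAC IP
         "-U", "root", "-P", "changeme"]

TEMP_LIMIT_C = 60   # above this, hand control back to the iDRAC
FAN_PERCENT = 17    # the quiet duty cycle mentioned above

def ipmi(*args):
    return subprocess.run(IDRAC + list(args), check=True,
                          capture_output=True, text=True).stdout

def cpu_temp():
    # 'sdr type temperature' prints lines like: "Temp | 0Eh | ok | 3.1 | 45 degrees C"
    out = ipmi("sdr", "type", "temperature")
    temps = [int(m) for m in re.findall(r"(\d+) degrees C", out)]
    return max(temps) if temps else 0

def set_manual_fans(percent):
    ipmi("raw", "0x30", "0x30", "0x01", "0x00")                     # disable auto control
    ipmi("raw", "0x30", "0x30", "0x02", "0xff", f"{percent:#04x}")  # set duty cycle

def set_auto_fans():
    ipmi("raw", "0x30", "0x30", "0x01", "0x01")                     # re-enable auto control

if __name__ == "__main__":
    set_manual_fans(FAN_PERCENT)
    while True:
        if cpu_temp() >= TEMP_LIMIT_C:
            set_auto_fans()              # let the iDRAC cool things down
        else:
            set_manual_fans(FAN_PERCENT)  # back to quiet mode
        time.sleep(30)
```

A real version would want some hysteresis so it doesn't flap between modes, but this is the whole trick.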
PSA - if you're in Canada/Ontario and looking to buy a used server, take a look at Delta Server Store in Scarborough.
Choosing a hypervisor
First of all - what is a hypervisor? There are two types of hypervisors - bare-metal (Type 1) and hosted (Type 2). Type 1 hypervisors are what we will be discussing in this series of posts. Think of Type 1 hypervisors as operating systems (which they are) that interface between your VMs and your hardware. Type 2 hypervisors run on top of an existing operating system, so they are basically an app that provides you with virtualization capabilities. This is a good choice if you want to spin up a temporary VM while the host computer is running and work in that VM for specific tasks. I use this when I have to go outside of my usual environments and I don't have broadband Internet access to connect to my own homelab environment.
Scenario: you're going to a client's site that doesn't have Internet access and you have to work on some ancient equipment that only runs on Windows 7 (OK, maybe not that ancient :D). The problem is that you only have a Mac. In that case, you could run something like Parallels, VirtualBox or VMware Fusion (all considered Type 2 hypervisors), spin up a Windows 7 VM and use that on-site. A good thing about Type 2 hypervisors is that you can usually pass external hardware through to the VM quite easily (for example, almost anything connected to a USB port can be presented to the VM and used as if it were directly connected to it rather than to the host machine - your Mac). The VM will, of course, use your host system's resources (it has to get its power from somewhere, right?), but that's unavoidable. You can alleviate that by keeping the VM's hard disk image on external storage, but the CPU and RAM still come from the host. While I'm at it - PSA - if you have one of the newer Macs with an M1 chip, the only Type 2 hypervisor that currently works is Parallels Desktop, and you will only be able to run VMs built for the ARM architecture. I believe VMware Fusion will work in the future, but with limited guest OS support as well.
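To make that scenario concrete, here's a minimal sketch of provisioning such a Windows 7 VM from the command line with VirtualBox's VBoxManage tool, driven from Python. The VM name, sizes, and the USB vendor ID in the passthrough filter are made-up values for illustration; VirtualBox must already be installed so VBoxManage is on the PATH:

```python
#!/usr/bin/env python3
"""Sketch: create a Windows 7 VM with VirtualBox's VBoxManage CLI.

Names, sizes and the USB vendor ID are illustrative only.
"""
import subprocess

def vbox(*args):
    subprocess.run(["VBoxManage", *args], check=True)

# Register the VM and give it modest resources
vbox("createvm", "--name", "win7-field", "--ostype", "Windows7_64", "--register")
vbox("modifyvm", "win7-field", "--memory", "4096", "--cpus", "2", "--usb", "on")

# Create a 40 GB disk image and attach it via a SATA controller
vbox("createmedium", "disk", "--filename", "win7-field.vdi", "--size", "40960")
vbox("storagectl", "win7-field", "--name", "SATA", "--add", "sata")
vbox("storageattach", "win7-field", "--storagectl", "SATA", "--port", "0",
     "--device", "0", "--type", "hdd", "--medium", "win7-field.vdi")

# USB passthrough: auto-capture any device from a given vendor into the VM
vbox("usbfilter", "add", "0", "--target", "win7-field",
     "--name", "serial-adapter", "--vendorid", "0403")
```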
Just FYI, in the following text I will use the term hypervisor to refer to Type 1 hypervisors.
So what do hypervisors actually do?
As I said, a hypervisor is an operating system that runs on bare metal, that is, directly on your computer. No intermediary OS sits between your VMs and the hardware - just the hypervisor itself, with the VMs running on top of it. The benefit of this approach is that the hypervisor has much more direct access to system resources and can manage them quite efficiently. In the case of ESXi, the whole OS actually runs from your RAM!
Once the hypervisor boots up, it has complete control of your server's hardware. At that point, you will need to connect to the management interface of your hypervisor. Usually that is done via a browser, by pointing it to the IP address of your server.
Another PSA - in the case of VMware, the naming conventions can be quite confusing. I will mostly use the term ESXi in this and the following posts, but just to make it clear - ESXi is the name of VMware's Type 1 hypervisor, the vSphere Client is the interface that you use to connect to the web interface of ESXi, and vCenter (more on that one in the following posts) is an appliance that runs on top of ESXi in the form of a VM and is used to manage multiple ESXi servers from a single pane of glass.
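If you'd rather poke at that management interface from a script than from a browser, here's a minimal sketch using the pyVmomi library (pip install pyvmomi) to connect to a standalone ESXi host and list its VMs. The IP address and credentials are placeholders, and certificate checking is disabled because a fresh ESXi install ships with a self-signed certificate:

```python
#!/usr/bin/env python3
"""Sketch: connect to a standalone ESXi host and list its VMs via pyVmomi.

Host address and credentials are placeholders. SSL verification is
disabled because a fresh ESXi install uses a self-signed certificate.
"""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="192.168.1.50", user="root", pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk the whole inventory and collect every VirtualMachine object
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(f"{vm.name}: {vm.runtime.powerState}")
    view.Destroy()
finally:
    Disconnect(si)
```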
I will not focus on advanced VMware topics in this or the following posts (as I'm no expert in it, just a somewhat more advanced user than most) and will try to focus mostly on the free version of ESXi. Also, please keep in mind that ESXi can be licensed for free and, even though the free license comes with certain limitations, none of them are serious enough to drive you away from using it.
If you're going to build a more complex environment with multiple servers and will require things like central management, high availability, live migration of VMs and similar features, you may need to either buy vCenter and license your ESXi for Enterprise Plus, or move to another Type 1 hypervisor (such as Proxmox).
If you're doing stuff for your homelab only and are not running anything in production, you may want to consider subscribing to VMUG and paying for an 'Advantage Membership', which gets you access to a bunch of VMware software that you can use in your homelab, as long as you're not using it for production (basically, to make money with it).
Make your own choice
It is not that difficult of a choice actually, even if it may seem like one. If your goal is to learn about technologies that are used in most corporate environments, there are basically two choices - either ESXi (vSphere) or Citrix (Xen). There's also Hyper-V, which a lot of people really like, but I've never used it in my homelab for longer than a few days and I have almost no experience with it in corporate environments. From what I've read, Microsoft is looking to discontinue Hyper-V in the near future and move everything into their cloud offering (Azure). I don't know the details here, so take this with a grain of salt, please.
If you're building a homelab to consolidate stuff in your home network, then ESXi and Proxmox are both good choices. I don't have a lot of experience with Proxmox, but what little I have - I like it. Also, please keep in mind that at the time of writing this article (June 2022), VMware is being acquired by Broadcom and there are already signs that VMware may be moving in a different direction in the near future - what that means for their free hypervisor offering (ESXi), no one really knows at this point. One good thing here is that once you set up your ESXi instance, you can basically run it without updating (NOT something that I recommend, just saying that it's possible) until your hardware dies. Since you're building a homelab here, you can definitely get away with a lot of stuff that wouldn't be allowed in corporate environments.
Just as a sidenote: another - legitimate - reason why people don't update their servers and keep them running on their current version is that hardware support gets discontinued in the new version (it happens too often) and upgrading would actually break the machine. So, if you're happy with the way your server is running, just leave it as it is. Please do keep in mind that security patches are usually separate from major version updates, and you should definitely look into those.
With all this said, the choice is, as always, up to you. I run ESXi on my servers, but that is because I've been using it for the past 10 years and I'm just used to it. If their licensing changes in the future, I will have to reconsider my choice and most likely move to Proxmox - which is not a bad thing at all, it will just mean that I have to make some adjustments in my environment. Until that day, I will (most likely) continue to run stuff on ESXi.
Network stuff
Since I'm a network engineer by vocation, this is something I was very careful about when building my own homelab. For most users though, this will be a pretty simple setup. In most cases you have a few options.
Option 1
Connect your new server via an Ethernet cable to your home router. Most likely you have DHCP running on the router, so the server will pick up an IP address from it during installation.
- Pro: Simplicity
- Con: All of your VMs will be part of the same local network, controlled by your home router. Unless you have another Ethernet adapter in your server, you will not be able to segregate VMs into different networks.
Option 2
You have a managed L2 switch (or L3, doesn't matter) between your server and your home router. This approach and its variations give you the most sub-options for building your environment. If your server has multiple NICs, even better, but even a single NIC is not a big deal: you can put the ESXi management interface on the native (untagged) VLAN, or, if you're not familiar with that concept, simply tag your management VLAN along with all the other VLANs you will use in your environment. This allows you to segregate your networks based on different criteria.
- Pro: Flexibility and capable of advanced setup
- Con: Complex and somewhat advanced
We will be working with Option 2 in the following posts, as it gives us the most flexibility when building out a homelab.
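To give you a taste of what Option 2 looks like on the ESXi side, here's a minimal sketch (again with pyVmomi) that adds a VLAN-tagged port group to the default vSwitch. The host details, port group name and VLAN ID 20 are assumptions for illustration - your managed switch must carry the same tag on the port facing the server:

```python
#!/usr/bin/env python3
"""Sketch: add a VLAN-tagged port group to a standard vSwitch on ESXi.

Host/credentials, the vSwitch name and the VLAN ID are illustrative only.
VMs attached to this port group have their traffic tagged with VLAN 20,
which the managed switch can then segregate from your other networks.
"""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="192.168.1.50", user="root", pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    datacenter = content.rootFolder.childEntity[0]        # standalone host: "ha-datacenter"
    host = datacenter.hostFolder.childEntity[0].host[0]   # the ESXi HostSystem itself

    spec = vim.host.PortGroup.Specification()
    spec.name = "Lab-VLAN20"        # port group the VMs will attach to
    spec.vlanId = 20                # 802.1Q tag; 4095 would mean trunk/all VLANs
    spec.vswitchName = "vSwitch0"   # ESXi's default standard switch
    spec.policy = vim.host.NetworkPolicy()
    host.configManager.networkSystem.AddPortGroup(portgrp=spec)
finally:
    Disconnect(si)
```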
Conclusion
I will be working on future posts, but the general idea is to cover the following topics:
- EASY: ESXi installation and basic setup
- EASY: Your first VM in ESXi
- MEDIUM: Your first Linux server in ESXi
- MEDIUM: Your first Windows server in ESXi
- ADVANCED: Set up DNS for your homelab
- MEDIUM: Building a docker server from scratch
- MEDIUM: Docker container management
- ADVANCED: Reverse proxying in docker
- MEDIUM: Build a multimedia machine in docker
- ADVANCED: Virtualize your firewall and route traffic through it
- ADVANCED: Authenticate Linux VMs against Active Directory Server
- ADVANCED: Enable SSH Key-Based Authentication on Linux VMs
- ADVANCED: Configure Squid proxy on your Linux server
- ADVANCED: Use public VPN service (NordVPN) for your home traffic
- ADVANCED: Self-host a PBX and connect it to public PSTN
- ADVANCED: Secure your environment with 2FA
- ADVANCED: Use Gitea for docker compose version control
- ADVANCED: Use Ansible to automate VM management tasks
- ADVANCED: Utilize your NAS in your homelab
- ADVANCED: Set up an IPsec tunnel between locations
I will try and post updates regularly - if you are interested in these topics, please comment below. Also, if you would like me to cover any other topic, please leave a comment as well and I will see if that is something that I can do.
Until next time, stay safe and see you soon!