Provisioning Server + Metrics Monitoring

April 20, 2025

Hardware

To start the project, I wanted a cheap mini-PC to experiment with. I chose the Beelink S13 because it was on sale ($200) and has reasonable specs (Intel N150, 16GB RAM, 1TB SSD). The Beelink brand seemed reasonably well-liked based on reviews and some listicles I checked out. In any case, my intention is to add more compute nodes later to form a cluster, so all I really needed was something I could get started with.

Provisioning

When I received the server, it had Windows 11 on it. Windows is... not my favorite environment to work in, and not that well-suited to software development, so I decided to install Ubuntu Server instead. Specifically, I went with Ubuntu Server 22.04.2, a Long-Term Support (LTS) release. I followed this guide to create a bootable USB stick, plugged it into the Beelink, and rebooted while holding down the key that opens the boot-device menu so I could choose which drive to boot from. After a few minutes of playing "answer the prompts", I had Ubuntu Server running!
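For reference, the guide boils down to writing the installer ISO onto the USB stick. On a Linux machine that can be done with dd along these lines (a sketch; the ISO filename and the /dev/sdX device path are placeholders, so double-check the device with lsblk before running it):

```bash
# Identify the USB stick first (e.g. /dev/sdX) -- dd overwrites whatever device you point it at
lsblk

# Write the Ubuntu Server installer image to the stick (paths are placeholders)
sudo dd if=ubuntu-22.04.2-live-server-amd64.iso \
        of=/dev/sdX bs=4M status=progress conv=fsync
```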

My first goal was to be able to remotely administer the server from my laptop over SSH, so I set about figuring out the network setup. I plugged the Beelink into my router but couldn't get a network connection, until I realized I needed to activate the Ethernet interface. Turns out these servers require a lot of explicit instruction on what you want them to do! Once I got the port enabled, I was able to do a basic connectivity test (ping 8.8.8.8, which is Google's DNS server).

After that, I had to install SSH and then allow SSH access on port 22. While I was doing this, I went down a tangent reading Ubuntu's guide to security and got a little more familiar with the Uncomplicated Firewall (ufw). After I satisfied my curiosity, I checked the server's IP address with ip a, and then on my laptop I ran ssh jmassucco@<ip_address> and I was in! To make things a little easier, I assigned the Beelink a hostname (ubuntu-server-1) and, after a little experimenting, figured out that I could reach it from my laptop with ssh jmassucco@ubuntu-server-1.local. Now that I could reliably access the server, I disconnected it from my monitor and stowed it in a nondescript back corner of my desk.
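Condensed into commands, that whole setup looks roughly like this (a sketch; the netplan file name, interface name, and username will differ on other machines):

```bash
# Persistently enable the wired interface: edit the netplan config under /etc/netplan/
# (file and interface names vary), set the Ethernet device to `dhcp4: true`, then apply
sudo netplan apply

# Basic connectivity test against Google's public DNS server
ping -c 3 8.8.8.8

# Install the SSH server and allow port 22 through the Uncomplicated Firewall
sudo apt update && sudo apt install -y openssh-server
sudo ufw allow 22/tcp
sudo ufw enable

# Find the server's IP address, then connect from the laptop
ip a
ssh jmassucco@<ip_address>

# Give the machine a friendlier name for future logins
sudo hostnamectl set-hostname ubuntu-server-1
```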

Monitoring

The first real project I wanted to do was to set up self-health monitoring of the server. I was already familiar with Prometheus (at least in concept) and have used Grafana quite a bit, so I decided to go with that combo. I knew from the beginning that I wanted to use Docker to manage containers for each of the services running on my server, so with the help of ChatGPT, I got a simple docker-compose file written to deploy Prometheus and node-exporter.
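The result looked something like this (a simplified sketch of the compose file, not the exact one; image tags, ports, and volume paths are the common defaults):

```yaml
# docker-compose.yml -- simplified sketch of the Prometheus + node-exporter stack
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro   # scrape config (see below)
      - prometheus-data:/prometheus                          # metric storage
    restart: unless-stopped

  node-exporter:
    image: prom/node-exporter:latest
    pid: host
    volumes:
      - /:/host:ro,rslave            # expose the host filesystem read-only
    command:
      - "--path.rootfs=/host"        # report host metrics, not the container's
    ports:
      - "9100:9100"
    restart: unless-stopped

volumes:
  prometheus-data:
```

The host mount and --path.rootfs flag on node-exporter follow the node_exporter README's recommendation so it reports the host's CPU, RAM, and disk rather than the container's.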

Prometheus is basically a data scraper and time-series database that's optimized for collecting metrics. So, by itself it doesn't measure anything. But the Prometheus project provides a service called node-exporter which exposes standard metrics like CPU and RAM usage and makes them available for Prometheus to scrape. With the two of those deployed together, I was able to view metrics on my cluster in Prometheus at http://ubuntu-server-1.local:9090/. But I wanted nice plots that I could save and view together, and for that I needed Grafana.
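For Prometheus to actually scrape node-exporter, it needs a scrape config; that's a short YAML file along these lines (a sketch, with the typical 15-second interval and the compose service names as targets):

```yaml
# prometheus.yml -- minimal scrape config (job names and 15s interval are typical choices)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]       # Prometheus's own metrics
  - job_name: node
    static_configs:
      - targets: ["node-exporter:9100"]   # compose service name resolves on the Docker network
```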

Grafana is a data visualization tool that can be connected to any number of backing databases. You write queries against those databases and save them as Grafana panels; put a bunch of panels together and you've got a dashboard, a fixed set of queries plotted as graphs so you can visualize a wide array of data in a simple, familiar view. You can change the time range and the panels automatically re-query the database(s) for the right data, and you can set auto-refresh to watch live data as it comes in. Grafana also has a huge library of publicly shared dashboards that can be easily imported into your instance. So it was a total breeze to set up Grafana (docker-compose by ChatGPT), import a few public dashboards built for Prometheus + node-exporter, and pick my favorite one. Just like that, I was viewing live metrics on my cluster in beautiful graph plots!
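Adding Grafana was one more service in the same compose file, roughly like this (a sketch; the admin password is a placeholder and the volume name is arbitrary):

```yaml
# Grafana service added to the same docker-compose.yml (sketch)
services:
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=changeme   # placeholder -- set a real password
    volumes:
      - grafana-data:/var/lib/grafana         # persist dashboards and settings
    restart: unless-stopped

volumes:
  grafana-data:
```

From there, you add Prometheus as a data source (at http://prometheus:9090, since the compose service name resolves on the Docker network) and import public dashboards by their IDs from Grafana's dashboard library.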

After I got the metrics set up, I realized I also wanted to see metrics oriented around my Docker containers, and I found Google's open-source cAdvisor (short for Container Advisor), which aggregates Docker container resource-utilization stats and makes them available to Prometheus. After a quick docker-compose update and redeploy, and another public dashboard import to Grafana, I was viewing these metrics as well!
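The cAdvisor addition was another compose service plus one more scrape job in prometheus.yml; the read-only host mounts below are the ones cAdvisor's documentation suggests for reading Docker's stats (again a sketch):

```yaml
# cAdvisor service added to docker-compose.yml (sketch); a matching scrape job
# pointing at cadvisor:8080 also goes into prometheus.yml
services:
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    restart: unless-stopped
```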

At this point, I was very happy with my setup. I decided that my next step was to make what I'd built so far available on a public website (jamesmassucco.com). But I was out of daylight, so I decided that was a project for another day.