---
author: None
date: 2025-02-07
gitea_url: "http://git.nowherejezfoltodf4jiyl6r56jnzintap5vyjlia7fkirfsnfizflqd.onion/nihilist/blog-contributions/issues/0"
xmr: None
---
# **Anonymous Server Monitoring**
## What is server monitoring?
When deploying compute resources (bare-metal, VPSes or more abstract work units) you will have to manage a living system. This system will **always** have the following characteristics:
* Limited resources: RAM, CPU cycles, network bandwidth and storage space are neither infinite nor free.
* Evolving requirements: depending on how you use your services and how many concurrent users you have, you might need more or fewer resources than what you initially purchased.
* Nominal operating parameters: the range of RAM and CPU use, temperatures and so on in which your service performs as expected.
The first item is fixed and determined only by your financial constraints. The other two are constantly evolving and thus must be **monitored**.
## How do I do it?
How you monitor your systems can vary based on your technical requirements. It can be as simple as logging in once a week, checking the output of some diagnostic command and calling it a day.
This will give you a snapshot, but you will miss a lot of important information.
You can also set up an elaborate system that reports current metrics and trends and sends you capacity-planning alerts based on the data obtained! You will have to find the middle ground yourself; this article proposes one setup that you can tweak whichever way you need.
## Risks of doing it improperly
Accessing your server for monitoring purposes carries, from a risk perspective, much the same exposure as any other administration task or interaction with the services hosted on it. If done improperly (say, logging in over the clearnet from your home address) you've just handed anyone looking an undeniable link between your overt identity and your clandestine activities, which should never happen since you're supposed to [segment your internet uses](../internetsegmentation/index.md).
A **fail-closed** system is what you should strive for: opsec best practices should be the default, and if a technical issue prevents you from following them (an attack on tor, a flaky network, a client- or server-side misconfiguration) the system should deny access altogether in order to keep you safe.
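As a concrete illustration of fail-closed access, here is a sketch of an SSH client configuration that can only ever reach the server through tor's SOCKS proxy: if tor is down, the connection simply fails instead of silently falling back to the clearnet. The host alias and onion address below are placeholders, not values from this article.

```
# ~/.ssh/config -- hypothetical fail-closed SSH access over tor
# if nothing is listening on 127.0.0.1:9050 the ProxyCommand fails,
# and no direct clearnet connection is ever attempted
Host monitored-server
    HostName youronionaddress.onion
    User admin
    ProxyCommand nc -x 127.0.0.1:9050 -X 5 %h %p
```

The important property is that there is no code path that bypasses the proxy: a misbehaving tor daemon results in no connection at all, never in a deanonymizing one.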
## What if I don't monitor my servers?
If you don't properly monitor your infrastructure you will face the following consequences sooner or later:
* service instability: you won't notice when things start going awry
* cost overruns: you will end up paying more than you need to in order to deliver the same service
* undetected attacks: attacks that impact your services can go unnoticed when the cues (e.g. high RAM consumption from cryptojacking) are not picked up
* And lastly, if you are going to run a sensitive service on a remote server, it is on borrowed time anyway, as [we have explained previously](../cloud_provider_adversary/index.md); you therefore need to be able to easily detect downtime on any of your servers, while at the same time maintaining your anonymity.
# **Risks**
Whenever you connect to your server, such as for monitoring or other administrative tasks, if you do so without going through Tor, then the cloud provider knows that you are the one connecting to that server. Even when using SSH you will leave a trail of metadata all the way back to your access point. That might be enough to get your door busted down the line if you intend on hosting anything sensitive on that server.
In the following part of the post we will look into how to set up advanced monitoring tools so you don't have to keep an eye on a bunch of tmux sessions with glances/top open in order to know the behaviour of your systems over time.
This tutorial assumes that you have acquired your servers anonymously via non-KYC cloud providers, and that you only ever access them anonymously through tor. See [this article](../anonymousremoteserver/index.md) if you have not already done so.
...
...
Done? Let's proceed.
# **Target Architecture**
First, let's have a look at the network topology we'll be building:
![](architecture.png)
* Our Whonix workstation will connect through tor to a central monitoring server in order to access the grafana dashboard containing our monitoring data.
* The monitoring server will itself connect through tor to the target monitored servers using prometheus.
# **Setting up the central monitoring server**
First you want to set up your central monitoring server. For ease of use and better performance we are going to colocate the prometheus collector and grafana on the same machine.
## Required installation
To get started we need the following software on the machine:
* tor: anonymizes traffic
* prometheus: aggregates metrics
* prometheus-node-exporter: exports local server metrics
* docker: to run grafana
![](install.png)
## Tor Configuration
### On the target server to be monitored
Run the following as root to create a hidden service exposing the prometheus node exporter:
```shell
apt update
apt install prometheus-node-exporter tor
systemctl stop tor #stop the tor service
mkdir -p /var/lib/tor/onion/prometheus/authorized_clients #create the client auth keys folder to store our second layer of authentication
chmod 700 -R /var/lib/tor/onion/prometheus #restrict access: tor refuses hidden service directories readable by others
vi /etc/tor/torrc #edit the torrc file to add content
cat /etc/tor/torrc
```
```
AutomapHostsSuffixes .onion,.exit
DataDirectory /var/lib/tor
SOCKSPort 127.0.0.1:9050 IsolateDestAddr
HiddenServiceDir /var/lib/tor/onion/prometheus
HiddenServicePort 9100 127.0.0.1:9100
```
```shell
tor-client-auth-gen
private_key=descriptor:x25519:DBQW3GP5FCN2KQBDKTDKDAQUQWBEGBZ5TFYJE4KTJFBUOJPKYZBQ #copy this private key to the central monitoring server, its prometheus node will need it
public_key=descriptor:x25519:6HDNHLLKIFNU5Q6T75B6Q3GBYDO5ZF4SQUX7EYDEKWNLPQUWUBTA
echo "descriptor:x25519:6HDNHLLKIFNU5Q6T75B6Q3GBYDO5ZF4SQUX7EYDEKWNLPQUWUBTA" > /var/lib/tor/onion/prometheus/authorized_clients/0.auth #register the client's public key (tor looks for .auth files in authorized_clients/)
chown debian-tor:debian-tor -R /var/lib/tor #make tor the owner of this folder
systemctl start tor #restart tor
systemctl status tor #check that everything works
cat /var/lib/tor/onion/prometheus/hostname
[clientaddr].onion
```
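Before moving on, it is worth sanity-checking the setup on the monitored server. The following is a quick sketch of what you could run locally:

```shell
# the node exporter should answer on loopback
curl -s http://127.0.0.1:9100/metrics | head -n 3
# the hostname file appears once tor has published the hidden service
test -s /var/lib/tor/onion/prometheus/hostname && echo "hidden service ready"
```

If the first command prints no metrics, fix the exporter before debugging anything tor-related; it saves a lot of time to rule out the local service first.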
What's that `tor-client-auth-gen`, you ask? In order to protect this critical service from attacks against the grafana server or from stolen credentials, we need more than security by obscurity (relying on the attacker not knowing our hidden service address).
When a client tries to connect to an onion service, it requests the service's descriptor from a tor directory server, which gives it a path to a rendezvous point where the two can talk to each other. The keys we just created are used to encrypt this descriptor: without the proper private key, an attacker won't be able to connect even if they know the onion address, because they won't be able to find the rendezvous point.
This is better than basic-auth for the following reasons:
* It is more resistant to bruteforce attacks
* It also protects against flaws in the application itself
* It also protects you from fingerprinting attacks, as no traffic can reach you without the required secret key
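You can see the client authorization doing its job by trying to reach the service from a tor client that does not have the private key. A hypothetical check, where `[clientaddr]` is the placeholder onion address from above:

```shell
# without the key in ClientOnionAuthDir, tor cannot decrypt the service
# descriptor, so this request fails before any traffic reaches the server
curl --socks5-hostname 127.0.0.1:9050 http://[clientaddr].onion:9100/metrics
# this should fail; no response ever comes back from the hidden service
```

Contrast this with basic-auth, where an attacker's request still reaches and exercises your web server before being rejected.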
### On the central monitoring server
The prometheus collector will only be accessed locally by grafana so it doesn't need to be accessible over tor. Grafana, on the other hand, does.
Let's start with tor's configuration; run the following commands as root:
```shell
systemctl stop tor #stop the tor service
mkdir -p /var/lib/tor/auth_keys #create the client auth keys folder to store our second layer of authentication
mkdir -p /var/lib/tor/onion/grafana/authorized_clients #create the hidden service folder for grafana, with its own client auth keys folder
chmod 700 -R /var/lib/tor/auth_keys #set restrictive permissions
#the line below allows your aggregator to connect to your monitored server; without it no requests can even reach it
#note: the part before the first colon is the onion address of the monitored server without its .onion suffix
echo "[prometheusclientaddr]:descriptor:x25519:DBQW3GP5FCN2KQBDKTDKDAQUQWBEGBZ5TFYJE4KTJFBUOJPKYZBQ" > /var/lib/tor/auth_keys/prometheus_server.auth_private
chmod 700 -R /var/lib/tor/onion #set restrictive permissions
vi /etc/tor/torrc #edit the torrc file to add content
cat /etc/tor/torrc
```
```
AutomapHostsSuffixes .onion,.exit
DataDirectory /var/lib/tor
SOCKSPort 127.0.0.1:9050 IsolateDestAddr
HiddenServiceDir /var/lib/tor/onion/grafana
HiddenServicePort 80 127.0.0.1:3000
ClientOnionAuthDir /var/lib/tor/auth_keys
```
```shell
tor-client-auth-gen
private_key=descriptor:x25519:YCPURSYN4FL4QKQSXFTGLYNBHOVVRCQYRZLFHMZFCUFU5R6DCRMQ
public_key=descriptor:x25519:UUQW4LIO447WRQOSRSNDXEW5NZMSR3CYOP65ZIFWH6G2PUKWV5WQ
echo "YCPURSYN4FL4QKQSXFTGLYNBHOVVRCQYRZLFHMZFCUFU5R6DCRMQ" > ~/mygrafana_auth_key #save the private key, you will paste it into the tor browser later
echo "descriptor:x25519:UUQW4LIO447WRQOSRSNDXEW5NZMSR3CYOP65ZIFWH6G2PUKWV5WQ" > /var/lib/tor/onion/grafana/authorized_clients/0.auth #register the client's public key
chown debian-tor:debian-tor -R /var/lib/tor #make tor the owner of this folder
systemctl start tor #restart tor
systemctl status tor #check that everything works
```
And that's all you'll need: one hidden service, for grafana.
You'll find its hostname in `/var/lib/tor/onion/grafana/hostname`.
## Prometheus server configuration on the central monitoring server
Clean and simple: we scrape our servers every 10 seconds for new data, configure a proxy URL so that remote scraping happens over tor through our SOCKSPort, and define our scraping targets.
```shell
vi /etc/prometheus/prometheus.yml
cat /etc/prometheus/prometheus.yml
```
```yaml
alerting:
  alertmanagers: []
global:
  scrape_interval: 10s
remote_read: []
remote_write: []
scrape_configs:
- job_name: remote-nodes
  proxy_url: socks5h://localhost:9050
  static_configs:
  - labels: {}
    targets:
    - "[clientaddr].onion:9100"
- job_name: local-node
  static_configs:
  - labels: {}
    targets:
    - localhost:9100
```
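The 10s scrape interval directly drives disk usage on the monitoring server, so it is worth a back-of-the-envelope estimate. The figures below (about 1000 series per node exporter, about 2 bytes per compressed sample, prometheus' default 15-day retention) are rough assumptions for illustration, not measurements from this setup:

```shell
series=1000          # metric series exposed by one node exporter (assumption)
interval=10          # scrape_interval in seconds, as configured above
bytes_per_sample=2   # rough average after prometheus' compression (assumption)
retention_days=15    # prometheus' default retention period
samples_per_sec=$(( series / interval ))
disk_bytes=$(( samples_per_sec * bytes_per_sample * retention_days * 86400 ))
echo "$disk_bytes bytes"   # 259200000 bytes, roughly 260 MB per monitored node
```

At that order of magnitude a handful of monitored nodes fits comfortably on any small VPS; if you monitor dozens of hosts, budget your storage accordingly or lengthen the scrape interval.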
This configuration will make the central monitoring server behave in the following way:
* scrape itself directly to collect its own data (the local node exporter is only exposed on loopback for this)
* scrape the target monitored server through tor via the SOCKS proxy
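Once the configuration is in place you can validate it and check that both targets come up. A sketch using prometheus' bundled tooling, with the paths as configured above:

```shell
# validate the configuration file before (re)starting the service
promtool check config /etc/prometheus/prometheus.yml
systemctl restart prometheus
# after a couple of scrape intervals, both jobs should report up
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"'
```

If the remote target stays down, test the tor path by hand with `curl --socks5-hostname` against the onion address before suspecting prometheus itself.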
## Grafana configuration on the central monitoring server
Grafana is already reachable through the hidden service we configured, so all that's left is to start it (as root) and then access it through the Tor Browser:
```shell
docker run -d -p 127.0.0.1:3000:3000 --name=grafana grafana/grafana
```
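One caveat with the one-liner above: everything lives inside the container, so your dashboards vanish if it is ever recreated. A slightly more durable variant (a sketch; the volume name and password are placeholder values you should change):

```shell
# a named volume keeps dashboards and settings across container upgrades
docker run -d \
  -p 127.0.0.1:3000:3000 \
  -v grafana-data:/var/lib/grafana \
  -e GF_SECURITY_ADMIN_PASSWORD='change-me' \
  --restart unless-stopped \
  --name=grafana grafana/grafana
```

Binding to 127.0.0.1 remains essential either way: only tor should be able to reach grafana, never the public interface.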
# **Connecting to our grafana instance**
On your monitoring server you can find your hostname at `/var/lib/tor/onion/grafana/hostname`. Use it in the Tor Browser to reach your instance. You will be prompted for your private key: paste the key you saved earlier in `~/mygrafana_auth_key`. ![](grafana_login.png)
# **Configuring the data sources**
Next we need to tell grafana to use prometheus as a data source:
![](add_datasource.png)
Now, let's configure it (pointing it at localhost:9090, where the prometheus API listens):
![](datasource_config.png)
And voilà! We have simple system monitoring over tor in a dashboard:
![](example_dashboard.png)
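If you would rather build your own panels than import a ready-made node-exporter dashboard, here are a few starting-point queries. These use standard node-exporter metric names; the 5-minute rate window is an arbitrary choice:

```
# CPU usage in percent, per instance
100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])))
# available memory as a fraction of total
node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes
# free disk space on the root filesystem
node_filesystem_avail_bytes{mountpoint="/"}
```

Each of these makes a good single-stat or graph panel, and together they cover the nominal operating parameters discussed at the start of this article.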
# **Conclusion**
In this article we saw why and how to implement anonymous server monitoring for your infrastructure. If you are running hidden services that store any form of sensitive data, keeping them under constant monitoring is a must: you need to detect downtime quickly, without compromising your identity or the rest of your infrastructure.