Grafana Monitoring with OSS Tools

2025-02-23

Why I Needed Monitoring

I'm currently developing a couple of side projects — EventHive.tech and Grupovi.bg. As both started accumulating real traffic, I figured it was time to set up proper monitoring on the servers.

The projects are hosted on AWS, but I decided to go with Grafana as my main monitoring tool instead of AWS CloudWatch. I started exploring the current landscape and what would fit best. These aren't microservices — they're Laravel monolith applications — so something like OpenTelemetry felt like overkill. I was already familiar with InfluxDB, Prometheus, and Elasticsearch, but I wanted to try something newer and more cohesive.

After some research I landed on my ideal stack:

  • Grafana — dashboards and visualization
  • Mimir — long-term metrics storage (Prometheus-compatible)
  • Loki — log aggregation
  • Alloy — lightweight agent for collecting and shipping metrics and logs

I also decided that everything on the monitoring server would run in Docker containers to keep things clean and dependency-free.


Setting Up Grafana

First, I created a Docker volume so dashboard data survives container restarts:

docker volume create grafana-storage

Then I spun up the Grafana container:

docker run -d \
  -p 3000:3000 \
  --name=grafana \
  --volume grafana-storage:/var/lib/grafana \
  -e "GF_SERVER_ROOT_URL=https://grafana.example.com/" \
  grafana/grafana

A quick docker ps confirmed it was running, and the UI answered on port 3000 with the default admin/admin login (Grafana prompts you to change it on first sign-in). Next, I needed to expose the instance publicly so I could access it from anywhere.
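Beyond docker ps, Grafana's built-in health endpoint gives a quick sanity check. A minimal sketch, assuming the port mapping above:

```shell
# Container state should be "running"
docker inspect --format '{{.State.Status}}' grafana

# Grafana's health endpoint returns a small JSON document;
# "database": "ok" means the instance is ready to serve
curl -s http://127.0.0.1:3000/api/health
```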

Nginx Reverse Proxy

I created a simple Nginx config at /etc/nginx/sites-available/grafana.conf that proxies traffic to port 3000:

server {
    listen 80;
    listen [::]:80;
    server_name grafana.example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}

I enabled the config with:

sudo ln -s /etc/nginx/sites-available/grafana.conf /etc/nginx/sites-enabled/grafana.conf
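Whenever I touch an Nginx config, I validate it before reloading, since a syntax error would otherwise take the proxy down:

```shell
# Validate the configuration, then reload only if it parses cleanly
sudo nginx -t && sudo systemctl reload nginx
```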

Adding HTTPS

I generated SSL certificates with Certbot:

sudo certbot certonly --webroot -w /var/www/letsencrypt -d grafana.example.com
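Certbot's package sets up automatic renewals via a systemd timer or cron job; a dry run confirms the webroot challenge will keep working once the HTTPS config below is in place:

```shell
# Simulate a renewal without touching the real certificates
sudo certbot renew --dry-run
```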

Then updated the Nginx config to handle HTTPS on port 443 and redirect HTTP. The final version looked like this:

server {
    listen 80;
    listen [::]:80;
    server_name grafana.example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name grafana.example.com;

    ssl_certificate     /etc/letsencrypt/live/grafana.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/grafana.example.com/privkey.pem;

    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    client_max_body_size 20m;

    location / {
        proxy_pass http://127.0.0.1:3000;

        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_http_version 1.1;
        proxy_set_header Upgrade           $http_upgrade;
        proxy_set_header Connection        "upgrade";

        proxy_read_timeout  300;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
    }
}

A quick test confirmed everything was working fine under HTTPS.


Setting Up Mimir (Metrics Storage)

Next up was Mimir. I followed a similar approach — create the directories and a Docker volume:

sudo mkdir -p /srv/mimir/data

I had to change the ownership so the container user could read the files:

sudo chown -R 10001:10001 /srv/mimir

10001 is the UID of the default user inside Grafana Labs containers. Then I created the volume:

docker volume create mimir-data

Mimir Configuration

I created the config file at /srv/mimir/mimir.yaml:

multitenancy_enabled: false

server:
  http_listen_port: 9009
  grpc_listen_port: 9095

common:
  storage:
    backend: filesystem
    filesystem:
      dir: /data

blocks_storage:
  backend: filesystem
  filesystem:
    dir: /data/blocks

compactor:
  data_dir: /data/compactor

distributor:
  ring:
    kvstore:
      store: inmemory

ingester:
  ring:
    kvstore:
      store: inmemory
    replication_factor: 1

limits:
  max_global_series_per_user: 1000000

With the config in place, I started the container:

docker run -d \
  --name mimir \
  -p 9009:9009 \
  -v /srv/mimir/mimir.yaml:/etc/mimir/mimir.yaml:ro \
  -v mimir-data:/data \
  grafana/mimir:latest \
  -config.file=/etc/mimir/mimir.yaml \
  -target=all

A quick docker ps to verify, and we're good to proceed. The next steps were the same as Grafana — Nginx config for port 80, Certbot for certificates, then open port 443.
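Besides docker ps, Mimir exposes a readiness endpoint that's handy for a quick local check before wiring up Nginx:

```shell
# Returns "ready" once all of Mimir's internal modules have started
curl -s http://127.0.0.1:9009/ready
```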

Important: Secure Your Mimir Instance

NEVER expose Mimir to the public without authentication. Always put it behind basic auth or restrict access by IP. I set up basic authentication like this:

sudo htpasswd -c /etc/nginx/.htpasswd-mimir username
sudo chmod 640 /etc/nginx/.htpasswd-mimir
sudo chown root:www-data /etc/nginx/.htpasswd-mimir

Note that -c creates the file and overwrites it if it already exists, so drop the flag when adding more users.

If you're missing htpasswd, install it first (Debian/Ubuntu):

sudo apt-get update && sudo apt-get install -y apache2-utils

Here's the final Nginx config for Mimir with basic auth enabled:

server {
  listen 80;
  server_name mimir.example.com;
  return 301 https://$host$request_uri;
}

server {
  listen 443 ssl http2;
  server_name mimir.example.com;

  # TLS certs (use your Let's Encrypt paths or your own)
  ssl_certificate     /etc/letsencrypt/live/mimir.example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/mimir.example.com/privkey.pem;

  client_max_body_size 50m;
  proxy_read_timeout 300s;
  proxy_send_timeout 300s;

  # Basic Auth (recommended)
  auth_basic "Mimir";
  auth_basic_user_file /etc/nginx/.htpasswd-mimir;

  # (Optional but recommended) allowlist your writers
  # allow 203.0.113.10;
  # allow 198.51.100.0/24;
  # deny all;

  location / {
    proxy_pass http://127.0.0.1:9009;
    proxy_http_version 1.1;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
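To confirm the auth actually bites, I like to hit the public endpoint both without and with credentials (substitute your own username and password):

```shell
# Without credentials: Nginx should answer 401
curl -s -o /dev/null -w '%{http_code}\n' https://mimir.example.com/ready

# With credentials: the request passes through to Mimir
curl -s -u username:password https://mimir.example.com/ready
```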

Setting Up Loki (Log Storage)

The process for Loki should feel familiar by now. Start with creating the directories:

sudo mkdir -p /srv/loki/{config,data,wal}
sudo chown -R 10001:10001 /srv/loki

Loki Configuration

Create the config file at /srv/loki/config/loki.yaml:

auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2024-01-01
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

storage_config:
  filesystem:
    directory: /loki

ingester:
  wal:
    enabled: true
    dir: /wal

limits_config:
  retention_period: 168h   # 7 days; adjust or remove if you don't want retention
  allow_structured_metadata: true

compactor:
  working_directory: /loki/compactor
  retention_enabled: true
  delete_request_store: filesystem

Then spin up the container:

docker run -d \
  --name loki \
  --restart unless-stopped \
  -p 3100:3100 \
  -v /srv/loki/config/loki.yaml:/etc/loki/loki.yaml:ro \
  -v /srv/loki/data:/loki \
  -v /srv/loki/wal:/wal \
  grafana/loki:latest \
  -config.file=/etc/loki/loki.yaml

After that, follow the same procedure as Mimir — set up basic authentication (with a separate password file), configure Nginx for port 80, generate certificates with Certbot, and open port 443.
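Before moving on, it's worth pushing a test log line through the public endpoint to prove the whole chain (Nginx, basic auth, Loki) works. The username and password here are whatever you put in Loki's htpasswd file:

```shell
# Loki's push API expects nanosecond-precision timestamps as strings
ts="$(date +%s%N)"

# A successful push returns HTTP 204 with an empty body
curl -s -u username:password \
  -H 'Content-Type: application/json' \
  -X POST "https://loki.example.com/loki/api/v1/push" \
  -d "{\"streams\":[{\"stream\":{\"job\":\"smoke-test\"},\"values\":[[\"${ts}\",\"hello from curl\"]]}]}"
```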

With that, all the storage components were up and ready to receive data.


Setting Up Alloy (Collection Agent)

Now it was time for Alloy — the agent that runs on the server you want to monitor, collecting metrics and logs and shipping them to Mimir and Loki.

This time I installed it directly on the host instead of using a container. You can also run Alloy as a sidecar container if your application runs on Docker or Kubernetes.

Installation

Following the official Alloy documentation:

wget -q -O - https://apt.grafana.com/gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/grafana.gpg
echo "deb [signed-by=/usr/share/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update

Then install it:

sudo apt-get install alloy
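Depending on the distro, the service may not be enabled after installation, so I made sure it starts on boot:

```shell
sudo systemctl enable --now alloy
sudo systemctl status alloy --no-pager
```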

Alloy Configuration

I'll be honest — I had to restart the Alloy service many times before getting the configuration right. But it all worked out in the end. The config lives at /etc/alloy/config.alloy (you need root access to edit it).

Here's what mine looks like:

logging {
  level = "warn"
}

prometheus.exporter.unix "default" {
  include_exporter_metrics = true
  disable_collectors       = ["mdadm"]
}

prometheus.scrape "default" {
  targets = array.concat(
    prometheus.exporter.unix.default.targets,
    [{
      // Self-collect metrics
      job         = "alloy",
      __address__ = "127.0.0.1:12345",
    }],
  )

  // Nothing is attached here yet, so these samples are dropped.
  // Point this at prometheus.remote_write.mimir.receiver to keep them.
  forward_to = []
}

prometheus.exporter.unix "node" {}

prometheus.scrape "node" {
  targets    = prometheus.exporter.unix.node.targets
  forward_to = [prometheus.remote_write.mimir.receiver]
}

local.file_match "system_logs" {
  path_targets = [
    { __path__ = "/var/log/syslog" },
    { __path__ = "/var/log/auth.log" },
  ]
}

loki.source.file "system" {
  targets    = local.file_match.system_logs.targets
  forward_to = [loki.write.default.receiver]
}

prometheus.remote_write "mimir" {
  endpoint {
    url = "https://mimir.example.com/api/v1/push"

    basic_auth {
      username = "username"
      password = "password"
    }
  }
}


local.file_match "laravel_logs" {
  path_targets = [
    { __path__ = "/var/www/*/storage/logs/*.log" },
  ]
}

loki.source.file "laravel" {
  targets    = local.file_match.laravel_logs.targets
  forward_to = [loki.write.default.receiver]
}

loki.write "default" {
  endpoint {
    url = "https://loki.example.com/loki/api/v1/push"
    basic_auth {
      username = "username"
      password = "password"
    }
  }
}

This isn't the most polished configuration — I mostly expanded the default one — but it gave me both metrics and logs flowing into Grafana. Feel free to use it as a starting template and refine it for your own needs.

Don't forget to restart the service for changes to take effect:

sudo systemctl restart alloy
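When the config doesn't work on the first try (it didn't for me), the service logs and Alloy's local debugging UI on port 12345 are the places to look:

```shell
# Recent service logs -- config parse errors show up here
journalctl -u alloy -n 50 --no-pager

# Alloy serves a debugging UI on 127.0.0.1:12345 by default,
# showing each component and whether it's healthy
curl -s http://127.0.0.1:12345/-/ready
```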


Wrapping Up

With all of that in place, I had a fully working monitoring stack — metrics in Mimir, logs in Loki, and everything visualized in Grafana. The only thing left was to build some dashboards and start keeping an eye on my servers.
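For the data sources, you can click through the Grafana UI, or provision them from a file so the setup is reproducible. Here's a sketch of a file like /etc/grafana/provisioning/datasources/monitoring.yaml (the file name, hostnames, and credentials are placeholders for your own; note that Mimir serves its Prometheus-compatible query API under the /prometheus prefix):

```yaml
apiVersion: 1

datasources:
  # Mimir speaks the Prometheus query API, so the type is "prometheus"
  - name: Mimir
    type: prometheus
    access: proxy
    url: https://mimir.example.com/prometheus
    basicAuth: true
    basicAuthUser: username
    secureJsonData:
      basicAuthPassword: password

  - name: Loki
    type: loki
    access: proxy
    url: https://loki.example.com
    basicAuth: true
    basicAuthUser: username
    secureJsonData:
      basicAuthPassword: password
```

Since Grafana runs in a container here, the provisioning directory would need to be mounted in, for example with an extra -v flag mapping a host directory to /etc/grafana/provisioning.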

Thank you for reading, and let's connect!

Let's Connect

Whether you want to discuss a project, talk about the latest in web development, or just say hello — I'd love to hear from you.

Send me an email.