


[{"content":" In this page you will find my articles. # ","date":"20 February 2026","externalUrl":null,"permalink":"/posts/","section":"articles","summary":"In this page you will find my articles. # ","title":"articles","type":"posts"},{"content":"","date":"20 February 2026","externalUrl":null,"permalink":"/tags/cgroups/","section":"Tags","summary":"","title":"Cgroups","type":"tags"},{"content":"This article dives into how containers actually run on GNU/Linux. Not how to use Docker — but what happens underneath when you type docker run: how the kernel isolates processes with namespaces, controls their resources with cgroups, and why understanding this changes the way you troubleshoot, optimize, and secure your systems.\nThis article starts from the ground up. If you\u0026rsquo;re already familiar with the basics of containerization, feel free to skip ahead to the Namespaces section. From monoliths to containers # this isn\u0026rsquo;t a strict timeline — it\u0026rsquo;s the evolution of an idea Monolithic app 60\u0026#39; - 2000 One codebase, one deployment, one server. Everything runs together. Microservices ~2012 The monolith breaks apart. Independent services, independent teams, independent deployments. Virtual machines ~2008 - 2010 Each service gets its own OS. Full isolation, but heavy — minutes to boot, gigabytes of overhead. Containers ~2013 Same kernel, isolated processes. Milliseconds to start, megabytes of overhead. Namespaces + cgroups under the hood. The monolithic era # For decades, applications were built as a single, large unit — a monolith. One codebase, one deployment, one process. Your e-commerce platform? That was one application handling user authentication, product catalog, shopping cart, payment processing, order management, and email notifications. All compiled together, all deployed together, all running in the same process on the same server. This worked. Until it didn\u0026rsquo;t. 
When your product catalog needed more computing power because of a sale event, you couldn\u0026rsquo;t scale just the catalog. You had to scale the entire application — deploy the whole thing on a bigger server or duplicate everything on multiple servers, even though 90% of your code didn\u0026rsquo;t need extra resources. When a developer pushed a bug in the notification system, the entire application went down — including payments. When the team grew from 5 to 50 developers, everyone was working on the same codebase, stepping on each other\u0026rsquo;s toes, and a single deployment could take hours of coordination. The monolith became a bottleneck for both the infrastructure and the people building it.\nThe microservices answer # The idea behind microservices is simple: break the monolith into small, independent services. Each service does one thing, runs as its own process, communicates with others through APIs, and can be deployed independently. Your e-commerce platform becomes: an auth service, a catalog service, a cart service, a payment service, an order service, a notification service. The catalog team can deploy 10 times a day without touching payments. If the notification service crashes, people can still buy things. Need more capacity for the catalog during Black Friday? Scale just that service, not the whole system. But microservices created a new problem: instead of one application to deploy and manage, now you have 20. Or 100. Or 500. Each one needs its own environment, its own dependencies, its own runtime. The catalog runs Python 3.11, payments runs Java 17, notifications runs Node.js 20. How do you run all of these on the same servers without them conflicting with each other?\nVirtual machines: the first attempt # The initial answer was virtual machines. Run each service in its own VM with its own operating system. Complete isolation, problem solved. Except a VM is heavy. 
Each one runs a full operating system kernel, needs its own allocated RAM even if mostly unused, takes minutes to boot, and consumes disk space for an entire OS image. Running 50 microservices in 50 VMs means 50 copies of Linux eating your resources. You\u0026rsquo;re paying for 50 kernels when all you needed was 50 isolated processes.\nContainers: the lightweight answer # Containers took a different approach. Instead of emulating a full machine with its own kernel, what if you could just make a process think it\u0026rsquo;s alone on the system? Same kernel, shared with the host, but the process sees its own filesystem, its own network, its own process tree, and can only use the resources you allow. A container starts in milliseconds, not minutes. It uses megabytes, not gigabytes. You can run hundreds on a single machine. And the process inside has no idea it\u0026rsquo;s not running on a dedicated server. This is where Docker came in. Docker didn\u0026rsquo;t invent the underlying technology — Linux namespaces have existed since 2002, cgroups since 2008. What Docker did was package it into a tool that made containers accessible. A Dockerfile to define your environment, docker build to create an image, docker run to launch it. Suddenly, any developer could containerize an application in minutes. But beneath all of this — Docker, Kubernetes, container orchestration — there are just two Linux kernel features doing the real work: namespaces for isolation and cgroups for resource control. That\u0026rsquo;s what the rest of this article is about.\nNamespaces and Cgroups # What are namespaces? # Namespaces are a Linux kernel feature that gives a process its own isolated view of the system. Instead of seeing everything — all processes, all network interfaces, all mount points — a process inside a namespace only sees what the kernel allows it to see. Nothing is virtualized, nothing is emulated. 
It\u0026rsquo;s just a filter on what already exists.\nTypes of namespaces # PID # Every process in Linux has a Process ID. Normally they all share the same numbering — PID 1 is init/systemd, and everything else counts up from there. A PID namespace gives a process its own independent process tree. The first process inside a new PID namespace becomes PID 1 in that namespace. This is important because PID 1 has a special role in Linux: it adopts orphaned processes and receives signals differently. Inside the namespace, this process is init. But from the outside, it\u0026rsquo;s just a regular process with a normal PID like 58421. This means a process inside a PID namespace can only see itself and its children. It has no idea that thousands of other processes exist on the host. If it runs ps aux, it sees a nearly empty system. If it tries to kill a PID outside its namespace, it can\u0026rsquo;t — that PID simply doesn\u0026rsquo;t exist in its world.\n# create a new PID namespace sudo unshare --pid --fork --mount-proc /bin/bash # inside the namespace ps aux # you\u0026#39;ll see only bash (PID 1) and ps (PID 2) echo $$ # output: 1 — you ARE PID 1 in this namespace # open another terminal and check ps aux | grep unshare # you\u0026#39;ll see the process with a normal PID like 58421 The --mount-proc flag is important: without it, ps reads /proc from the host and you\u0026rsquo;d still see everything. By remounting /proc, you tell the kernel to show only the processes visible inside this namespace. Real-world impact: this is why docker top shows different PIDs than what you see inside the container. Same process, two different PID trees.\nNET # By default, all processes share the same network stack: the same interfaces (eth0, lo), the same routing table, the same iptables rules, the same ports. A NET namespace gives a process its own completely independent network stack. A fresh NET namespace starts with nothing — not even loopback. Just a blank network. 
You have to set it up: create interfaces, assign IPs, configure routes. This is exactly what Docker does every time you start a container. The typical pattern is a veth pair — a virtual ethernet cable with two ends. One end goes inside the namespace, one stays outside. Traffic in, traffic out, like a physical cable connecting two machines, except it\u0026rsquo;s all inside the same kernel.\n# create a named network namespace sudo ip netns add test_ns # check: it has nothing sudo ip netns exec test_ns ip addr # only the loopback, and it\u0026#39;s DOWN # create a veth pair sudo ip link add veth0 type veth peer name veth1 # move one end into the namespace sudo ip link set veth1 netns test_ns # configure the host side sudo ip addr add 10.0.0.1/24 dev veth0 sudo ip link set veth0 up # configure the namespace side sudo ip netns exec test_ns ip addr add 10.0.0.2/24 dev veth1 sudo ip netns exec test_ns ip link set veth1 up sudo ip netns exec test_ns ip link set lo up # test connectivity sudo ip netns exec test_ns ping 10.0.0.1 # it works — two isolated network stacks talking through a virtual cable # cleanup sudo ip netns delete test_ns This is why containers can all bind to port 80 without conflicts — they each have their own NET namespace with their own port space. And this is exactly how Kubernetes networking starts: pods get their own NET namespace, then CNI plugins (Calico, Flannel, Cilium) wire them together.\nMNT # The MNT namespace isolates mount points. A process in its own MNT namespace can mount and unmount filesystems without affecting the host or other namespaces. This is the oldest namespace — it was the first one implemented in Linux 2.4.19 — and it\u0026rsquo;s what makes containers see their own root filesystem. When Docker starts a container, it creates a new MNT namespace and mounts the container image as the root filesystem. Inside, the process sees / as its image. 
Outside, the host filesystem is untouched.\n# create a new mount namespace sudo unshare --mount /bin/bash # mount a tmpfs somewhere — only visible inside this namespace mount -t tmpfs none /mnt echo \u0026#34;only I can see this\u0026#34; \u0026gt; /mnt/secret.txt # from another terminal on the host cat /mnt/secret.txt # No such file — the mount doesn\u0026#39;t exist in the host namespace Combined with PID namespace: the container sees its own filesystem AND only its own processes. The illusion of a separate machine is getting stronger.\nUTS # UTS stands for \u0026ldquo;Unix Time-Sharing\u0026rdquo; (historical name, don\u0026rsquo;t worry about it). It isolates the hostname and the domain name. That\u0026rsquo;s it.\nsudo unshare --uts /bin/bash hostname container-01 hostname # output: container-01 # from the host hostname # output: your-original-hostname — unchanged Simple but necessary. Every Docker container has its own hostname (usually the container ID). Without UTS namespace, changing the hostname would affect the entire host.\nUSER # This is the most powerful and the most security-critical one. A USER namespace remaps user and group IDs. A process can be root (UID 0) inside the namespace but mapped to an unprivileged user (say UID 100000) outside. This is the foundation of rootless containers. The application inside the container thinks it\u0026rsquo;s root and can do root things within its namespace — install packages, bind to port 80, change file ownership. But on the host, it\u0026rsquo;s running as a regular user with no elevated privileges. If the process escapes the container, it\u0026rsquo;s nobody.\n# create a user namespace (no sudo needed!) unshare --user --map-root-user /bin/bash whoami # output: root id # output: uid=0(root) gid=0(root) # but from the host, this process runs as your regular user # it CANNOT actually do privileged operations on the host The --map-root-user flag maps your UID outside to UID 0 inside. 
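You can confirm the remap from inside that unshare shell — a quick check (the middle column shows your real host UID, so the exact values vary by system):

```shell
# inside the user namespace: each line reads
# <uid inside> <uid outside> <length of the mapped range>
cat /proc/self/uid_map
```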
The kernel tracks the mapping in /proc/\u0026lt;PID\u0026gt;/uid_map and /proc/\u0026lt;PID\u0026gt;/gid_map. This is why Podman\u0026rsquo;s rootless mode works: it creates a USER namespace where the container process is root inside but your regular user outside. No daemon running as root needed, unlike traditional Docker.\nIPC # IPC stands for Inter-Process Communication. This namespace isolates shared memory segments, semaphores, and message queues. Processes in different IPC namespaces can\u0026rsquo;t communicate through these mechanisms. Without this isolation, a process in one container could read shared memory created by a process in another container. Not great for security. This one is straightforward and rarely something you configure directly. Docker and Kubernetes set it up automatically. You mostly need to know it exists and why: it prevents cross-container data leaks through shared memory.\nCGROUP # The cgroup namespace virtualizes the view of /sys/fs/cgroup. A process inside a cgroup namespace sees its own cgroup as the root. It doesn\u0026rsquo;t know it\u0026rsquo;s actually nested under /sys/fs/cgroup/docker/abc123\u0026hellip; on the host. Without it, a process inside a container could read /sys/fs/cgroup and see the entire host\u0026rsquo;s cgroup hierarchy — names of other containers, resource allocations, everything. Not a security risk per se (it\u0026rsquo;s read-only by default) but it\u0026rsquo;s information leakage and it breaks the abstraction.\nTIME # Added in kernel 5.6 (2020), this is the newest one. It isolates CLOCK_MONOTONIC and CLOCK_BOOTTIME. The main use case: container migration. If you live-migrate a container from a host that has been running for 200 days to a fresh host with 2 days of uptime, CLOCK_BOOTTIME would suddenly jump backward. The time namespace lets the container keep its own boottime offset, so applications inside don\u0026rsquo;t break. 
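You can peek at the offsets yourself — a quick check (assumes a kernel with time namespace support; on older kernels the file simply does not exist):

```shell
# outside any time namespace all offsets are zero;
# the file is absent on kernels older than 5.6
cat /proc/self/timens_offsets 2>/dev/null || echo timens not supported
```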
This doesn\u0026rsquo;t affect wall clock time (CLOCK_REALTIME) — that\u0026rsquo;s still shared with the host.\nIsolation is not enough # Namespaces give a process its own view of the system. But a view is all they control. Nothing stops a process inside a PID namespace from allocating 64GB of RAM, pinning all CPU cores to 100%, or writing to disk so aggressively that every other process on the host grinds to a halt. Picture a hosting provider: you\u0026rsquo;ve isolated your 100 customers with namespaces. Customer A can\u0026rsquo;t see customer B\u0026rsquo;s processes anymore. Great. But customer A\u0026rsquo;s runaway script is now consuming all available memory, and the kernel\u0026rsquo;s OOM killer starts terminating customer B\u0026rsquo;s processes to free up resources. Isolation without resource control is only half the solution.\nWhat are cgroups? # Cgroups (control groups) are a kernel mechanism that limits, accounts for, and isolates the resource usage of a group of processes. While namespaces answer \u0026ldquo;what can a process see?\u0026rdquo;, cgroups answer \u0026ldquo;how much can a process use?\u0026rdquo; A cgroup is simply a directory in a virtual filesystem. You create a group, assign resource limits by writing values into files, and then add processes to that group. The kernel enforces the limits. That\u0026rsquo;s it — no daemons, no services, just files and directories.\nCgroups v1 vs v2 # There are two versions, and this causes confusion, so let\u0026rsquo;s clear it up. Cgroups v1 (2008) organized resources into separate hierarchies — one for CPU, one for memory, one for I/O, and so on. Each hierarchy was independent. 
This created a mess: a process could be in one CPU group but a different memory group, policies conflicted, and the interaction between controllers was unpredictable.\n# v1 layout — separate trees per resource /sys/fs/cgroup/cpu/ /sys/fs/cgroup/memory/ /sys/fs/cgroup/blkio/ /sys/fs/cgroup/pids/ Cgroups v2 (2016, the default on most modern distributions) uses a single unified hierarchy. One tree, all controllers together. A process is in one group and all resource limits apply to that group. Much cleaner, much easier to reason about.\n# v2 layout — single tree, all controllers /sys/fs/cgroup/ /sys/fs/cgroup/my_group/ /sys/fs/cgroup/my_group/cpu.max /sys/fs/cgroup/my_group/memory.max /sys/fs/cgroup/my_group/io.max Today you should be using v2. Docker, Kubernetes, and systemd all support it. To check what your system uses:\n# if cgroup2 shows up in the mount list, you\u0026#39;re on v2 mount | grep cgroup2 # or check cat /proc/filesystems | grep cgroup The resource controllers # Each controller manages one type of resource:\nCpu # how much CPU time a group can use. Controlled through cpu.max. The format is $QUOTA $PERIOD in microseconds. \u0026ldquo;50000 100000\u0026rdquo; means: out of every 100ms, this group can use at most 50ms — effectively 50% of one core.\n# limit to 50% of one core echo \u0026#34;50000 100000\u0026#34; \u0026gt; /sys/fs/cgroup/my_group/cpu.max # limit to 2 full cores echo \u0026#34;200000 100000\u0026#34; \u0026gt; /sys/fs/cgroup/my_group/cpu.max # no limit echo \u0026#34;max 100000\u0026#34; \u0026gt; /sys/fs/cgroup/my_group/cpu.max Memory # hard limit on RAM usage. When a process exceeds it, the kernel\u0026rsquo;s OOM killer is invoked on that group only — it won\u0026rsquo;t touch processes outside the group.\n# limit to 512MB echo \u0026#34;536870912\u0026#34; \u0026gt; /sys/fs/cgroup/my_group/memory.max # check current usage cat /sys/fs/cgroup/my_group/memory.current IO # limits disk read/write bandwidth and IOPS per device. 
You need the device major:minor number.\n# find your disk\u0026#39;s major:minor lsblk -o NAME,MAJ:MIN # example: sda 8:0 # limit to 10MB/s write echo \u0026#34;8:0 wbps=10485760\u0026#34; \u0026gt; /sys/fs/cgroup/my_group/io.max PID # limits the number of processes a group can create. Simple but critical: without it, a fork bomb inside a container takes down the entire host.\n# max 100 processes echo \u0026#34;100\u0026#34; \u0026gt; /sys/fs/cgroup/my_group/pids.max Hands-on: building a cgroup from scratch # Let\u0026rsquo;s create a cgroup, set limits, and see them enforced:\n# create the group sudo mkdir /sys/fs/cgroup/demo # check available controllers (to delegate missing ones to child groups, write them into cgroup.subtree_control) cat /sys/fs/cgroup/cgroup.controllers # output: cpuset cpu io memory hugetlb pids rdma misc # set a memory limit of 50MB echo \u0026#34;52428800\u0026#34; | sudo tee /sys/fs/cgroup/demo/memory.max # set a CPU limit of 20% of one core echo \u0026#34;20000 100000\u0026#34; | sudo tee /sys/fs/cgroup/demo/cpu.max # set max 20 processes echo \u0026#34;20\u0026#34; | sudo tee /sys/fs/cgroup/demo/pids.max # add your current shell to the group echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs # now everything you run from this shell is limited # test memory limit: python3 -c \u0026#34; data = [] while True: data.append(\u0026#39;A\u0026#39; * 1024 * 1024) # allocate 1MB chunks \u0026#34; # this will get killed when it hits 50MB — OOM inside the cgroup only # test PID limit: for i in $(seq 1 25); do sleep 100 \u0026amp; done # after ~20 you\u0026#39;ll see: fork: retry: Resource temporarily unavailable # check stats cat /sys/fs/cgroup/demo/memory.current cat /sys/fs/cgroup/demo/pids.current # cleanup: move yourself out, then remove the group echo $$ | sudo tee /sys/fs/cgroup/cgroup.procs sudo rmdir /sys/fs/cgroup/demo How docker uses cgroups # When you run docker run --memory=512m --cpus=2 nginx, Docker: Creates a cgroup (usually under 
/sys/fs/cgroup/system.slice/docker-\u0026lt;container_id\u0026gt;.scope/) Writes 536870912 to memory.max Writes 200000 100000 to cpu.max Adds the container\u0026rsquo;s main process to cgroup.procs That\u0026rsquo;s all. No magic. You can verify it yourself:\n# start a limited container docker run -d --memory=256m --cpus=0.5 --name test nginx # find its cgroup CONTAINER_ID=$(docker inspect test --format \u0026#39;{{.Id}}\u0026#39;) cat /sys/fs/cgroup/system.slice/docker-${CONTAINER_ID}.scope/memory.max # output: 268435456 (256MB in bytes) cat /sys/fs/cgroup/system.slice/docker-${CONTAINER_ID}.scope/cpu.max # output: 50000 100000 (50% of one core) docker rm -f test Summary # Namespaces + Cgroups = Containers # A container is not a thing. There is no \u0026ldquo;container\u0026rdquo; object in the Linux kernel. What we call a container is a process running with:\nPID namespace, so it only sees its own processes NET namespace, so it has its own network stack MNT namespace, so it has its own filesystem UTS namespace, so it has its own hostname USER namespace, so root inside isn\u0026rsquo;t root outside IPC and cgroup namespaces to complete the isolation cgroup, so it can only use the resources you allow Run docker run nginx and Docker creates all of these in milliseconds. The nginx process doesn\u0026rsquo;t know it\u0026rsquo;s in a container. It thinks it\u0026rsquo;s alone on a machine. But the host kernel knows exactly what\u0026rsquo;s happening and keeps everything under control. Understanding this changes how you troubleshoot. Container eating too much memory? Check its cgroup. Network not working? Inspect the NET namespace. Process can\u0026rsquo;t see a file? Check the MNT namespace. The mystery disappears when you know what\u0026rsquo;s underneath. 
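As a final sanity check, the raw values Docker wrote in the example above can be derived by hand — a minimal sketch of the arithmetic, no root required:

```shell
# derive the cgroup v2 values behind docker run --memory=256m --cpus=0.5
MEM_MB=256
CPU_PERCENT=50               # 0.5 cores = 50% of one core
PERIOD=100000                # default cpu.max period in microseconds
MEM_BYTES=$((MEM_MB * 1024 * 1024))
QUOTA=$((PERIOD * CPU_PERCENT / 100))
echo memory.max=$MEM_BYTES   # 268435456
echo cpu.max=$QUOTA $PERIOD  # 50000 100000
```

These are exactly the numbers read back from memory.max and cpu.max in the verification above.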
","date":"20 February 2026","externalUrl":null,"permalink":"/posts/docker-under-the-hood/","section":"articles","summary":"A deep dive into how containers actually work on GNU/Linux — from namespaces to cgroups.","title":"Containers under the hood","type":"posts"},{"content":"","date":"20 February 2026","externalUrl":null,"permalink":"/tags/docker/","section":"Tags","summary":"","title":"Docker","type":"tags"},{"content":"","date":"20 February 2026","externalUrl":null,"permalink":"/tags/kernel/","section":"Tags","summary":"","title":"Kernel","type":"tags"},{"content":"","date":"20 February 2026","externalUrl":null,"permalink":"/tags/linux/","section":"Tags","summary":"","title":"Linux","type":"tags"},{"content":"","date":"20 February 2026","externalUrl":null,"permalink":"/tags/namespaces/","section":"Tags","summary":"","title":"Namespaces","type":"tags"},{"content":"","date":"20 February 2026","externalUrl":null,"permalink":"/tags/nginx/","section":"Tags","summary":"","title":"Nginx","type":"tags"},{"content":"","date":"20 February 2026","externalUrl":null,"permalink":"/tags/reverse-proxy/","section":"Tags","summary":"","title":"Reverse-Proxy","type":"tags"},{"content":"","date":"20 February 2026","externalUrl":null,"permalink":"/tags/security/","section":"Tags","summary":"","title":"Security","type":"tags"},{"content":" VAULTWARDEN INSTALLATION # A complete guide to self-hosting Vaultwarden — a lightweight, open-source password manager compatible with Bitwarden clients — using Docker rootless for better security isolation. This setup includes a dedicated system user, Nginx reverse proxy with SSL, Argon2 hashed admin token, firewall hardening with UFW and portainer agent for checking container status.\nVaultwarden\nWhy self-host a password manager? # Cloud-based solutions like 1Password or Bitwarden\u0026rsquo;s hosted service store your credentials on third-party servers — servers you don\u0026rsquo;t control. 
When you self-host Vaultwarden, your encrypted vault stays entirely on your own hardware, accessible only through your local network or VPN. You get full control over your data, no subscription fees, and the peace of mind that your passwords never leave your infrastructure.\nPrerequisites: # systemd nginx docker-rootless portainer openssl ufw argon2 wireguard (Optional) Create a dedicated user and enable linger # By default, systemd kills all user processes on logout. Enabling linger keeps the user\u0026rsquo;s services running in the background even without an active session — essential for Docker rootless containers to stay up. After enabling linger, check the contents of /run/user/USER_ID: if the user\u0026rsquo;s runtime directory exists, linger is active and systemd has started the user session. This directory contains essential runtime resources like the D-Bus socket and the Docker rootless socket. If it doesn\u0026rsquo;t exist, Docker rootless won\u0026rsquo;t be able to start.\n# add user sudo useradd -r -m -s /usr/sbin/nologin -d /home/vaultwarden vaultwarden # enable linger sudo loginctl enable-linger vaultwarden # check ls /run/user/$(id -u vaultwarden) Create directories # The data directory will contain everything Vaultwarden stores; the portainer-agent directory will contain the agent\u0026rsquo;s compose file.\nsudo -u vaultwarden mkdir -p /home/vaultwarden/vaultwarden/data sudo -u vaultwarden mkdir -p /home/vaultwarden/vaultwarden/portainer-agent # Final structure sudo tree -a /home/vaultwarden/vaultwarden/ /home/vaultwarden/vaultwarden ├── data │ ├── db.sqlite3 │ ├── db.sqlite3-shm │ ├── db.sqlite3-wal │ ├── rsa_key.pem │ └── tmp ├── docker-compose.yml ├── .env └── portainer-agent └── docker-compose.yml 4 directories, 7 files Create token and password for admin panel # Create /home/vaultwarden/vaultwarden/.env and ADMIN_TOKEN. 
Using Argon2, you can hash the admin password so it\u0026rsquo;s never stored in plain text.\nArgon2 flags explained argon2: the hashing tool \u0026quot;$(openssl rand -base64 32)\u0026quot;: generates a random 32-byte salt encoded in base64 -e: output the hash in encoded format (PHC string) -id: use the Argon2id variant (combines Argon2i and Argon2d for better security) -k 65540: memory cost in KiB (~64MB of RAM used during hashing) -t 3: time cost (3 iterations) -p 4: parallelism (4 threads) # token creation ADMIN_TOKEN=$(echo -n \u0026#34;your_admin_password\u0026#34; | argon2 \u0026#34;$(openssl rand -base64 32)\u0026#34; -e -id -k 65540 -t 3 -p 4) printf \u0026#34;ADMIN_TOKEN=\u0026#39;%s\u0026#39;\\n\u0026#34; \u0026#34;$ADMIN_TOKEN\u0026#34; | sudo -u vaultwarden tee /home/vaultwarden/vaultwarden/.env # permissions settings sudo chmod 600 /home/vaultwarden/vaultwarden/.env After you set up nginx, navigate to https://IP_ADDRESS:4080/admin and enter the password you used to generate the Argon2 hash. Create docker compose with these settings # Set SIGNUPS_ALLOWED=false after you have registered; otherwise anyone who can reach your instance can create an account. 
Add DOMAIN to the /home/vaultwarden/vaultwarden/.env file after ADMIN_TOKEN.\n# docker-compose.yml services: vaultwarden: image: vaultwarden/server:latest container_name: vaultwarden restart: unless-stopped security_opt: - no-new-privileges:true ports: - \u0026#34;5080:80\u0026#34; volumes: - ./data:/data env_file: - .env environment: - SIGNUPS_ALLOWED=true - LOG_LEVEL=warn - SENDS_ALLOWED=true - EMERGENCY_ACCESS_ALLOWED=true - WEB_VAULT_ENABLED=true - SHOW_PASSWORD_HINT=false - INVITATIONS_ALLOWED=false deploy: resources: limits: memory: 512M cpus: \u0026#34;1.0\u0026#34; healthcheck: test: [\u0026#34;CMD\u0026#34;, \u0026#34;curl\u0026#34;, \u0026#34;-f\u0026#34;, \u0026#34;http://localhost:80\u0026#34;] interval: 30s timeout: 10s retries: 3 start_period: 10s logging: driver: json-file options: max-size: \u0026#34;10m\u0026#34; max-file: \u0026#34;3\u0026#34; Configure SUBUID and SUBGID # Docker rootless uses user namespaces to map container UIDs/GIDs to unprivileged ranges on the host. The /etc/subuid and /etc/subgid files define which UID/GID ranges each user is allowed to use. 
Without these entries, the container can\u0026rsquo;t create isolated users internally and will fail to start.\n# write to file /etc/subuid and /etc/subgid sudo usermod --add-subuids 200000-265535 --add-subgids 200000-265535 vaultwarden # check if everything works grep vaultwarden /etc/subuid /etc/subgid Start docker service # Start docker and check if it runs.\n# start sudo -u vaultwarden XDG_RUNTIME_DIR=/run/user/$(id -u vaultwarden) systemctl --user start docker # check sudo -u vaultwarden XDG_RUNTIME_DIR=/run/user/$(id -u vaultwarden) systemctl --user status docker Start docker compose # # launch docker compose as user vaultwarden and recreate sudo -u vaultwarden -H /bin/bash -lc \u0026#39;export DOCKER_HOST=unix:///run/user/$(id -u vaultwarden)/docker.sock \u0026amp;\u0026amp; cd /home/vaultwarden/vaultwarden/ \u0026amp;\u0026amp; docker compose up -d --force-recreate\u0026#39; Generate certificates # Generate self-signed certificates for encrypted communication with the openssl command.\nOpenSSL flags explained -x509: generate a self-signed certificate instead of a certificate signing request -nodes: don\u0026rsquo;t encrypt the private key with a passphrase -days 3650: certificate validity (10 years) -newkey rsa:2048: create a new 2048-bit RSA private key -keyout: path for the private key file -out: path for the certificate file -subj \u0026quot;/CN=...\u0026quot;: set the Common Name without interactive prompts -addext \u0026quot;subjectAltName=...\u0026quot;: add SANs (IP addresses and DNS names) so clients accept the certificate when connecting by IP or hostname sudo mkdir -p /etc/nginx/ssl # gen certificate sudo openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \\ -keyout /etc/nginx/ssl/vaultwarden.key \\ -out /etc/nginx/ssl/vaultwarden.crt \\ -subj \u0026#34;/CN=__DOMAIN_NAME__\u0026#34; \\ -addext \u0026#34;subjectAltName=IP:__IP_ADDRESS__,DNS:__DOMAIN_NAME__\u0026#34; Configure nginx reverse proxy # Create the file /etc/nginx/sites-available/vaultwarden and create a symlink in 
/etc/nginx/sites-enabled/ dir. This lets you disable a site by simply removing the symlink, without deleting the actual configuration file.\nnginx config explained Default server block:\nlisten 4080 ssl default_server: catches all requests that don\u0026rsquo;t match any configured server name. Acts as a fallback. return 444: special nginx code that closes the connection immediately without sending any response. Drops unknown or malicious requests silently. Main server block:\nlisten 4080 ssl: listens on port 4080 with SSL enabled. server_name: defines which hostnames this server block responds to. TLS hardening:\nssl_protocols TLSv1.2 TLSv1.3: only accepts TLS 1.2 and 1.3. Older versions (1.0, 1.1) have known vulnerabilities and are deprecated. ssl_ciphers HIGH:!aNULL:!MD5:!RC4: uses only strong ciphers. Excludes those without authentication (aNULL), and broken algorithms (MD5, RC4). ssl_prefer_server_ciphers on: the server chooses the cipher, not the client. Prevents a compromised client from forcing a weak cipher. ssl_session_cache shared:SSL:10m: shared TLS session cache across nginx workers (10 MB ≈ 40,000 sessions). Avoids renegotiating TLS on every request. ssl_session_timeout 10m: cached TLS sessions expire after 10 minutes. Security headers:\nX-Content-Type-Options \u0026quot;nosniff\u0026quot;: prevents the browser from guessing the MIME type of a file. Without this, a browser could interpret a text file as HTML and execute malicious JavaScript. X-Frame-Options \u0026quot;SAMEORIGIN\u0026quot;: the site can only be loaded in an iframe by itself. Blocks clickjacking attacks. X-XSS-Protection \u0026quot;0\u0026quot;: disables the legacy browser XSS filter, which could actually introduce new vulnerabilities. Modern browsers don\u0026rsquo;t need it. Referrer-Policy \u0026quot;strict-origin-when-cross-origin\u0026quot;: when navigating to an external site, the browser sends only the origin (e.g. https://yourdomain.com), not the full URL path. 
Prevents leaking sensitive information. Strict-Transport-Security \u0026quot;max-age=31536000; includeSubDomains\u0026quot;: HSTS — tells the browser to only connect via HTTPS for the next 365 days, even if the user types http://. Prevents downgrade attacks. always: ensures the header is sent on all responses, including error pages (403, 500, etc.). Other directives:\nclient_max_body_size 525M: maximum allowed size for request body. Needed for Vaultwarden file attachments. proxy_pass: forwards requests to the Vaultwarden container on port 5080. proxy_set_header Host $host: preserves the original hostname. proxy_set_header X-Real-IP $remote_addr: passes the real client IP to the backend (otherwise Vaultwarden would always see 127.0.0.1). proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for: passes the full proxy chain. proxy_set_header X-Forwarded-Proto $scheme: tells the backend whether the original connection was HTTP or HTTPS. proxy_set_header Upgrade $http_upgrade and Connection \u0026quot;upgrade\u0026quot;: required for WebSocket connections. Vaultwarden uses WebSocket for real-time sync notifications between clients. 
# write config sudo tee /etc/nginx/sites-available/vaultwarden \u0026lt;\u0026lt;\u0026#39;EOF\u0026#39; server { listen 4080 ssl default_server; ssl_certificate /etc/nginx/ssl/vaultwarden.crt; ssl_certificate_key /etc/nginx/ssl/vaultwarden.key; return 444; } server { listen 4080 ssl; server_name __IP_ADDRESS__ __DOMAIN_NAME__; ssl_certificate /etc/nginx/ssl/vaultwarden.crt; ssl_certificate_key /etc/nginx/ssl/vaultwarden.key; ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers HIGH:!aNULL:!MD5:!RC4; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; add_header X-Content-Type-Options \u0026#34;nosniff\u0026#34; always; add_header X-Frame-Options \u0026#34;SAMEORIGIN\u0026#34; always; add_header X-XSS-Protection \u0026#34;0\u0026#34; always; add_header Referrer-Policy \u0026#34;strict-origin-when-cross-origin\u0026#34; always; add_header Strict-Transport-Security \u0026#34;max-age=31536000; includeSubDomains\u0026#34; always; client_max_body_size 525M; location / { proxy_pass http://127.0.0.1:5080; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } location /notifications/hub { proxy_pass http://127.0.0.1:5080; proxy_http_version 1.1; proxy_set_header Host $host; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection \u0026#34;upgrade\u0026#34;; } } EOF # make symlink sudo ln -s /etc/nginx/sites-available/vaultwarden /etc/nginx/sites-enabled/vaultwarden Start nginx service # Start nginx and check that it\u0026rsquo;s running.\n# check syntax sudo nginx -t # check if it\u0026#39;s already running systemctl status nginx # if it\u0026#39;s running reload sudo systemctl reload nginx # if it\u0026#39;s not running start nginx sudo systemctl enable --now nginx # see if nginx is running sudo systemctl status nginx # check possible errors sudo journalctl -u nginx -f Enable ufw Connection # If you are on the LAN, run this command.\nsudo ufw 
allow in on __LAN_INTERFACE__ from __LAN_ADDRESS__/24 to any port 4080 proto tcp comment \u0026#34;Vaultwarden from lan\u0026#34; Install certificate on client # Copy the *.crt file and install it on your client (iPhone/Mac/Android).\nsudo cp /etc/nginx/ssl/vaultwarden.crt /tmp/ cd /tmp \u0026amp;\u0026amp; python3 -m http.server 8080 On the client, go to http://__IP_SERVER__:8080/vaultwarden.crt and your client will download the certificate. Install it from your client\u0026rsquo;s settings.\nApplication settings # Install Bitwarden on your client, open it and add https://__IP_ADDRESS__:4080, then sign in with the email and password you registered on the server.\nDisable registration # Change SIGNUPS_ALLOWED=true to SIGNUPS_ALLOWED=false in docker-compose.yml and relaunch the container with docker compose up -d --force-recreate.\nInstall agent # Create docker-compose.yml using a port that is not already in use on your system.\n# check which 9xxx ports are already in use sudo ss -tulpn | grep -E \u0026#39;:9[0-9]{3}\u0026#39; # select port AGENT_PORT=\u0026#34;__AVAILABLE_PORT__\u0026#34; # content of docker-compose.yml VAULT_UID=$(id -u vaultwarden) sudo -u vaultwarden tee /home/vaultwarden/portainer-agent/docker-compose.yml \u0026lt;\u0026lt;EOF services: agent: image: portainer/agent:latest container_name: portainer-agent restart: unless-stopped security_opt: - no-new-privileges:true ports: - \u0026#34;${AGENT_PORT}:9001\u0026#34; volumes: - /run/user/${VAULT_UID}/docker.sock:/var/run/docker.sock - /home/vaultwarden/.local/share/docker/volumes:/var/lib/docker/volumes EOF sudo ufw allow from 127.0.0.1 to any port __AVAILABLE_PORT__ Launch the portainer agent.\nsudo -u vaultwarden -H /bin/bash -lc \u0026#39;export DOCKER_HOST=unix:///run/user/$(id -u vaultwarden)/docker.sock; cd /home/vaultwarden/portainer-agent/; docker compose up -d --force-recreate\u0026#39; Go to the Portainer dashboard, navigate to Environments → Add environment, select Docker Standalone → Agent, and enter your server IP with the agent port 
(e.g. IP_ADDRESS:AVAILABLE_PORT) as the Environment URL.\nBackup # The ./data directory contains the SQLite database with all your vault entries. Schedule a regular backup to avoid data loss.\n# create backup directory sudo -u vaultwarden mkdir -p /home/vaultwarden/backups # edit vaultwarden crontab sudo crontab -u vaultwarden -e Add these lines to schedule a daily backup at 3 AM and auto-delete backups older than 30 days.\n# daily backup at 3 AM 0 3 * * * tar czf /home/vaultwarden/backups/vaultwarden-$(date +\\%Y\\%m\\%d).tar.gz -C /home/vaultwarden/vaultwarden data/ # delete backups older than 30 days 0 4 * * * find /home/vaultwarden/backups -name \\\u0026#34;*.tar.gz\\\u0026#34; -mtime +30 -delete Test backup restore # After setting up the cron backup, periodically verify that the database is not corrupted using the sqlite3 command.\n# test restore (run manually to verify backups work) sudo -u vaultwarden mkdir -p /tmp/vaultwarden-restore-test sudo -u vaultwarden tar xzf /home/vaultwarden/backups/vaultwarden-$(date +%Y%m%d).tar.gz \\ -C /tmp/vaultwarden-restore-test # check that the SQLite database is valid sqlite3 /tmp/vaultwarden-restore-test/data/db.sqlite3 \u0026#34;PRAGMA integrity_check;\u0026#34; # cleanup rm -rf /tmp/vaultwarden-restore-test If integrity_check returns ok, the backup is valid and restorable. Access Vaultwarden from outside your network with WireGuard # Since the Bitwarden client only accepts a single server URL, you need a way to reach your Vaultwarden instance both from your local network and from outside. If you already have a WireGuard VPN set up, this is straightforward. Server side — enable IP forwarding and masquerading. Your WireGuard clients need to reach the LAN subnet where Vaultwarden is running. 
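Before changing anything, check whether IP forwarding is already enabled on the server; the kernel exposes the current value directly (a quick sanity check, not specific to WireGuard):

```shell
# 1 means the kernel already forwards IPv4 packets, 0 means it does not
cat /proc/sys/net/ipv4/ip_forward
```

If this already prints 1, the sysctl step that follows simply confirms the existing setting.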
Enable IP forwarding and configure NAT masquerading through iptables.\nEdit or create file /etc/sysctl.d/99-sysctl.conf.\n# allow ip forwarding sudo tee -a /etc/sysctl.d/99-sysctl.conf \u0026lt;\u0026lt;\u0026#39;EOF\u0026#39; net.ipv4.ip_forward=1 EOF # nat masquerading iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o \u0026lt;LAN_INTERFACE\u0026gt; -j MASQUERADE Iptables command explained How masquerading works with WireGuard When a WireGuard client (e.g. 10.0.0.2) wants to reach a device on your LAN (e.g. 192.168.1.x), the packet arrives at the server through the tunnel with a source IP of 10.0.0.2. The problem is that LAN devices have no route back to 10.0.0.0/24, so they don't know where to send the reply. The rule iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE solves this by rewriting the source IP of outgoing packets from the WireGuard subnet to the server's LAN IP before they leave the eth0 interface. The LAN device sees the packet as coming from the server itself, sends the reply back to the server, and the server forwards it back through the tunnel to the VPN client. In short: masquerading makes the server act as a translator between the VPN subnet and the LAN, allowing two networks that don't know about each other to communicate. Client side — route LAN traffic through the tunnel. On your WireGuard client (phone, laptop, etc.), edit the tunnel configuration and add your LAN subnet to the allowed IPs of the peer: AllowedIPs = VPN_ADDRESS/24, LAN_ADDRESS/24 Replace LAN_ADDRESS/24 with your actual LAN subnet. This tells the client to route both VPN and LAN traffic through the WireGuard tunnel when connected. Set your Vaultwarden server URL to your server\u0026rsquo;s LAN IP: https://LAN_IP:4080 From home (LAN): the client reaches the server directly — no VPN needed. From outside: connect to WireGuard first — traffic to your LAN is routed through the tunnel and masqueraded by the server. 
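Concretely, the peer section on a client could look like the sketch below, assuming the 10.0.0.0/24 VPN subnet used above and a 192.168.1.0/24 LAN; the key, endpoint and port are placeholders to replace with your own values:

```
[Peer]
PublicKey = <server-public-key>
Endpoint = <server-public-ip>:51820
# route both the VPN subnet and the home LAN through the tunnel
AllowedIPs = 10.0.0.0/24, 192.168.1.0/24
```

On the server side, the masquerade rule can also be made persistent by moving it into the WireGuard interface config with wg-quick PostUp/PostDown options, so it is added and removed together with the tunnel.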
One URL, works everywhere.\nConclusion # You now have a fully self-hosted password manager running in a security-hardened environment — Docker rootless, dedicated user, SSL encryption, Argon2 hashed admin token, and firewall rules. Your passwords never leave your network and you have full control over your data.\n","date":"20 February 2026","externalUrl":null,"permalink":"/posts/run-vaultwarden-locally/","section":"articles","summary":"How to self-host vaultwarden locally using docker-rootless and Nginx reverse proxy.","title":"Self-Host Vaultwarden","type":"posts"},{"content":"","date":"20 February 2026","externalUrl":null,"permalink":"/tags/systemd/","section":"Tags","summary":"","title":"Systemd","type":"tags"},{"content":"","date":"20 February 2026","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"20 February 2026","externalUrl":null,"permalink":"/tags/ufw/","section":"Tags","summary":"","title":"Ufw","type":"tags"},{"content":"Welcome to my blog:).\nI am an aspiring DevOps Engineer with a deep passion for the GNU/Linux ecosystem and the world of Technology.\nMy story: # My journey into the world of computer science began about 5 years ago.\nOne day, by chance, I stumbled upon a video about how to program a simple game in Python, and from that moment it was love at first sight for this world.\nI may not be the most skilled programmer or sysadmin you\u0026rsquo;ll meet, but I\u0026rsquo;m certainly passionate about this world.\nI truly love the world of open source and GNU/Linux, I\u0026rsquo;m a huge fan of bash scripting, and my biggest dream is to enter this world and make a contribution, even if small, to this amazing community.\nWhat you will find here: # Project Showcases: Deep dives into my latest DevOps/Sysadmin projects. [posts] GNU/Linux Commands Guides: Tips and tricks for terminal power users. [commands] Bash/Python/Lua/Javascript scripts: Multi-purpose Script. 
[scripts] ","date":"2 February 2026","externalUrl":null,"permalink":"/","section":"","summary":"About me.","title":"","type":"page"},{"content":"Benvenuto nel mio blog:).\nSono un giovane appassionato del mondo DevOps e del sistema operativo GNU/Linux ed in generale del mondo della Tecnologia/Informatica.\nLa mia storia: # Il mio viaggio nel campo dell\u0026rsquo;informatica è iniziato circa 5 anni fa. Un giorno, per caso, mi sono imbattuto in un video su come programmare un semplice gioco in Python, e da quel momento è stato amore a prima vista. Potrei non essere il programmatore o l\u0026rsquo;amministratore di sistema più esperto che incontrerete, ma nutro una passione autentica. Amo profondamente il mondo dell\u0026rsquo;open source e di GNU/Linux, sono un grande fan dello scripting bash e il mio sogno più grande è dare un contributo, anche piccolo, a questa straordinaria comunità.\nCosa troverai sul mio Blog: # Articoli: Articoli che scriverò nel tempo libero. [articoli] GNU/Linux Commands Guides: Guide ai comandi principali di GNU/Linux/Unix. [commands] Bash/Python/Lua/Javascript scripts: Script personali. [scripts] ","date":"2 February 2026","externalUrl":null,"permalink":"/it/","section":"","summary":"Benvenuto nel mio blog:).\nSono un giovane appassionato del mondo DevOps e del sistema operativo GNU/Linux ed in generale del mondo della Tecnologia/Informatica.\nLa mia storia: # Il mio viaggio nel campo dell’informatica è iniziato circa 5 anni fa. Un giorno, per caso, mi sono imbattuto in un video su come programmare un semplice gioco in Python, e da quel momento è stato amore a prima vista. Potrei non essere il programmatore o l’amministratore di sistema più esperto che incontrerete, ma nutro una passione autentica. 
Amo profondamente il mondo dell’open source e di GNU/Linux, sono un grande fan dello scripting bash e il mio sogno più grande è dare un contributo, anche piccolo, a questa straordinaria comunità.\n","title":"","type":"it"},{"content":" In questa pagina puoi trovare i miei articoli personali. # ","externalUrl":null,"permalink":"/it/posts/","section":"","summary":"In questa pagina puoi trovare i miei articoli personali. # ","title":"Articoli","type":"it"},{"content":"","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":" On this page you will find info about the most used Linux commands. # Remember to launch every command with -h/\u0026ndash;help or man (1) command to display more info about the options you can use. ","externalUrl":null,"permalink":"/commands/","section":"Commands","summary":"On this page you will find info about the most used Linux commands. # Remember to launch every command with -h/–help or man (1) command to display more info about the options you can use. ","title":"Commands","type":"commands"},{"content":" In questa pagina imparerai i comandi principali di GNU/Linux. # Ricordati di lanciare ogni comando con -h/\u0026ndash;help o man (1) command per avere maggiori informazioni sul funzionamento. ","externalUrl":null,"permalink":"/it/commands/","section":"","summary":"In questa pagina imparerai i comandi principali di GNU/Linux. # Ricordati di lanciare ogni comando con -h/–help o man (1) command per avere maggiori informazioni sul funzionamento. ","title":"Commands","type":"it"},{"content":" In questa pagina troverai i miei script. # Ricordati di lanciare chmod +x per ogni script, non lanciare mai gli script con sudo e leggi (sempre) prima il codice sorgente! 
","externalUrl":null,"permalink":"/it/scripts/","section":"","summary":"In questa pagina troverai i miei script. # Ricordati di lanciare chmod +x per ogni script, non lanciare mai gli script con sudo e leggi (sempre) prima il codice sorgente! ","title":"Scripts","type":"it"},{"content":" On this page you will find info about my personal scripts. # Remember to chmod +x every script, don\u0026rsquo;t run scripts with sudo, and read and understand the source code before launching them. ","externalUrl":null,"permalink":"/scripts/","section":"Scripts","summary":"On this page you will find info about my personal scripts. # Remember to chmod +x every script, don’t run scripts with sudo, and read and understand the source code before launching them. ","title":"Scripts","type":"scripts"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"}]