This document describes my setup for a simple home server running GNU/Linux (Ubuntu 18.04). The server is based on Intel NUC hardware and runs a DNS server, NFS server, Nextcloud, SOCKS5-over-VPN proxy server, Transmission (over VPN), Git server and some other services (see Server Overview for the detailed list).
The document is always updated to reflect the current state of the software. The revision history can be found in the changelog [1].
The document is hosted via GitHub Pages and the source is available at https://github.com/ovk/silverbox.
1. Introduction
1.1. Document Overview
Please note that this document is not an exhaustive guide on building a home server that deeply explores all the different options and possibilities and explains every single step in great detail. It is more or less just documentation of the setup I chose, and it can be opinionated at times. Whenever there was a choice, I leaned towards secure and simple solutions, rather than fancy and "feature-rich" ones.
Following this guide is probably not the most efficient way to build a home server, and you’d be better off finding some Ansible playbooks that will do everything for you, or even just a virtual machine image with all the services already configured. However, I think reading through this guide can be useful if you want to understand how everything works, especially if you are planning on running and maintaining your server for a long time.
While some decisions I had to make were influenced by the specific hardware I use (see Hardware section), where possible this document tries to stay hardware agnostic. You don’t have to use the exact same hardware as I did (you can even do it all in a virtual machine); I just wouldn’t recommend using a Raspberry Pi, for performance reasons.
1.1.1. Required Knowledge
This document expects the reader to have some GNU/Linux experience and at least some knowledge in the following areas:
- Be comfortable in the GNU/Linux terminal and familiar with SSH.
- Understand simple shell scripting (e.g. sh or bash scripts).
- Be familiar with basic GNU/Linux command line utilities, such as: sudo, cp/mv/rm, find, grep, sed etc.
- Be familiar with man pages and be able to explore them on your own.
- Be at least somewhat familiar with Docker.
- Be at least somewhat familiar with systemd.
The document doesn’t try to explain everything, as I believe that would be simply impractical and there is already a lot of good documentation written. Instead, it provides references to the existing documentation where needed.
1.1.2. Structure
The document is split into a few top-level chapters, where each chapter (with a few exceptions) represents a separate, standalone feature, for example: NFS Server, Nextcloud, Torrent Client etc. While it is possible to skip some chapters, some chapters depend on others.
The document is structured more or less in the order in which the configuration is performed.
1.1.3. Formatting and Naming Conventions
In this document, parts of a sentence that require extra attention are marked with a bold font.
Inline system commands, arguments and file names are formatted with a monospace font.
Commands that need to be executed in a shell are formatted in monospace blocks. Command output is formatted as italic (if there is any output).
For example:
some-command --with --some --arguments example command output
When a file needs to be edited, the file content is formatted in a similar monospace block. However, in this case the block will also have a header with the file name, indicating which file is edited:
File content goes here
By default, all parameters that are specific to the concrete setup are displayed as placeholders in curly braces,
for example: {SERVER_IP_ADDR} is what you should replace with your server IP address.
However, you can generate a version of this document where all such placeholders are replaced with the actual values you want;
more on this in the next section.
There are also few blocks that are used to draw attention to a specific statement:
This is a note. |
This is some useful tip. |
This is very important point. |
This is a warning. |
In the document, the server itself is referred to as either "the server" or "silverbox". [1] When discussing client(s) that communicate with the server, the client device is usually referred to as "client PC", even though it could be a laptop, tablet, smartphone or any other device.
1.2. Generating Custom Document
This document contains a lot of placeholder values that will have to be replaced with the actual values, specific to your setup. Some examples of such values are host names, IP addresses, subnets, usernames etc. It may be cumbersome to manually keep track of what needs to be replaced with what, especially when copying scripts or configuration files.
Fortunately, you can generate your own version of this document where all placeholders will be automatically replaced with the actual values that you want.
This can easily be done in three steps using Docker:
- Get the source code of this document from Git:
  git clone https://github.com/ovk/silverbox.git
- Edit the silverbox/parameters.adoc file and replace the placeholder values with the values you want.
- Run a disposable Docker container with Asciidoctor to compile the document to the desired output:
  - For HTML output:
    docker run -it --rm -v $(pwd)/silverbox:/documents asciidoctor/docker-asciidoctor asciidoctor silverbox-server.adoc
  - For PDF output:
    docker run -it --rm -v $(pwd)/silverbox:/documents asciidoctor/docker-asciidoctor asciidoctor-pdf silverbox-server.adoc

This should produce an output file (silverbox-server.html or silverbox-server.pdf) in the silverbox directory,
with all the placeholders replaced with your values.
Now you can mostly just copy-paste code snippets without having to manually edit them first.
1.3. Getting Involved
If you find a typo or error in this document, or if you think that some part could be explained in more details, updated, or improved in any way - please open an issue at https://github.com/ovk/silverbox/issues or make a pull request.
1.4. License
This document is licensed under Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0).
For more details see: https://creativecommons.org/licenses/by-nc/4.0/
2. Server Overview
2.1. Goals
The main goals that affected the overall design of the server are:
- Simplicity
-
Where possible, this document prefers simple solutions, i.e. solutions that require less configuration, fewer components and minimal bloat. Simple solutions are often more secure (due to a limited attack surface), easier to understand and maintain, easier to extend, and use fewer resources.
- Stability
-
The server design is heavily focused on stability of the configuration and setup. While there are many new "fancy" and "feature-rich" tools and programs that could be used, I find that the vast majority of them have a very short life span, minimal to no backward/forward compatibility or meaningful upgrade path, often poor support and documentation, a lack of security updates etc. Instead, the choices were made in favor of mature, stable, proven software that doesn’t break compatibility every minor release.
- Security
-
Major considerations were given to security when designing the server setup. While I think the solution in general is secure enough for home use, I still would not recommend keeping any sensitive information on the server.
- Low Maintenance
-
Another major focus was keeping the server maintenance effort as low as possible. This goes hand in hand with stability, and also relies on tools and automation to minimize routine maintenance.
It is important to mention that this server is not a:
- NAS
-
It is not intended to be used for safe storage of massive amounts of data (at least not in the described configuration).
- Media Server
-
It is not running anything related to a media server: no Kodi, not even an X server. Turning it into a media server is definitely possible, though (I know of a person who added containerized Kodi on top of this guide and took advantage of the NUC’s infrared port).
- Proprietary Server
-
It is not running any proprietary, closed-source solutions, even if they are free (free as in beer). So you won’t find anything like Plex or a Minecraft server here, but you could definitely add these on your own if you wish.
2.2. High Level Overview
The diagram below shows the server’s place in the home network:
[Diagram: client PCs and the silverbox server are on the LAN; a WiFi router (providing DHCP) connects the LAN to the WAN. The silverbox provides DNS and other services to the LAN.]
The server is on the Local Area Network (together with regular clients such as PCs, smartphones etc.) and it acts as a DNS server (apart from all the other services and programs it runs). It is separated from the internet by the router (and thus sits behind NAT).
Of course, this is just one of the options (but probably one of the most common ones) and it can be adjusted to suit your needs.
In my case, all the clients are also Linux-based. This is not a requirement, and you may have clients running Windows, macOS or another OS, but in that case the client configuration will obviously be different. In some parts of this document it is assumed that your client is an x86 64-bit PC running Ubuntu Desktop 18.04. |
2.3. Software and Services
This is the list of services running on the server that are described in this document:
- Unbound as a forwarding DNS server that forwards queries to the DNS server of your choice and uses DNS-over-TLS and DNSSEC for extra security and privacy.
- NFS server secured with Kerberos (clean NFSv4-only server).
- Nextcloud accessible over HTTPS with Let’s Encrypt certificates (renewed automatically using Certbot with DNS challenge).
- Transmission BitTorrent client that communicates only over a VPN connection.
- SOCKS5 proxy server that proxies traffic securely over a VPN connection.
- Git server for hosting Git repositories.
- Borg and Rclone for automatic encrypted incremental backups (both on-site and off-site).
- Reverse proxy server with HTTPS (using a wildcard certificate) and basic authentication to access internal services.
- Firefly III for personal finances management.
- Monit for system monitoring and notifications.
- Script to automatically update the DNS record pointing to the server’s public IP address (in case of dynamic IP).

The server also runs:

- SSH server.
- Docker engine (as most of the workloads run as containers).
2.4. Hardware
Originally, the server was placed on a shelf inside a small apartment, so the main criteria for the hardware were low noise (i.e. no fans, HDDs or other moving parts), nice design (no open PCBs or wires) and reasonable power consumption for 24/7 operation.
Below is the list of hardware that was originally used for the server (just for reference).
2.4.1. Computer
Intel NUC (Next Unit of Computing) BOXNUC6CAYH Barebone Systems.
It uses Intel® Celeron® CPU J3455 (4 cores, up to 2.3 GHz).
2.4.2. Storage
Main disk: WD Blue 500GB SATA SSD.
External backup disk: Samsung T5 500GB Portable SSD (USB 3.1 Gen2).
2.4.3. Memory
8GB 1600MHz DDR3L SODIMM KVR16LS11/8.
2.4.4. UPS
While strictly speaking it is not a part of the server, and technically not required, a UPS is highly recommended.
The UPS that was used: Eaton 3S US550VA 5-15P (8) 5-15R 4-UPS.
When choosing UPS, make sure you buy one that has decent Linux support. Compatibility with Network UPS Tools can be checked on the NUT hardware compatibility list [2]. |
2.4.5. Additional Hardware
You will also need a monitor, USB keyboard, patch cord and USB flash drive for the OS installation and initial configuration. Later on, you will probably want to keep a separate keyboard attached to the server, to type the LUKS password after reboots.
2.4.6. Power Consumption
Since the server doesn’t have power-hungry devices attached (such as HDDs or monitors), it is fairly low power. I didn’t measure power consumption precisely, but based on the rate of UPS battery discharge my guess was around 8-12W at idle. However, as reported by a Reddit user with similar hardware, the power consumption at idle is around 6W.
3. Basic Configuration
3.1. OS Installation
Ubuntu Server 18.04 was chosen as the operating system for the server. The reasoning behind this choice is that it is a pretty mature and stable server OS, with a lot of software and documentation available due to its popularity.
The Ubuntu installation itself is pretty straightforward, and there are many excellent guides available, so it is not described in detail in this document; only some important points are covered here.
At the moment of writing, the default Ubuntu image was using a new installer that was missing quite a few important features, including full disk encryption. As a workaround, don’t download the default Ubuntu Server installer; instead, go to the releases page and download the "Alternative Ubuntu Server installer". This installer should have full support for LVM and disk encryption. Hopefully, the new installer will eventually be updated and the missing features will be added. It’s a good idea to first try the installation in a virtual machine before installing on the real server. |
While an internet connection is not required during installation, it makes things a bit easier and more convenient, so make sure you plug the server into a working internet connection. |
Roughly, the installation process looks like this:
- Create a bootable USB flash drive with the Ubuntu installer image.
- Connect the server to power, monitor, network and keyboard, insert the USB stick and power it on. The Ubuntu installation will begin.
- Partition the disk manually according to your needs. For example, the following partitioning scheme was used:
  - Bootable EFI system partition
  - 2GB ext4 partition for /boot (to have some extra space for old kernels)
  - dm-crypt partition with LVM on it, in the following configuration:
    - One volume group (VG) with:
      - Root logical volume (LV) for / with an ext4 file system
      - Swap logical volume (LV). The size needs to be greater than the size of RAM if you need hibernation support.
  - Some unallocated (free) space to (hopefully) prolong the life of the SSD.
- Make sure you enable automatic security updates during installation.
No additional software packages were chosen during installation.
3.2. Post-Installation Configuration
3.2.1. Set Correct Timezone
If for whatever reason the correct time zone was not set during installation, set it now with:
sudo timedatectl set-timezone {YOUR_TIMEZONE}
Where {YOUR_TIMEZONE}
is your desired timezone (for example, Europe/Athens
).
The list of available time zones can be obtained with:
timedatectl list-timezones
3.2.2. Disable Unused Hardware
Optionally, you can disable hardware that you are not planning to use, for security, privacy, boot speed, power saving and other reasons. Some examples of what you can disable are given below.
WiFi
The wireless adapter can easily be disabled in the BIOS.
After disabling the wireless adapter your wired adapter name will likely change
(due to the way Linux enumerates devices).
In this case, network connectivity can be fixed by editing the file /etc/netplan/01-netcfg.yaml
and updating the wired interface name there.
|
Bluetooth
Disabling the Bluetooth adapter wasn’t possible with the default NUC BIOS and required a BIOS update. After the update, Bluetooth can be disabled in the BIOS.
Digital Microphone
The digital microphone can be disabled in the BIOS as well.
3.2.3. Disable IPv6
Unless you are planning on using IPv6, it is a good idea to disable it for security reasons. The rest of this document assumes IPv6 is disabled and thus all configuration is for IPv4 only.
To disable IPv6 edit the file /etc/default/grub
and add (or set) the following parameters:
GRUB_CMDLINE_LINUX="ipv6.disable=1"
GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1"
Update Grub configuration:
sudo update-grub
And then reboot the system.
To check that IPv6 is disabled you can grep for IPv6 in the dmesg
output:
dmesg | grep IPv6 IPv6: Loaded, but administratively disabled, reboot required to enable
3.2.4. Configure Static IP
To make a lot of things easier and more predictable, a static network configuration is used for the server instead of DHCP. This also helps to prevent the DHCP server from accidentally changing the network configuration (especially DNS).
From now on in this document when talking about the server network configuration the following conventions will be used:
- Server IP address: {SERVER_IP_ADDR}
- Default gateway: {SERVER_DEFAULT_GATEWAY}
First, choose an IP address ({SERVER_IP_ADDR}) and create a DHCP reservation for it on the DHCP server (most likely your router).
To update network configuration, edit the /etc/netplan/01-netcfg.yaml
file and update
the ethernets
section in it so that it matches desired network configuration:
ethernets:
enp2s0: (1)
addresses: [ {SERVER_IP_ADDR}/24 ] (2)
gateway4: {SERVER_DEFAULT_GATEWAY}
nameservers:
addresses: [ 127.0.0.1 ] (3)
dhcp4: no
1 | This is the wired interface name, it may be different on your system.
To find out what name to use check the ifconfig command output. |
2 | Replace this with your actual server IP address and your subnet size in bits (most likely 24). |
3 | Put your actual DNS server address here.
This is temporary, and will be set back to 127.0.0.1 once DNS server is configured. |
To apply new configuration do:
sudo netplan apply
You can also reboot the system to double check that everything works.
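If you want to verify the static configuration without a full reboot, a quick sanity check (a sketch; enp2s0 is the example interface name from the netplan file above and may differ on your system) is to inspect the assigned address and the default route:

# Show the IPv4 address assigned to the wired interface
ip -4 addr show dev enp2s0
# Show the default route (should point to {SERVER_DEFAULT_GATEWAY})
ip route show default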
3.2.5. Disable ICMP Redirects and Source Routing
To disable ICMP redirects and IP source routing (for security reasons), edit the /etc/sysctl.conf
file
and uncomment the following lines:
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
The changes will be applied after reboot.
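If you prefer not to wait for a reboot, the same settings can also be applied and checked immediately (a minimal sketch; the file edit above is still what makes them persistent):

# Re-read /etc/sysctl.conf and apply the values
sudo sysctl -p
# Verify that the parameters are now set to 0
sysctl net.ipv4.conf.all.accept_redirects net.ipv4.conf.all.send_redirects net.ipv4.conf.all.accept_source_route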
3.2.6. Remove Unneeded Software
Now is a good time to remove or disable any software that you are not planning to use.
3.2.7. Uninstall Snap
There are many, many issues with Snap (aka snapd
) which I’m not going to describe here.
Unless you really need it, you can remove it with:
sudo apt autoremove --purge snapd
3.2.8. Uninstall LXC/LXD
Unless you are planning to use it, you can also remove LXC/LXD to free up some system resources:
sudo apt autoremove --purge lxd lxcfs liblxc-common lxd-client
It can always be installed later on if required.
3.3. SSD Optimization
Since the server is equipped with an SSD, some configuration needs to be done in order to optimize SSD performance and minimize the number of writes (thus prolonging the SSD’s life).
A lot of useful information about SSD-related configuration in Linux can be found in the Arch Wiki article about SSD [3].
3.3.1. TRIM
The TRIM command informs an SSD about which blocks are no longer used and can be recycled. TRIM can improve SSD write performance.
First make sure that your SSD supports TRIM.
One way to do it is to check the output of the following command (replace /dev/sda
with your disk device):
sudo hdparm -I /dev/sda | grep TRIM * Data Set Management TRIM supported (limit 8 blocks)
If your disk doesn’t support TRIM, you can skip the rest of this section. |
TRIM needs to be enabled on all abstraction layers, which in the case of the silverbox server means on the file system level, LVM level and dm-crypt level.
Enabling TRIM on File System Level
Periodic file system TRIM should be enabled by default in Ubuntu 18.04.
There should be a Systemd timer that performs fstrim every week.
To check its status, do:
systemctl status fstrim.timer
Logs from previous runs can be viewed with:
journalctl -u fstrim.service
You can run the service manually and inspect the logs to make sure it works. To run the service:
sudo systemctl start fstrim.service
Enabling TRIM on LVM Level
Edit the /etc/lvm/lvm.conf
file and set issue_discards
parameter to 1 (it should be under the devices
section):
...
devices {
        ...
        issue_discards = 1
        ...
}
...
Most likely it will already be there and set to 1 so you just need to double check.
Note that the issue_discards
parameter here only controls whether to send discards during operations on LVM volumes,
such as resizing or removing.
Discards for deleted files should be passed through by default.
Enabling TRIM on dm-crypt Level
Edit the /etc/crypttab
file and add discard
option to options for your device.
Below is an example for sda3
:
sda3_crypt UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX none luks,discard
Most likely it will already be there so you just need to double check.
Verifying that TRIM Works
A procedure for TRIM verification is described in this excellent StackOverflow answer: https://unix.stackexchange.com/questions/85865/trim-with-lvm-and-dm-crypt/85880#85880.
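As a quick first check (not a substitute for the full end-to-end verification from the linked answer), you can run fstrim manually and confirm that it reports an amount of trimmed data instead of an error:

# Trim the root file system once and print how much was trimmed
sudo fstrim -v /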
3.3.2. Write Caching
Write caching can greatly improve performance and minimize writes on SSD. It should be enabled by default.
To check if write caching is enabled, check the output of:
sudo hdparm -W /dev/sda (1) /dev/sda: write-caching = 1 (on)
1 | Replace /dev/sda with your disk device. |
Or alternatively:
sudo hdparm -i /dev/sda | grep WriteCache (1) ... WriteCache=enabled ...
1 | Replace /dev/sda with your disk device. |
3.3.3. Swappiness
Lowering system swappiness can increase the threshold at which memory pages will be swapped to disk, and thus potentially limit the number of writes to the SSD. More about swappiness here: https://en.wikipedia.org/wiki/Paging#Swappiness.
Current (default) system swappiness can be checked with:
sysctl vm.swappiness vm.swappiness = 60
If you decide to change it, this can be done by editing the /etc/sysctl.conf file and adding (or changing) the vm.swappiness parameter, for example:
... vm.swappiness = 40 ...
The change will take effect after reboot.
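If you don’t want to wait for a reboot, the new value can also be applied at runtime (a sketch; use the same value you put into /etc/sysctl.conf):

# Apply the new swappiness value immediately
sudo sysctl vm.swappiness=40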
3.3.4. Mounting /tmp as tmpfs
To minimize writes to SSD even further, the /tmp
directory can be mounted as tmpfs
(aka RAM file system) mount.
This can be either done with Systemd tmp.mount
unit or by editing fstab
.
According to the Systemd documentation (at least at the moment of writing), using the fstab
is the preferred approach.
To automatically mount /tmp
as tmpfs
, add the following line to the /etc/fstab
file:
tmpfs /tmp tmpfs defaults,noatime,nosuid,nodev,noexec,mode=1777,size=2G 0 0
In this example, its size is limited to 2G, but you can adjust it if needed.
The noexec option can sometimes cause issues with programs that put something under /tmp and then try to execute it.
If this happens, you can remove this option.
|
Reboot the system and verify the output of df -h
to make sure /tmp
is now mounted as tmpfs
with the limit you’ve set.
It should contain line similar to this:
tmpfs 2.0G 0 2.0G 0% /tmp
3.3.5. Monitoring Tools
There are some tools that are useful for SSD monitoring, and will be used in the next sections.
The first one is hddtemp
, that is used to monitor disk temperature.
To install it do:
sudo apt install hddtemp
The second one is smartmontools
, that is used to monitor SSD wear (and other parameters) via SMART.
To install it do:
sudo apt install smartmontools --no-install-recommends
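As a quick check that smartmontools can talk to your disk, you can print the overall health status and the SMART attributes (a sketch; replace /dev/sda with your disk device — the exact wear-related attribute names vary between SSD vendors):

# Overall health self-assessment
sudo smartctl -H /dev/sda
# Full list of SMART attributes (look for wear and temperature related ones)
sudo smartctl -A /dev/sda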
3.4. UPS Configuration
For a graceful server shutdown in case of a power outage, the server needs to be able to communicate with the UPS to get its status. Usually, the UPS is connected to the server over USB and Network UPS Tools (NUT) is used to get the UPS status.
3.4.1. Network UPS Tools (NUT) Installation
NUT can be easily installed by doing:
sudo apt install nut
3.4.2. NUT Configuration
The NUT configuration consists of three different parts:
- Configuring the driver (talks to the UPS)
- Configuring the server (the upsd daemon, talks to the driver)
- Configuring the monitor (upsmon, monitors the server and takes action based on the received information)
Configuring NUT Driver
The driver type for NUT can be checked on the NUT Compatibility Page [2].
In the case of Eaton 3S UPS the driver is usbhid-ups
.
Edit the /etc/nut/ups.conf
file and append section for your UPS, for example:
[eaton3s]
        driver=usbhid-ups
        port=auto
Start the driver:
sudo upsdrvctl start
Configuring NUT Server
General upsd
configuration is done by editing the /etc/nut/upsd.conf
file, if necessary.
Edit the /etc/nut/nut.conf
file and change MODE
to standalone
:
MODE=standalone
A user for the monitor needs to be added to the /etc/nut/upsd.users
file (replace {SOME_PASSWORD}
with some random password):
[upsmon]
        password = {SOME_PASSWORD}
        upsmon master
The upsd
server can now be started with sudo systemctl start nut-server
command.
Once started successfully, the UPS info can be queried with:
upsc eaton3s
Configuring NUT Monitor
Change the MONITOR
value in the /etc/nut/upsmon.conf
file like so (use the same password you used in the previous step):
MONITOR eaton3s@localhost 1 upsmon {SOME_PASSWORD} master
Start the monitor service with sudo systemctl start nut-monitor
command.
At this point the system should be configured to do a graceful shutdown when the UPS is on battery and the battery reaches the battery.charge.low level.
The battery.charge.low value can be obtained with upsc eaton3s | grep 'battery.charge.low'.
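If the threshold reported by the UPS is not to your liking, NUT allows overriding such variables from the driver configuration. A hedged sketch (the override.* mechanism is described in the ups.conf man page; 30 is just an example value):

[eaton3s]
        driver=usbhid-ups
        port=auto
        # Treat 30% as the low-battery threshold instead of the UPS default
        override.battery.charge.low = 30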
3.4.3. Useful References
Some useful information to read about NUT:
-
Arch Wiki article about NUT: https://wiki.archlinux.org/index.php/Network_UPS_Tools
-
NUT Documentation: https://networkupstools.org/docs/user-manual.chunked/ar01s06.html
3.5. OpenSSH Server Configuration
To access the server over SSH, an OpenSSH server needs to be installed and configured.
Make sure the server is on an isolated network or directly connected to the PC from which it will be accessed, and keep it isolated until it is properly hardened and secured. |
3.5.1. OpenSSH Server Installation
OpenSSH server can be installed with:
sudo apt install openssh-server
3.5.2. Generating and Copying Key
Only key authentication will be enabled for the SSH server (with password and other authentication methods disabled). However, it is convenient to copy the access key over SSH while password authentication is still enabled. The following steps need to be performed on the client PC.
Generate key:
ssh-keygen -t ed25519 -f ~/.ssh/silverbox-key -C "Silverbox key"
Copy the generated key to the server (where {SERVER_USER} is assumed to be your username on the server):
ssh-copy-id -i ~/.ssh/silverbox-key {SERVER_USER}@{SERVER_IP_ADDR}
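You can now check that key-based login works before hardening the configuration (a quick sanity check; password authentication will be disabled in the next step):

ssh -i ~/.ssh/silverbox-key {SERVER_USER}@{SERVER_IP_ADDR}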
3.5.3. SSH Server Configuration & Security Hardening
The next step is to perform some basic configuration and security hardening for the SSH server.
SSH server is configured by modifying the /etc/ssh/sshd_config
file.
Change (or add, if not present) the following parameters in this file:
AddressFamily inet
Protocol 2
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
KexAlgorithms curve25519-sha256@libssh.org
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
LogLevel VERBOSE
LoginGraceTime 1m
PermitRootLogin no
MaxAuthTries 4
MaxSessions 5
ClientAliveCountMax 2
ClientAliveInterval 60
TCPKeepAlive no
AuthenticationMethods publickey
PubkeyAuthentication yes
HostbasedAuthentication no
IgnoreRhosts yes
PasswordAuthentication no
PermitEmptyPasswords no
ChallengeResponseAuthentication no
X11Forwarding no
AllowAgentForwarding no
AllowTcpForwarding local
Banner none
DebianBanner none
AllowUsers {SERVER_USER}
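Before restarting the service it is a good idea to validate the new configuration, since a broken sshd_config can lock you out of the server:

# Test mode: checks the configuration file for syntax errors and exits
sudo sshd -t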
Restart SSH server service for the changes to take effect:
sudo systemctl restart sshd
3.5.4. Additional Resources
Below are some useful resources related to SSH server configuration:
-
Mozilla SSH server configuration guide: https://infosec.mozilla.org/guidelines/openssh
-
Good overview of SSH server cryptography: https://stribika.github.io/2015/01/04/secure-secure-shell.html
Additionally, you can run the SSH audit script [4] against your SSH server. It hasn’t been updated since 2016, though.
3.6. Basic Firewall Configuration
By default, Ubuntu will already have Uncomplicated Firewall (UFW) installed, but it will be disabled (inactive).
Its status can be checked with:
sudo ufw status verbose
Before activating the firewall, first enable rate limiting for SSH (so that the server can still be accessed over SSH).
Rate limiting will allow access over the port, but will limit connection attempts to 6 attempts within 30 seconds.
For a more flexible configuration (e.g. a different number of attempts or duration) iptables has to be used directly,
as UFW doesn’t allow this kind of flexibility.
To enable SSH access with rate limiting do:
sudo ufw limit ssh comment "SSH Access"
You can optionally disable UFW logs (sometimes they can flood syslog if you have some service discovery protocols running):
sudo ufw logging off
Now UFW can be enabled with:
sudo ufw enable
In this configuration UFW will start automatically and block all incoming connections (by default) but allow (rate limited) SSH. All outgoing connections are allowed.
4. Monitoring
A monitoring system is needed to monitor the server’s hardware, system, environment and services, and to send notifications if something goes wrong or looks suspicious. As mentioned in the Goals section, the server needs to be as automated as possible, so the monitoring system needs to be as autonomous as possible and not require any supervision or maintenance.
A number of solutions were considered for the server monitoring (and I tried quite a few of them), but most of them were discarded for being too complicated to install and configure (and thus overkill for a home server), for having too many dependencies and too much bloat, for being outdated and poorly supported, or the opposite - changing at an insane pace. Some of the solutions that were considered are: Nagios, Zabbix, Netdata, Munin, Cacti and some combination of Prometheus/Grafana/Collectd/Graphite/Telegraf/InfluxDB.
Eventually, Monit was chosen as the main monitoring tool. While it may seem very limited at first glance, it is actually quite a powerful tool that can monitor a wide range of parameters and react to specific conditions. Monit is easy to extend and customize, it is maintained, very lightweight, with minimal dependencies and very low runtime overhead. The only significant downside is that it has no support for time series, so it is not possible to see historical data or graphs, or to analyze trends.
In addition to Monit, a simple script was created to generate and deliver regular emails with system summary information.
4.1. Monit
This section describes Monit installation and configuration for monitoring basic system parameters (such as CPU and RAM utilization, temperatures, etc.). Monitoring of specific services (e.g. DNS server, Docker, Nextcloud) is described in the corresponding sections. For example, section on DNS server setup describes how to add DNS server monitoring to Monit.
4.1.1. Installation
Monit can be installed directly from the repositories:
sudo apt install monit
This will create and start Systemd service for Monit. Its status can be checked with:
systemctl status monit
After installation, create a directory where custom scripts for Monit will be stored:
sudo mkdir -p /usr/local/etc/monit/scripts
4.1.2. Configuration
The main Monit configuration file is located at /etc/monit/monitrc
.
The defaults there are pretty sensible, but of course you can adjust them if you need to.
For example, to set check interval and startup delay to one minute:
set daemon 60 with start delay 60
CPU Temperature Script
System temperatures can be read directly from procfs
, but using lm-sensors
package makes it more convenient to parse.
To install the lm-sensors
package:
sudo apt install lm-sensors
As root
user, create CPU temperature monitoring script /usr/local/etc/monit/scripts/cpu_temp.sh
with the following content:
#!/bin/sh
RET=`sensors -Au 2> /dev/null | sed -n 's/_input//p' | sed 's/.\+:\s\+\([0-9]\+\).\+/\1/' | sort -n | tail -1`
exit $RET
And mark it as executable:
sudo chmod u+x /usr/local/etc/monit/scripts/cpu_temp.sh
This script simply finds the maximum temperature reported by all temperature sensors and returns it as the exit code. If you are using different hardware, you may need to modify this script.
Disk Temperature Script
As root
user, create disk temperature monitoring script /usr/local/etc/monit/scripts/disk_temp.sh
with the following content:
#!/bin/sh
UUID="$1"
DEV=$(readlink -f /dev/disk/by-uuid/"$UUID")
RET=`hddtemp -n SATA:"$DEV" 2> /dev/null`
exit $RET
And mark it as executable:
sudo chmod u+x /usr/local/etc/monit/scripts/disk_temp.sh
Similarly to the previous script, this one also reads the temperature and returns it as the exit code.
For some SSDs (notably Samsung EVO ones), the temperature data is returned in a different SMART field.
Some extra configuration may be required for hddtemp to read temperature data from such drives.
More details can be found in the Arch Wiki article on hddtemp [5].
|
UPS Battery Charge Script
As root
user, create UPS battery monitoring script /usr/local/etc/monit/scripts/ups_charge.sh
with the following content:
#!/bin/sh
RET=`upsc eaton3s battery.charge 2> /dev/null` (1)
exit $RET
1 | Replace eaton3s with your UPS name, exactly as it was configured in NUT. |
And mark it as executable:
sudo chmod u+x /usr/local/etc/monit/scripts/ups_charge.sh
This script simply returns the UPS battery charge value (in percent) as the exit code.
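Since all three scripts report their reading through the exit code, each one can be sanity-checked manually before wiring it into Monit (a sketch; replace {PART_UUID} with your partition UUID, the same value used in the Monit configuration below):

sudo /usr/local/etc/monit/scripts/cpu_temp.sh; echo $?
sudo /usr/local/etc/monit/scripts/disk_temp.sh {PART_UUID}; echo $?
sudo /usr/local/etc/monit/scripts/ups_charge.sh; echo $?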
Basic System Monitoring
To configure basic system monitoring with Monit, as root
user create /etc/monit/conf.d/10-system
file
and set its permissions to 600, so that only root
can read and edit it:
sudo touch /etc/monit/conf.d/10-system
sudo chmod 600 /etc/monit/conf.d/10-system
Below is a working example of what you can put in this file to establish basic system monitoring with Monit. Read this file carefully, replace placeholders with the actual values and tweak the parameters as you need:
# Log to syslog instead of monit.log
set log syslog

# Set global SSL options
set ssl {
    verify     : enable,
    selfsigned : reject
}

# Listen only locally
set httpd port {MONIT_PORT} (1)
    use address 127.0.0.1
    allow localhost

# Email address to receive alerts. Ignore trivial alerts.
set alert {YOUR_EMAIL_ADDR} not on { instance, action } (2)

# Set email format
set mail-format {
    from: Monit <monit@$HOST>
    subject: $EVENT: $SERVICE
    message: $EVENT: $SERVICE
        Date:        $DATE
        Action:      $ACTION
        Host:        $HOST
        Description: $DESCRIPTION
}

# Set mail server
set mailserver {SMTP_SERVER_ADDR} port {SMTP_SERVER_PORT} (3)
    using tls

# System performance
check system $HOST
    if loadavg (1min) > 20 then alert
    if loadavg (5min) > 10 then alert
    if loadavg (15min) > 5 then alert
    if memory usage > 70% for 5 cycles then alert
    if swap usage > 5% then alert
    if cpu usage > 70% for 5 cycles then alert

# Filesystem
check filesystem rootfs with path /
    if space usage > 80% then alert
    if inode usage > 70% then alert
    if read rate > 2 MB/s for 10 cycles then alert
    if write rate > 1 MB/s for 10 cycles then alert

# Network
check network wired with interface {NET_INTERFACE_NAME} (4)
    if saturation > 90% for 5 cycles then alert
    if total uploaded > 10 GB in last day then alert
    if total downloaded > 10 GB in last day then alert

# CPU Temperature
check program cpu_temp with path "/usr/local/etc/monit/scripts/cpu_temp.sh"
    if status > 70 then alert
    if status < 15 then alert

# Disk temperature
check program disk_temp with path "/usr/local/etc/monit/scripts/disk_temp.sh {PART_UUID}" (5)
    if status > 60 then alert
    if status < 15 then alert

# UPS battery
check program ups_charge with path "/usr/local/etc/monit/scripts/ups_charge.sh"
    if status < 95 then alert
1 | Replace {MONIT_PORT} with the port number on which you want Monit to listen for web interface connections. |
2 | Replace {YOUR_EMAIL_ADDR} with email address to which Monit will deliver notifications. |
3 | Replace {SMTP_SERVER_ADDR} and {SMTP_SERVER_PORT} with SMTP server address and port respectively.
Usually you can get relaying SMTP server address/port from your ISP. |
4 | Replace {NET_INTERFACE_NAME} with your network interface name. |
5 | Replace {PART_UUID} with UUID of your /boot partition (can be copied from the /etc/fstab file).
This is needed in order not to rely on disk device names (e.g. /dev/sda ) as they can change. |
Restart Monit service for the changes to take effect:
sudo systemctl restart monit
You may notice that after running Monit,
the default Message of the Day complains about the presence of zombie processes.
At the moment of writing, when Monit invokes other programs (such as in the check program instruction)
it creates zombie processes by design.
These zombie processes are handled correctly and their number doesn’t grow over time.
There is an open bug about this issue and hopefully it will be improved in future releases of Monit.
|
4.1.3. Accessing Monit
Monit can be accessed via command line or web interface.
Command Line Interface
To see Monit information in command line try:
sudo monit status
or
sudo monit summary
Do sudo monit -h
to see all available options.
Web Interface
In this configuration, Monit only listens for local connections on 127.0.0.1:{MONIT_PORT}
.
This is done deliberately for security reasons.
One way to access the Monit web interface from the outside is through an SSH tunnel. For example, from the client PC establish an SSH tunnel:
ssh {SERVER_USER}@{SERVER_IP_ADDR} -N -L 127.0.0.1:{LOCAL_PORT}:127.0.0.1:{MONIT_PORT}
Here {LOCAL_PORT} is the port on which SSH will be listening on the client PC.
The web interface can now be accessed on the client PC at http://127.0.0.1:{LOCAL_PORT}.
To create this tunnel in a more convenient way, you can add the following entry to your SSH config file ~/.ssh/config:
host silverbox-monit-ui-tunnel
    HostName {SERVER_IP_ADDR} (1)
    IdentityFile ~/.ssh/silverbox-key
    LocalForward 127.0.0.1:{LOCAL_PORT} 127.0.0.1:{MONIT_PORT}
1 | IP can be replaced with the host name here after domain is configured. |
Now the tunnel can be established simply with:
ssh -N silverbox-monit-ui-tunnel
More convenient way of accessing Monit web interface and other internal services is described in the Reverse Proxy section. |
4.1.4. Useful References
Some useful sources of information about Monit:
-
Monit documentation: https://mmonit.com/monit/documentation/monit.html
-
Monit wiki: https://mmonit.com/wiki
-
Arch Wiki article on Monit: https://wiki.archlinux.org/index.php/Monit
4.2. Summary Email
The summary email is just an email that is delivered automatically at regular intervals (weekly) and contains some system information.
While there are tools that do a similar job (for example logcheck and logwatch), I found them quite obsolete and noisy, with almost no documentation.
4.2.1. Email Content Generation
The email content is generated with a script, which is invoked by a Systemd timer. Here is a working example of such script (modify it to suit your needs):
#!/bin/sh
if [ $# -lt 1 ]; then
PERIOD="week"
else
PERIOD="$1"
fi
case $PERIOD in
day | week | month)
;;
*)
echo "Unknown time period: $PERIOD. Defaulting to week." 1>&2
PERIOD="week"
;;
esac
SINCE_DATE=`date --date="1 $PERIOD ago" +"%F %T"`
MAIN_DISK=$(readlink -f "/dev/disk/by-uuid/{PART_UUID}") (1)
echo "Subject: System Summary Report ($PERIOD)"
echo ""
echo "Report Period: $PERIOD"
echo "Report Generated: $(date)"
echo "Uptime: $(uptime)"
echo "Memory Usage: RAM: $(free -m | grep Mem | awk '{print $3/$2 * 100}')%, Swap: $(free -m | grep Swap | awk '{print $3/$2 * 100}')%"
echo "Disk (main): Temp: $(hddtemp -n SATA:"$MAIN_DISK"), Health: $(smartctl -H "$MAIN_DISK" | grep overall-health | sed 's/^.\+:\s\+//')"
echo "--------------------------------------------------------------------------------"
df -h
echo "--------------------------------------------------------------------------------"
echo "Temperatures:"
sensors -A
echo "--------------------------------------------------------------------------------"
echo "Top CPU:"
ps -eo pid,user,%cpu,%mem,cmd --sort=-%cpu | head
echo "--------------------------------------------------------------------------------"
echo "Top RAM:"
ps -eo pid,user,%cpu,%mem,cmd --sort=-%mem | head
echo "--------------------------------------------------------------------------------"
echo "SSH logins during this $PERIOD:"
last -s "$SINCE_DATE"
echo "--------------------------------------------------------------------------------"
echo "Last user logins:"
lastlog | grep -iv "Never logged in"
echo "--------------------------------------------------------------------------------"
echo "Logged errors:"
journalctl --since "$SINCE_DATE" -p err --no-pager
1 | Replace {PART_UUID} with UUID of your /boot partition (can be copied from the /etc/fstab file). |
When this script is called without arguments it will generate a weekly summary.
It can also be called with an argument, such as: day, week or month, to generate a summary for the specified period.
|
Save this file as /usr/local/sbin/system-summary.sh
and mark it as executable:
sudo chmod u+x /usr/local/sbin/system-summary.sh
To verify that it works do sudo system-summary.sh
and check the output.
4.2.2. Email Delivery
To send emails, the ssmtp program is used.
It is an extremely lightweight tool with minimal dependencies that sends mail to a configured mail server.
Since ssmtp is not an actual mail server (or MTA), you will need some SMTP server to send mail.
You can use one provided by your ISP or any of the free ones; however, the security and privacy of such a setup is
questionable at best. This is why ssmtp is only used in this guide for non-sensitive mail, such as monitoring
emails and system status emails.
|
To install ssmtp
do:
sudo apt install ssmtp
Edit the /etc/ssmtp/ssmtp.conf
file, set root
option to your desired email address,
mailhub
to your SMTP server address and enable the use of TLS and STARTTLS:
root={YOUR_EMAIL_ADDR} (1)
mailhub={SMTP_SERVER_ADDR}:{SMTP_SERVER_PORT} (2)
UseTLS=Yes
UseSTARTTLS=Yes
1 | This email address will receive all mail for UIDs < 1000. |
2 | Set this to your SMTP server address and port. |
There are other parameters that may need to be configured (for example, if your SMTP server requires authentication).
The Arch Wiki article on ssmtp
is a good source of information on this topic -
https://wiki.archlinux.org/index.php/SSMTP.
To test that email delivery works try: echo "test" | ssmtp root.
If this command is successful but the email is not delivered, it was probably filtered.
You can run ssmtp with the -v argument to get more verbose output, but I found no good way to troubleshoot
filtering issues. Sometimes changing the sender address, subject, content or hostname helps to avoid filtering.
|
As root user, create /etc/systemd/system/system-summary-report.service
file with the following content:
[Unit]
Description=Email system summary report
After=network-online.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c '/usr/local/sbin/system-summary.sh | ssmtp root'
You can run this service manually and verify that you get the email:
sudo systemctl daemon-reload
sudo systemctl start system-summary-report.service
As root user, create /etc/systemd/system/system-summary-report.timer
file with the following content:
[Unit]
Description=Email system summary report

[Timer]
OnCalendar=Fri 18:00 (1)
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target
1 | Adjust this as needed, especially if using period other than week. |
Enable and start the timer:
sudo systemctl daemon-reload
sudo systemctl enable system-summary-report.timer
sudo systemctl start system-summary-report.timer
To check timer status and time until next activation use:
sudo systemctl list-timers
4.3. Login Notification
If you want to get an email notification for each login to the server,
create the /usr/local/sbin/login-notify.sh
file with the following content:
#!/bin/sh
TRUSTED_HOSTS="" (1)
[ "$PAM_TYPE" = "open_session" ] || exit 0
for i in $TRUSTED_HOSTS; do
if [ "$i" = "$PAM_RHOST" ]; then
exit 0
fi
done
MSG="Subject: Login Notification\n\n\
Date: `date`\n\
User: $PAM_USER\n\
Ruser: $PAM_RUSER\n\
Rhost: $PAM_RHOST\n\
Service: $PAM_SERVICE\n\
TTY: $PAM_TTY\n"
echo "$MSG" | ssmtp root
1 | You can set TRUSTED_HOSTS variable to a space-delimited list of addresses
for logins from which you don’t want to generate notifications. |
Mark this file as executable:
sudo chmod u+x /usr/local/sbin/login-notify.sh
Edit the /etc/pam.d/common-session
file and append the following line to it:
session optional pam_exec.so /usr/local/sbin/login-notify.sh
For some reason, the common-session
file is not included in /etc/pam.d/sudo
(even though the relevant Debian bug was closed [2]).
So if you also want to get notifications for sudo
command, you will need to append the same line
to the /etc/pam.d/sudo
file as well.
5. DNS Server
This section describes how to install and configure a DNS server, which will serve clients on the local network. Client devices on the LAN will use this DNS server as their default DNS server (it can be announced by the DHCP server), and the DNS server will forward queries securely (using DNS over TLS and DNSSEC) to the upstream DNS server of your choice (this configuration uses Cloudflare’s DNS servers).
5.1. Installation
Unbound [6] is used as the DNS server.
It can be installed directly from the repositories with:
sudo apt install unbound
5.2. Configuration
Ubuntu 18.04 uses systemd-resolved as the default DNS resolver.
To switch to Unbound, the systemd-resolved stub listener first needs to be disabled.
To do this, edit the /etc/systemd/resolved.conf file and set the following parameter:
DNSStubListener=no
Then restart the systemd-resolved
service:
sudo systemctl restart systemd-resolved
You can also verify that systemd-resolved
is not listening on port 53 anymore by checking the output of:
sudo netstat -lnptu
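If netstat is not available (it is part of the net-tools package, which may not be installed), ss gives the same information:

# List listening TCP/UDP sockets with owning processes; systemd-resolved should no longer appear on port 53
sudo ss -lnptu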
To configure Unbound as a simple forwarding DNS server create the /etc/unbound/unbound.conf.d/dns-config.conf
file
with the following content:
server:
interface: 0.0.0.0
outgoing-interface: {SERVER_IP_ADDR} (1)
access-control: 127.0.0.0/8 allow
access-control: {SERVER_SUBNET} allow (2)
do-ip4: yes
do-ip6: no
do-udp: yes
do-tcp: yes
minimal-responses: yes
prefetch: yes
qname-minimisation: yes
hide-identity: yes
hide-version: yes
use-caps-for-id: yes
private-address: 192.168.0.0/16
private-address: 172.16.0.0/12
private-address: 10.0.0.0/8
unwanted-reply-threshold: 10000
root-hints: /usr/share/dns/root.hints
forward-zone:
name: "."
# tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt (3)
forward-ssl-upstream: yes
forward-addr: 1.1.1.1@853#one.one.one.one (4)
forward-addr: 1.0.0.1@853#one.one.one.one (5)
remote-control:
control-interface: 127.0.0.1
1 | Replace this with your server address. |
2 | Replace this with your LAN subnet. |
3 | This line is commented out because Unbound 1.6.7 (the default Ubuntu 18.04 version at the moment of writing) does not support this parameter. Without it there is no certificate validation; however, the queries are still encrypted. Uncomment this line once Ubuntu gets a newer version of Unbound. |
4 | Primary DNS server address. |
5 | Secondary DNS server address. |
This configuration uses Cloudflare’s DNS servers [7], due to their reasonable privacy policy and support for DNS over TLS and DNSSEC. Feel free to replace with DNS server of your choice. |
It is also possible to make Unbound block DNS requests to certain known advertisement/analytics addresses (similarly to what Pi-hole does), but this is outside of the scope of this document. |
The configuration above is for Unbound 1.6.7. Some parameters were added/modified in more recent version, so this config may need to be updated once Ubuntu package is upgraded to more recent version. |
Next, remove the /etc/resolv.conf
file (which is a link to SystemD resolver’s file):
sudo rm /etc/resolv.conf
The systemd-resolved
should detect it automatically and stop generating resolv.conf
contents.
Now you can create a new /etc/resolv.conf
file with the following content:
nameserver 127.0.0.1
nameserver {SERVER_IP_ADDR} (1)
1 | Replace {SERVER_IP_ADDR} with the actual server IP address.
While it doesn’t make sense to have this line together with 127.0.0.1 , it is needed for the Docker’s embedded
DNS to work properly.
At the moment of writing, Docker incorrectly filters out all localhost records from the resolv.conf ,
so this record is necessary to force it to use host’s DNS server. |
Restart the systemd-resolved
and unbound
services:
sudo systemctl restart systemd-resolved
sudo systemctl restart unbound
Check that the DNS resolution is working on the server.
To verify that DNSSEC is working you can check the output of the following command:
dig weberdns.de ... ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ...
And verify that response has an ad
flag present.
To also verify that DNS queries are now encrypted check the output of:
sudo tcpdump -vv -x -X -s 1500 -i {NETWORK_INTERFACE} 'port 853'
While doing any DNS query.
5.2.1. Adding Firewall Rule
To allow incoming DNS requests from the LAN do:
sudo ufw allow proto tcp from {SERVER_SUBNET} to any port 53 comment "DNS TCP"
sudo ufw allow proto udp from {SERVER_SUBNET} to any port 53 comment "DNS UDP"
5.3. Updating DHCP Server Configuration
Now you can change your router’s DHCP settings and set your server address as your DNS server. Thus all devices on the LAN will switch to using this DNS server automatically.
5.4. Monitoring
The DNS server will be monitored with Monit, which should by now be configured.
Create the /etc/monit/conf.d/30-unbound
file with the following content:
check process unbound with pidfile /var/run/unbound.pid
    if does not exist then alert
    if cpu > 10% for 5 cycles then alert
    if total memory > 200.0 MB for 5 cycles then alert
    if failed port 53 type udp protocol dns then alert
    if failed port 53 type tcp protocol dns then alert

check program unbound_stats with path "/usr/sbin/unbound-control stats"
    every 5 cycles
    if status != 0 then alert
This will make Monit check that the Unbound process is running, that the DNS server is accessible over TCP and UDP,
and that it is not consuming suspicious amounts of CPU and RAM.
In addition to that, it will grab the Unbound stats every 5 cycles
(which is 5 minutes, if you set the cycle duration to a minute).
The Unbound stats are cleared each time the stats command is executed, so in this case Monit will essentially
show the stats for the last 5 minutes.
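If you want to peek at the statistics manually without clearing them (and therefore without interfering with what Monit reports), unbound-control also has a non-resetting variant of the stats command:

sudo unbound-control stats_noreset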
Restart the Monit service:
sudo systemctl restart monit
Check Monit web interface and make sure that DNS monitoring is working.
6. Docker
This section describes how to set up the Docker CE engine and Docker Compose. Docker will be required to run some workloads (such as Nextcloud or Transmission) inside containers.
6.1. Installation
6.1.1. Docker CE
To install Docker CE engine follow the instructions from the docker documentation: https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-docker-ce.
For security reasons, it may be a good idea not to add your user to the docker group.
Membership in the docker group essentially grants the user root permissions, without requiring a
password for privilege elevation (unlike sudo).
|
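After the installation you can run a quick smoke test to confirm the engine works (this pulls a tiny test image from Docker Hub and removes the container afterwards):

sudo docker run --rm hello-world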
6.1.2. Docker Compose
Docker Compose [8] is a useful tool for deploying and managing multi-container workloads.
At the moment of writing, the preferred method of installation for Ubuntu was simply grabbing the latest binary from the GitHub releases page. The downside is that there won’t be any automatic upgrades and newer versions of Docker Compose will have to be installed manually.
This guide has been updated to use Docker Compose v2 (complete rewrite of the Docker Compose in Golang). If you have older Docker Compose version, make sure you remove it and install the version 2. |
To find the latest Docker Compose version number, visit the GitHub releases page at: https://github.com/docker/compose/releases.
Next, download the docker-compose
binary:
sudo curl -L "https://github.com/docker/compose/releases/download/{COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/libexec/docker/cli-plugins/docker-compose (1)
1 | Replace {COMPOSE_VERSION} with the actual latest Docker Compose version. |
And mark it as executable:
sudo chmod +x /usr/libexec/docker/cli-plugins/docker-compose
Verify that Docker Compose works by doing:
docker compose version
If you see an error similar to this:
error while loading shared libraries: libz.so.1: failed to map segment from shared object
then remove noexec option from the /tmp partition.
See Mounting /tmp as tmpfs for more details.
|
6.2. Monitoring
To setup basic monitoring of the Docker daemon with Monit,
create the /etc/monit/conf.d/20-docker
file with the following content:
check process docker with pidfile /var/run/docker.pid
    if does not exist then alert
    if cpu > 30% for 10 cycles then alert
    if total memory > 300.0 MB for 5 cycles then alert
Restart Monit service:
sudo systemctl restart monit
Check Monit web interface and verify that Docker monitoring is working.
7. SOCKS5 Over VPN
This section describes how to set up a SOCKS5 proxy on the server, such that it will redirect all traffic through the VPN connection.
The reason this may be useful (as opposed to just running everything over a VPN connection on the client PC) is that it allows more granular control over which applications’ traffic goes through the VPN connection. For example, one may choose to direct all web browser traffic over the VPN connection, while Steam and music streaming apps access the internet directly.
This is achieved by running an OpenVPN client inside a Docker container together with a SOCKS5 server, in such a way that traffic received by the SOCKS5 server is forwarded via the VPN tunnel and vice versa.
Since the SOCKS5 protocol offers very weak authentication and no encryption, it will additionally be encapsulated in an SSH tunnel.
Below is a diagram that demonstrates the idea:
[Diagram: an application on the client PC connects to a local port (127.0.0.1:XXXX) that is forwarded over an SSH tunnel to the SOCKS5 server running inside a container on the silverbox server; the container routes the proxied traffic to the internet through the VPN connection.]
The prerequisites to this section are having Docker installed and having a VPN provider that supports OpenVPN and can provide OpenVPN profiles (or information on how to create them).
7.1. Image
This section describes how to prepare and build Docker image for the VPN proxy.
7.1.1. Preparing OpenVPN Profiles and Credentials
The container will need access to the OpenVPN profiles and your VPN credentials, to establish the VPN connection. These files will be stored on disk and mounted inside the container as volumes.
First, create a directory named silverbox
(or choose a different name if you like) under the root
user’s home,
and set permissions so that only root
user can effectively use it:
sudo mkdir /root/silverbox
sudo chmod 700 /root/silverbox
This directory will be used to hold files that need to be persistently stored on the server but contain sensitive data. It is not just for the VPN proxy container, but for other services provided by the server as well.
Next, create a vpn
directory, which will hold files related to the VPN connections:
sudo mkdir /root/silverbox/vpn
sudo chmod 700 /root/silverbox/vpn
Inside it, create auth
directory:
sudo mkdir /root/silverbox/vpn/auth
sudo chmod 700 /root/silverbox/vpn/auth
Inside the auth
directory create a credentials
file, containing two lines:
- First line is the VPN username
- Second line is the VPN password
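One way to create this file is sketched below; the {VPN_USERNAME} and {VPN_PASSWORD} placeholders are hypothetical (they are not part of this document’s parameters), so substitute your real credentials, or simply create the file with a text editor instead (which also keeps the password out of your shell history):

# Create the credentials file readable by root only
sudo sh -c 'umask 077; printf "%s\n%s\n" "{VPN_USERNAME}" "{VPN_PASSWORD}" > /root/silverbox/vpn/auth/credentials'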
Set the following permissions on this file (marks file as read only and only readable by root
):
sudo chmod 400 /root/silverbox/vpn/auth/credentials
Next, create a proxy
directory under the vpn
:
sudo mkdir /root/silverbox/vpn/proxy
sudo chmod 700 /root/silverbox/vpn/proxy
And copy all OpenVPN profile files (*.ovpn
) that you want to use for SOCKS5 over VPN proxy into it
(you can have multiple files as exit point rotation will be configured).
At any time, if you want to use only a specific profile, create a symlink named profile pointing to the desired
profile. For example: ln -s SOME_PROFILE.ovpn profile.
Make sure the link is relative and not absolute: otherwise it won’t work inside the container.
|
7.1.2. Creating Docker Network
Using a separate Docker network to run VPN-related workloads offers some convenience and isolation, and also forces Docker to use its embedded DNS server, which will forward all DNS queries to the host DNS server.
To create Docker bridge network named vpn
do:
sudo docker network create --driver=bridge --subnet={DOCKER_VPN_NETWORK} vpn (1)
1 | Replace {DOCKER_VPN_NETWORK} with some subnet for the Docker VPN network. For example: 172.18.0.0/24. |
7.1.3. Preparing Image Files
First, create a directory named containers
under the /root/silverbox
:
sudo mkdir /root/silverbox/containers sudo chmod 700 /root/silverbox/containers
This directory will contain everything required to build different images for Docker containers (for VPN proxy and other containers).
Inside it, create a directory for the VPN proxy container:
sudo mkdir /root/silverbox/containers/vpn-proxy sudo chmod 700 /root/silverbox/containers/vpn-proxy
Create a directory where the VPN proxy container will store its host identity SSH key, so that it persists across container upgrades:
sudo mkdir /root/silverbox/containers/vpn-proxy/host-key sudo chmod 700 /root/silverbox/containers/vpn-proxy/host-key
Inside the vpn-proxy
directory, create a file named docker-compose.yaml
with the following content:
version: '3.8'
networks:
default:
name: vpn
external: true
services:
vpn-proxy:
container_name: vpn-proxy
init: true
build:
context: /root/silverbox/containers/vpn-proxy
args:
version: '11.5-slim' (1)
restart: on-failure:15
logging:
driver: json-file
options:
max-size: 10mb
ports:
- {SERVER_IP_ADDR}:{VPN_PROXY_PORT}:{VPN_PROXY_PORT}/tcp (2)
networks:
default:
ipv4_address: {VPN_PROXY_ADDR} (3)
devices:
- /dev/net/tun
cap_add:
- NET_ADMIN
volumes:
- /root/silverbox/vpn/proxy:/vpn-profiles
- /root/silverbox/vpn/auth:/vpn-credentials
- /root/silverbox/containers/vpn-proxy/host-key:/ssh-host-key
1 | Replace 11.5-slim with the actual latest debian image version (can be checked at the Docker Hub).
Don’t use latest here as it makes setup non-deterministic and makes it harder to maintain and upgrade. |
2 | Replace {SERVER_IP_ADDR} and {VPN_PROXY_PORT} with the actual values. |
3 | Replace {VPN_PROXY_ADDR} with desired VPN proxy container address. |
Next, create a file named Dockerfile
with the following content:
ARG version=latest
FROM debian:$version
RUN apt-get update && \
apt-get install -y --no-install-recommends bash iputils-ping iptables openvpn openssh-server && \
addgroup --system vpn && \
adduser proxytunnel --disabled-password --gecos "" --shell /usr/sbin/nologin && \
mkdir /var/run/sshd
COPY docker-entrypoint.sh /usr/local/bin/
COPY sshd_config /etc/ssh/
COPY --chown=proxytunnel:proxytunnel proxy_tunnel.pub /home/proxytunnel
HEALTHCHECK --interval=120s --timeout=20s --start-period=120s CMD ping 1.1.1.1 -c 1 (1)
VOLUME ["/vpn-profiles", "/vpn-credentials"]
EXPOSE {VPN_PROXY_PORT}/tcp (2)
ENTRYPOINT [ "/usr/local/bin/docker-entrypoint.sh" ]
1 | Feel free to customize health check command. |
2 | Replace {VPN_PROXY_PORT} with some port number of your choice (e.g. 12345). |
Next, create a sshd_config
file with the following content:
Protocol 2
HostKey /etc/ssh/ssh_host_ed25519_key
KexAlgorithms curve25519-sha256@libssh.org
Ciphers aes128-gcm@openssh.com (1)
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
AddressFamily inet
ListenAddress {VPN_PROXY_ADDR}:{VPN_PROXY_PORT} (2)
LogLevel ERROR
LoginGraceTime 1m
PermitRootLogin no
MaxAuthTries 4
MaxSessions 5
AuthenticationMethods publickey
PubkeyAuthentication yes
HostbasedAuthentication no
IgnoreRhosts yes
PasswordAuthentication no
PermitEmptyPasswords no
ChallengeResponseAuthentication no
X11Forwarding no
Banner none
AllowAgentForwarding no
AllowTcpForwarding yes
PermitTTY no
AllowUsers proxytunnel
AuthorizedKeysFile /home/proxytunnel/proxy_tunnel.pub
1 | This cipher was chosen after testing performance of different ciphers on the given hardware. It offers reasonable performance while maintaining decent security. Feel free to change the cipher if you need to. |
2 | Replace {VPN_PROXY_ADDR}:{VPN_PROXY_PORT} with some IP address from the {DOCKER_VPN_NETWORK}
and the {VPN_PROXY_PORT} port number. |
Next, create a docker-entrypoint.sh
file with the following content:
#!/usr/bin/env bash
function configure_iptables()
{
set -e
local config_file="$1"
local host=$(awk '/^remote / {print $2}' "$config_file")
local port=$(awk '/^remote / && $NF ~ /^[0-9]+$/ {print $NF}' "$config_file")
if [ -z "$port" ]; then
echo "-- No port number specified in the VPN profile file"
exit 1
else
echo "-- Setting up firewall rules for VPN server $host on port $port"
fi
iptables --flush
iptables --delete-chain
iptables --policy INPUT DROP
iptables --policy OUTPUT DROP
iptables --policy FORWARD DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
(1)
iptables -A INPUT -p tcp --dport {VPN_PROXY_PORT} -m conntrack --ctstate NEW -m recent --set --name SSH --mask 255.255.255.255 --rsource
iptables -A INPUT -p tcp --dport {VPN_PROXY_PORT} -m conntrack --ctstate NEW -m recent --update --seconds 30 --hitcount 6 --name SSH --mask 255.255.255.255 --rsource -j DROP
iptables -A INPUT -p tcp --dport {VPN_PROXY_PORT} -m conntrack --ctstate NEW -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -o tun0 -j ACCEPT
iptables -A OUTPUT -o eth0 -d {SERVER_SUBNET} -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT (2)
iptables -A OUTPUT -o eth0 -p tcp -d $host --dport $port -m owner --gid-owner vpn -j ACCEPT
set +e
}
function run_sshd()
{
set -e
if [ ! -f "/etc/ssh/ssh_host_ed25519_key" ]; then
if [ ! -f "/ssh-host-key/ssh_host_ed25519_key" ]; then
echo "-- Generating host key"
ssh-keygen -f /etc/ssh/ssh_host_ed25519_key -N '' -t ed25519
cp /etc/ssh/ssh_host_ed25519_key /ssh-host-key/ssh_host_ed25519_key
else
cp /ssh-host-key/ssh_host_ed25519_key /etc/ssh/ssh_host_ed25519_key
fi
fi
echo "-- Adding route back to LAN"
ip route add {SERVER_SUBNET} via {DOCKER_VPN_NETWORK_GW} (3)
echo "-- Starting SSH server"
/usr/sbin/sshd
set +e
}
if [[ $# -ge 1 ]]; then
exec "$@"
else
if [ -f /vpn-profiles/profile ]; then
echo "-- Profile file found: only it will be used"
PROFILE_FILE="/vpn-profiles/profile"
else
echo "-- Profile file not found: random profile file will be picked"
PROFILE_FILE="$(ls -1 /vpn-profiles/*.ovpn | shuf -n 1)"
echo "-- Selected profile file: $PROFILE_FILE"
fi
configure_iptables "$PROFILE_FILE"
run_sshd
exec sg vpn -c "openvpn --config $PROFILE_FILE --verb 1 --auth-user-pass /vpn-credentials/credentials --auth-nocache"
fi
1 | This block establishes rate limiting for incoming SSH connections.
Replace {VPN_PROXY_PORT} with the actual port number in it. |
2 | Replace {SERVER_SUBNET} with your LAN subnet. |
3 | Replace {SERVER_SUBNET} with your LAN subnet.
Also, replace {DOCKER_VPN_NETWORK_GW} with the default gateway for your {DOCKER_VPN_NETWORK}
(it ends with 1, i.e. for the network 172.18.0.0/24 it will be 172.18.0.1). |
Mark docker-entrypoint.sh
as executable:
sudo chmod a+x docker-entrypoint.sh
7.1.4. Generating Client SSH Key
This section describes how to generate client SSH key that will be used to authenticate to the SSH server that is running inside the container.
On the client PC (from which you will connect to the proxy) generate a new SSH key with the following command (don’t use any passphrase, as the tunnel will be established automatically):
ssh-keygen -t ed25519 -f ~/.ssh/silverbox-proxy-tunnel -C "Silverbox proxy tunnel key"
Copy public key to the server:
scp ~/.ssh/silverbox-proxy-tunnel.pub $USER@{SERVER_IP_ADDR}:proxy_tunnel.pub
Move this file under the /root/silverbox/containers/vpn-proxy
directory and make it only readable by the root:
sudo chown root:root proxy_tunnel.pub sudo chmod 400 proxy_tunnel.pub
7.2. Container
This section describes how to run the VPN proxy container.
7.2.1. Running Container
To build the image and run the container do:
sudo docker compose -f /root/silverbox/containers/vpn-proxy/docker-compose.yaml up -d
When you run a container that exposes ports on a host interface, by default Docker automatically adds netfilter rules to allow forwarding for these ports. That's why there's no need to add a UFW rule for the proxy tunnel. |
When started this way, the container will be automatically restarted in case of failure (up to 15 consecutive restarts).
7.2.2. Automatic Container Startup
To start container automatically on boot create the /etc/systemd/system/vpn-proxy-start.service
file
with the following content:
[Unit]
Description=Start VPN proxy container
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker start vpn-proxy

[Install]
WantedBy=multi-user.target
Enable the service, so that it will be started on system boot:
sudo systemctl daemon-reload sudo systemctl enable vpn-proxy-start.service
7.2.3. Automatic VPN Server Rotation
The idea of VPN server rotation is to restart VPN proxy container periodically, so that every time it starts it will pick up new random VPN profile and thus switch to a new VPN server. This may be useful for privacy and security reasons, however, this step is optional.
The rotation is achieved by creating a simple Systemd timer, that, when triggered, will restart VPN proxy container.
Create the /etc/systemd/system/vpn-proxy-restart.service
file with the following content:
[Unit]
Description=Restart VPN proxy container
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker restart vpn-proxy
You can run the service once to verify that it works and restarts the container:
sudo systemctl daemon-reload sudo systemctl start vpn-proxy-restart.service
Next, create the /etc/systemd/system/vpn-proxy-restart.timer
file with the following content:
[Unit]
Description=Restart VPN proxy container

[Timer]
OnCalendar=*-*-* 01:00:00
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target
In this configuration, the timer will be activated every day at 1am.
Enable and start the timer:
sudo systemctl daemon-reload
sudo systemctl enable vpn-proxy-restart.timer
sudo systemctl start vpn-proxy-restart.timer
You can do systemctl list-timers
to verify that the timer appears in the output
and to check the time till next activation.
7.3. Monitoring
The status of the VPN proxy container is monitored by Monit.
First, create the /usr/local/etc/monit/scripts/container_status.sh
file with the following content:
#!/bin/sh
STATUS=$(docker inspect --format="{{$2}}. Started: {{.State.StartedAt}}. Restarts: {{.RestartCount}}." "$1")
echo $STATUS
case "$STATUS" in
"$3"*) exit 0 ;;
*) exit 1 ;;
esac
And mark it as executable:
sudo chmod u+x /usr/local/etc/monit/scripts/container_status.sh
This script checks the given Docker container status field and compares it against the desired value.
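For example, it can be invoked manually with the same arguments as used in the Monit check below (the output shown here is purely illustrative):

sudo /usr/local/etc/monit/scripts/container_status.sh vpn-proxy .State.Health.Status healthy
healthy. Started: 2021-01-01T00:00:00.000000000Z. Restarts: 0.

The script exits with status 0 if the reported value starts with the expected string (healthy in this example), and with status 1 otherwise.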
Next, create the /etc/monit/conf.d/40-docker-vpn-proxy
file with the following content:
check program vpn_proxy with path "/usr/local/etc/monit/scripts/container_status.sh vpn-proxy .State.Health.Status healthy"
    if status != 0 for 10 cycles then alert
Restart Monit service:
sudo systemctl restart monit
Check Monit web interface and make sure that VPN proxy monitoring is working.
7.4. Client Configuration
On the client PC, create an entry in the ~/.ssh/config
file (to simplify tunnel creation):
host silverbox-proxy-tunnel
    HostName {SERVER_IP_ADDR} (1)
    Port {VPN_PROXY_PORT} (2)
    User proxytunnel
    IdentityFile ~/.ssh/silverbox-proxy-tunnel
    DynamicForward 127.0.0.1:{VPN_PROXY_PORT} (3)
1 | Replace with your server address. |
2 | Replace with the VPN proxy port number. |
3 | Replace with the VPN proxy port number. |
Establish the tunnel manually for testing:
ssh -N silverbox-proxy-tunnel
To verify that proxy works over VPN, you can run the following commands and verify that returned IPs are different:
curl -v -x socks5://127.0.0.1:{VPN_PROXY_PORT} http://api.ipify.org?format=json
curl -v http://api.ipify.org?format=json
7.4.1. Automatic Tunnel Creation
This section describes how to establish the tunnel automatically on user login to the Gnome session on Ubuntu 18.04.
Create a script vpn-proxy-tunnel.sh
somewhere with the following content:
#!/bin/bash
while true; do
ssh -N silverbox-proxy-tunnel &>/dev/null
notify-send --urgency=normal -i error "VPN proxy tunnel disconnected. Retrying in 20 seconds."
sleep 20s
done
Mark it as executable:
chmod a+x vpn-proxy-tunnel.sh
Add this script to your client PC auto start. On Ubuntu 18.04 it can be added using the “Startup Applications” GUI tool.
Reboot the client PC and verify that script is running and tunnel works.
Now, for any application that you want to route through the VPN (for example, a web browser), you can configure it to use the SOCKS5 proxy server at 127.0.0.1:{VPN_PROXY_PORT}.
8. Domain Name
Having your own domain name can be useful for a variety of reasons. In this guide, it is used for the following purposes:
-
To configure proper domain for Kerberos (instead of doing domain hijacking).
-
To access the server from the outside (from the internet) without having a static IP address.
-
To obtain certificates for Nextcloud (to use HTTPS) or other services.
A domain name needs to be registered in order to proceed with this section of the document.
Domain name registration and payment process may take a few days, so it is better to start it in advance. |
The main requirement for the domain name registrar is to provide an API that allows changing DNS records (at least host A and TXT records). After evaluating a few options, NameSilo [9] was chosen as the domain name registrar, so this document provides a DNS records update script for its API. If you decide to use a different domain name registrar, you'll have to write a similar script yourself.
In this document, the domain name is referred to as {DOMAIN_NAME}
(for example, example.com).
However, this domain name itself is not used directly.
Instead, a subdomain is used, which is referred to as {SERVER_SUBDOMAIN}
(for example, silverbox.example.com).
So the FQDN of the server, as it is seen from the internet, is {SERVER_SUBDOMAIN}.
This approach offers some flexibility, for example, the domain name itself (e.g. example.com
)
can be used for different purposes (like hosting a website), while the subdomain (e.g. silverbox.example.com
)
is used to access the server over SSH.
Some scripts listed in this document have one limitation: only domain names that consist of two components
(i.e. a single label plus the TLD) are supported. For example, the domain name example.com is supported, while
example.co.uk is not.
|
8.1. Dynamic DNS Update
This section describes how to set up an automatic DNS host A record update with the current public IP address of the server. This way, the server can be accessed from the internet by its FQDN, even if its public IP address is dynamic.
The dynamic DNS record update is initiated by a Systemd timer that runs a Docker container with a Python script that uses the NameSilo API to make the update. The Docker container is used for convenience, isolation and resource limiting (the NameSilo API uses XML, and parsing malformed or maliciously constructed XML can exhaust system resources).
8.1.1. Prerequisites
First, login to your NameSilo account and generate API key, which will be used to authenticate to the NameSilo API.
Keep the API key secure, as it grants complete access to your NameSilo account. |
Then create a host A DNS record for the {SERVER_SUBDOMAIN}
with any content.
This entry needs to be created manually because the DNS update script only updates an existing record
but doesn't create one.
8.1.2. Creating Docker Network
First, create a separate Docker network that will be used to run containers other than VPN-related containers:
sudo docker network create --driver=bridge --subnet={DOCKER_COMMON_NETWORK} common (1)
1 | Replace {DOCKER_COMMON_NETWORK} with some subnet for the common Docker network. For example: 172.19.0.0/24 . |
8.1.3. Preparing Image Files
This section assumes that all steps from the SOCKS5 Over VPN section have been completed.
Create a directory for the DNS records updater container:
sudo mkdir /root/silverbox/containers/dns-updater sudo chmod 700 /root/silverbox/containers/dns-updater
Inside the dns-updater
directory, create a file named Dockerfile
with the following content:
FROM debian:11.5-slim (1)
RUN apt-get update && \
apt-get install -y --no-install-recommends python3 ca-certificates
COPY update-dns.py /usr/local/bin/
VOLUME /secrets (2)
ENTRYPOINT [ "python3", "/usr/local/bin/update-dns.py" ]
1 | Replace 11.5-slim with the actual latest debian image version (can be checked at the Docker Hub). |
2 | For the lack of better option, the API key is passed inside the container in a file on a mapped volume. |
Next, create the update-dns.py
file with the following content:
#!/usr/bin/env python3
import json
import argparse
import urllib.request
import urllib.parse
from xml.dom import minidom
DEFAULT_HEADERS={ 'User-Agent': 'curl/7.58.0' } (1)
def namesilo_url(operation, api_key):
return 'https://www.namesilo.com/api/' + operation + '?version=1&type=xml&key=' + api_key
def check_reply_code(doc, code):
actual=doc.getElementsByTagName('reply')[0].getElementsByTagName('code')[0].firstChild.nodeValue
if actual != str(code):
raise BaseException('Expecting code {} got {}'.format(code, actual))
def get_dns_record(rec_type, subdomain, domain, api_key):
response=urllib.request.urlopen(urllib.request.Request(url=(namesilo_url('dnsListRecords', api_key) + '&domain=' + domain), headers=DEFAULT_HEADERS), timeout=30)
doc=minidom.parseString(response.read())
check_reply_code(doc, 300)
for e in doc.getElementsByTagName('reply')[0].getElementsByTagName('resource_record'):
if e.getElementsByTagName('host')[0].firstChild.nodeValue == subdomain and e.getElementsByTagName('type')[0].firstChild.nodeValue == rec_type:
return { 'val': e.getElementsByTagName('value')[0].firstChild.nodeValue,
'id': e.getElementsByTagName('record_id')[0].firstChild.nodeValue,
'ttl': e.getElementsByTagName('ttl')[0].firstChild.nodeValue }
raise BaseException('DNS {} record for {} not found'.format(rec_type, subdomain))
def update_dns_record(rec_type, subdomain, domain, rec_id, val, ttl, api_key):
params='&domain={}&rrid={}&rrhost={}&rrvalue={}&rrttl={}'.format(domain, rec_id, '.'.join(subdomain.split('.')[:-2]), val, ttl)
response=urllib.request.urlopen(urllib.request.Request(url=(namesilo_url('dnsUpdateRecord', api_key) + params), headers=DEFAULT_HEADERS), timeout=30)
check_reply_code(minidom.parseString(response.read()), 300)
def main(rec_type, val, subdomain, domain, api_key, force, verbose):
if rec_type == 'A':
val=json.loads(urllib.request.urlopen(url='https://api.ipify.org?format=json', timeout=30).read())['ip'] (2)
if verbose:
print('Current external IP address: {}'.format(val))
current_record=get_dns_record(rec_type, subdomain, domain, api_key)
if verbose:
print('Current DNS {} record for {}: "{}"'.format(rec_type, subdomain, current_record))
if val != current_record['val'] or force:
update_dns_record(rec_type, subdomain, domain, current_record['id'], val, current_record['ttl'], api_key)
print('{} record for {} updated: "{}" -> "{}"'.format(rec_type, subdomain, current_record['val'], val))
if __name__ == '__main__':
parser=argparse.ArgumentParser()
parser.add_argument('-v', '--verbose', help='verbose output', action='store_true')
parser.add_argument('-f', '--force', help='force DNS record update (even if it contains the same value)', action='store_true')
parser.add_argument('-d', '--domain', help='fully qualified domain name for which to update a record (i.e. server.example.com)', required=True)
parser.add_argument('-a', '--action', help='action to perform: update host A record with the current external IP or update TXT record with a given value', required=True, choices=[ 'update-ip', 'update-txt' ])
parser.add_argument('-t', '--txt', help='content of the TXT record', default='')
parser.add_argument('-k', '--key', help='file name of the file containing API key', required=True)
args=parser.parse_args()
with open(args.key) as f:
api_key=f.readline().strip()
main('A' if args.action == 'update-ip' else 'TXT', args.txt, args.domain, '.'.join(args.domain.split('.')[-2:]), api_key, args.force, args.verbose)
1 | Default urllib user agent has to be overwritten since NameSilo rejects it for some reason. |
2 | This script uses https://www.ipify.org service to get public IP.
Feel free to replace it with different service if you want. |
8.1.4. Storing API Key
The API key will be stored on disk (only readable by root) and passed inside the container via mapped volume.
Create a directory that will be mapped as a volume:
sudo mkdir /root/silverbox/namesilo
Create a file /root/silverbox/namesilo/api-key
and write the NameSilo API key into it.
Assign the following permissions to the directory and file:
sudo chown root:root /root/silverbox/namesilo/api-key
sudo chmod 400 /root/silverbox/namesilo/api-key
sudo chmod 500 /root/silverbox/namesilo
8.1.5. Building Container Image
To build the container image run the following command:
sudo docker build -t dns-updater --network common /root/silverbox/containers/dns-updater
8.1.6. Automatic DNS Record Update
To keep the DNS record updated, a Systemd timer will periodically run a disposable container from the image that was just built.
Create the /etc/systemd/system/update-dns-record.service
file with the following content:
[Unit]
Description=Update DNS Host A record with the current external IP address
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker run --rm --name dns-updater --network common --cpus="1" -v /root/silverbox/namesilo:/secrets dns-updater -k /secrets/api-key -a update-ip -d {SERVER_SUBDOMAIN} (1)
1 | Replace {SERVER_SUBDOMAIN} with your actual server public FQDN. |
You can run the service once to verify that it runs successfully:
sudo systemctl daemon-reload sudo systemctl start update-dns-record.service
Next, create the /etc/systemd/system/update-dns-record.timer
file with the following content:
[Unit]
Description=Update DNS Host A record with the current external IP address

[Timer]
OnBootSec=5min (1)
OnUnitInactiveSec=30min (2)

[Install]
WantedBy=timers.target
1 | The first time, the timer runs 5 minutes after boot. |
2 | After first run, the timer will run every 30 minutes. You can adjust this value depending on how volatile your public IP is. |
Enable and start the timer:
sudo systemctl daemon-reload
sudo systemctl enable update-dns-record.timer
sudo systemctl start update-dns-record.timer
You can do sudo systemctl list-timers
to verify that the timer appears in the output and to check the time till next activation.
8.2. Monitoring
The status of the DNS updater Systemd service is monitored by Monit.
First, create the /usr/local/etc/monit/scripts/is_systemd_unit_failed.sh
file with the following content:
#!/bin/sh
systemctl show $1 -p ExecMainStartTimestamp --value
systemctl is-failed --quiet $1
if [ $? -eq 0 ]; then
exit 1
else
exit 0
fi
And mark it as executable:
sudo chmod u+x /usr/local/etc/monit/scripts/is_systemd_unit_failed.sh
This script checks whether the given Systemd unit has failed and prints the last time it was started.
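For example, it can be invoked manually like this (the exit code, not the printed timestamp, is what Monit checks):

sudo /usr/local/etc/monit/scripts/is_systemd_unit_failed.sh update-dns-record.service

It prints the value of the unit's ExecMainStartTimestamp property and exits with status 1 if the unit is in the failed state, or 0 otherwise.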
Next, create the /etc/monit/conf.d/50-dns-updater
file with the following content:
check program dns_updater with path "/usr/local/etc/monit/scripts/is_systemd_unit_failed.sh update-dns-record.service" every 30 cycles
    if status != 0 for 2 cycles then alert
Restart Monit service:
sudo systemctl restart monit
Check Monit web interface and make sure that DNS updater monitoring is working.
9. NFS Server
This section describes how to install and configure NFS (version 4 only) server with Kerberos authentication (and optional encryption and integrity validation). It also provides an example configuration for an automatic NFS share mounting on the client PC.
This section assumes that DNS server and domain name have been configured as described in the DNS Server and Domain Name sections.
9.1. Domain
This section describes how to configure an internal (local) domain for the LAN, which is required to configure the Kerberos realm in the next step.
The internal domain will be a direct subdomain of your domain {DOMAIN_NAME},
and in this document it is referred to as {INTERNAL_DOMAIN}.
For example, if your domain is example.com
, the internal domain could be home.example.com
.
Before proceeding to create the local zone for the internal domain, create address reservations on your DHCP server for all the client devices that will be in the domain (or configure static IP addresses).
9.1.1. Configuring Local DNS Zone
To create local DNS zone for your internal domain,
edit the /etc/unbound/unbound.conf.d/dns-config.conf
file
and append the following content under the server
section:
local-zone: "{INTERNAL_DOMAIN}." static (1)
local-data: "silverbox.{INTERNAL_DOMAIN}. IN A {SERVER_IP_ADDR}" (2)
local-data: "client-pc.{INTERNAL_DOMAIN}. IN A {CLIENT_PC_IP_ADDR}" (3)
local-data-ptr: "{SERVER_IP_ADDR} silverbox.{INTERNAL_DOMAIN}" (4)
local-data-ptr: "{CLIENT_PC_IP_ADDR} client-pc.{INTERNAL_DOMAIN}" (5)
1 | Replace {INTERNAL_DOMAIN} with your internal domain name.
Dot at the end is required. |
2 | This is forward DNS record for the server. It assumes the server FQDN in the internal domain is
silverbox.{INTERNAL_DOMAIN} , but you can change it of course. |
3 | This is an example forward record for the client PC. You can add as many records as you need, for each device you have. |
4 | This is reverse DNS record for the server. |
5 | This is an example reverse record for the client PC. |
Restart the DNS server:
sudo systemctl restart unbound.service
Make sure you can resolve all records that were added, using both FQDNs and IP addresses. For example:
nslookup silverbox.{INTERNAL_DOMAIN}
nslookup client-pc.{INTERNAL_DOMAIN}
nslookup {SERVER_IP_ADDR}
nslookup {CLIENT_PC_IP_ADDR}
9.1.2. Configuring Server Domain and FQDN
Since the server uses static network configuration, its domain needs to be configured manually.
To do this, edit the /etc/resolv.conf
file and add the following line:
search {INTERNAL_DOMAIN}
Verify that resolution by host name only works:
nslookup silverbox
nslookup client-pc
To configure server’s FQDN edit the /etc/hosts
file and insert FQDN before host name in the record for 127.0.1.1
.
So the line for 127.0.1.1
should look something like this:
127.0.1.1 silverbox.{INTERNAL_DOMAIN} silverbox
To verify that the FQDN is set correctly, check the output of the hostname -f
command: it should print the FQDN.
9.1.3. Configuring Client’s Domain and FQDN
If clients use DHCP, the domain can easily be configured on the DHCP server, and thus it will be automatically pushed to the client devices. If static network configuration is used, then the domain will have to be configured manually (the actual instructions will depend on the client OS).
The FQDN configuration will also differ depending on the client OS, but for Ubuntu 18.04 it is identical to the FQDN configuration on the server.
Once the domain and FQDN are configured on the client PC, verify that it works (using nslookup
or a similar command)
and that both client and server can resolve each other's names (both short names and FQDNs).
9.2. Kerberos
Kerberos will be used for secure NFS authentication, and optionally integrity validation and encryption.
9.2.1. Configuring KDC
Install MIT Kerberos KDC (key distribution center):
sudo apt install krb5-kdc
During installation, you will be prompted for the following parameters:
- Realm
-
Enter fully capitalized internal domain name (
{INTERNAL_DOMAIN}
) - Kerberos servers for your realm
-
Enter the server internal FQDN (e.g.
silverbox.{INTERNAL_DOMAIN}
) - Administrative server for your Kerberos realm
-
Enter the server internal FQDN (e.g.
silverbox.{INTERNAL_DOMAIN}
)
Then, edit the /etc/krb5kdc/kdc.conf
file and add/change the following parameters:
[kdcdefaults]
    kdc_ports = 88

[realms]
    {INTERNAL_DOMAIN} = { (1)
        kdc_ports = 88
        max_life = 24h 0m 0s
        max_renewable_life = 7d 0h 0m 0s (2)
        master_key_type = aes256-cts
        supported_enctypes = aes256-cts:normal aes128-cts:normal
    }
1 | The internal domain {INTERNAL_DOMAIN} here must be fully capitalized (since it is realm). |
2 | The max_renewable_life parameter effectively controls maximum ticket lifetime.
You can adjust this parameter if you need to. |
Next, create new Kerberos database (you’ll be prompted to create Kerberos DB master password):
sudo kdb5_util create -s
Overwrite the /etc/krb5.conf
file with the following content:
[libdefaults]
    default_realm = {INTERNAL_DOMAIN} (1)
    allow_weak_crypto = false
    ccache_type = 4
    kdc_timesync = 1

[realms]
    {INTERNAL_DOMAIN} = { (2)
        kdc = silverbox.{INTERNAL_DOMAIN} (3)
        admin_server = silverbox.{INTERNAL_DOMAIN} (4)
    }
1 | The internal domain {INTERNAL_DOMAIN} here must be fully capitalized (since it is realm). |
2 | Same as above. |
3 | The kdc should be set to the server FQDN. |
4 | The admin_server should be set to the server FQDN. |
Start the KDC service and verify that it starts successfully:
sudo systemctl start krb5-kdc
Install the krb5-admin-server
package which is (weirdly enough) needed to use the kadmin.local
tool:
sudo apt install krb5-admin-server
Unless you are planning to use remote kadmin
, the admin service can be disabled:
sudo systemctl disable krb5-admin-server.service
Finally, to add Kerberos principal for your user run sudo kadmin.local
and then type:
addprinc {SERVER_USER} (1)
1 | Replace {SERVER_USER} with your actual user name. |
9.2.2. Adding Firewall Rules
To allow access to the KDC from the LAN, add the firewall rules:
sudo ufw allow proto tcp from {SERVER_SUBNET} to any port 88 comment 'Kerberos TCP' (1)
sudo ufw allow proto udp from {SERVER_SUBNET} to any port 88 comment 'Kerberos UDP' (2)
1 | Replace {SERVER_SUBNET} with the actual LAN subnet. |
2 | Same as above. |
9.2.3. Configuring Kerberos on the Client
First, install the MIT Kerberos user package:
sudo apt install krb5-user
When prompted, set the same parameters as described in the Configuring KDC section.
Next, overwrite the /etc/krb5.conf
file with the same content as described in the
Configuring KDC section.
Verify that Kerberos authentication works by doing kinit
as your user
and typing your principal's password (the same password that was used during principal creation).
The kinit
command should succeed and no error message should appear.
Do klist
to see the ticket, and then kdestroy
to destroy it.
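For example, the whole check can be done with the following sequence of commands:

kinit
klist
kdestroy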
9.3. NFS Server Configuration
This section describes how to install and configure NFSv4 (and only version 4) server to share files in the LAN.
9.3.1. Installing NFS Server
To install the NFS server do:
sudo apt install nfs-kernel-server
9.3.2. Preparing NFS Share
First, create a new group {NFS_SHARE_GROUP}
that will be used to control access to the NFS share contents.
This is, of course, just one way to manage share access, and you don't have to do it exactly this way.
sudo groupadd {NFS_SHARE_GROUP}
Members of this group will be able to access NFS share directory (and subdirectories, where permitted) on the server.
Add your user to this group:
sudo usermod -a -G {NFS_SHARE_GROUP} {SERVER_USER}
You’ll need to re-login or start a new session to apply group membership. |
Create the /srv/nfs
directory, set its group ownership to {NFS_SHARE_GROUP}
and set the following access mask:
sudo mkdir -p /srv/nfs
sudo chown root:{NFS_SHARE_GROUP} /srv/nfs
sudo chmod 770 /srv/nfs
Contents of the /srv/nfs directory will be shared using the NFS server.
To share other directories, you can bind-mount them under /srv/nfs, as shown in the example below.
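For example, to share a hypothetical directory /home/{SERVER_USER}/media (the path here is just an illustration), it could be bind-mounted under /srv/nfs like this:

sudo mkdir /srv/nfs/media
sudo mount --bind /home/{SERVER_USER}/media /srv/nfs/media

To make such a bind mount persistent across reboots, a corresponding entry can be added to the /etc/fstab file, for example: /home/{SERVER_USER}/media /srv/nfs/media none bind 0 0 .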
Next, create the /etc/exports
file with the following content:
/srv/nfs {SERVER_SUBNET}(rw,sync,crossmnt,no_subtree_check,root_squash,fsid=0,sec=krb5) (1)
1 | Replace {SERVER_SUBNET} with the actual LAN subnet. |
This configuration uses the sec=krb5 parameter, which uses Kerberos for authentication only.
Other possible options are: sec=krb5i for authentication and integrity validation,
and sec=krb5p for authentication, integrity validation and encryption.
However, encryption may add a performance penalty and may be unnecessary in certain scenarios.
|
9.3.3. Fixing nfsdcltrack Issue
Restart the NFS server:
sudo systemctl restart nfs-server
Most likely you’ll find the following message in the /var/log/syslog
:
nfsdcltrack Failed to init database: -13
This is due to a bug in nfsdcltrack, which will hopefully be fixed in the future.
The workaround is to initialize nfsdcltrack
database manually:
sudo mkdir -p /var/lib/nfs/nfsdcltrack sudo nfsdcltrack init
Restart the NFS server again and make sure the error is now gone from the logs.
9.3.4. Configuring NFS Protocol Versions
First, check the output of:
sudo cat /proc/fs/nfsd/versions
Most likely it will be: -2 +3 +4 +4.1 +4.2
.
This shows that while NFSv4 is enabled, NFSv3 is enabled as well.
To disable all NFS versions except 4 (and also to enable svcgssd
needed for Kerberos),
edit the /etc/default/nfs-kernel-server
file and change/add the following lines:
RPCMOUNTDOPTS="--manage-gids -N2 -N3 -V4 --no-udp"
NEED_SVCGSSD="yes"
RPCNFSDOPTS="-N2 -N3 -V4 --no-udp"
Restart the NFS server with sudo systemctl restart nfs-server
and check the output of sudo cat /proc/fs/nfsd/versions
again.
Now it should be -2 -3 +4 +4.1 +4.2
indicating that only NFSv4 is now enabled.
9.3.5. Disabling Unused Services
The rpcbind
and rpc-gssd
services will be running (and even listening on some ports),
even though they are not needed for a pure NFSv4 server.
To disable them, run the following commands:
sudo systemctl stop {rpcbind,rpc-gssd}.service rpcbind.socket
sudo systemctl disable {rpcbind,rpc-gssd}.service rpcbind.socket
sudo systemctl mask {rpcbind,rpc-gssd}.service rpcbind.socket
Reboot the system and verify that rpcbind
and rpc-gssd
are not running.
9.3.6. Checking for Listening Ports
At this point, all legacy services (services not related to NFSv4) should be disabled, and the NFS kernel server should be listening only on TCP port 2049. Verify this by checking the output of:
sudo netstat -lnptu
Most likely process name won’t be shown for the NFS server as socket is opened from the kernel. |
9.3.7. Adding Firewall Rule
Add firewall rule to allow NFS traffic from the LAN:
sudo ufw allow proto tcp from {SERVER_SUBNET} to any port 2049 comment 'NFSv4 Server' (1)
1 | Replace {SERVER_SUBNET} with the actual LAN subnet. |
9.3.8. Enabling User ID Mapping
It is not clear whether these steps are really required or not, as it seems like the ID translation works even without these module parameters. |
Create the /etc/modprobe.d/nfsd.conf
file with the following content:
options nfsd nfs4_disable_idmapping=0
Reboot the system, and verify that ID mapping is not disabled by executing:
cat /sys/module/nfsd/parameters/nfs4_disable_idmapping
Which should return N
.
9.3.9. Creating Kerberos Principal
Run sudo kadmin.local
and add NFS service principal:
addprinc -randkey nfs/silverbox.{INTERNAL_DOMAIN} (1)
ktadd nfs/silverbox.{INTERNAL_DOMAIN} (2)
1 | Replace {INTERNAL_DOMAIN} with the actual internal domain name. |
2 | Same as above. |
This will create a keytab file (in the default location /etc/krb5.keytab)
containing the principal's key, and add the principal to the Kerberos database.
Creation of the default keytab file should trigger the rpc-svcgssd
service.
Reboot the server and verify that rpc-svcgssd
service is now automatically started (is enabled and active):
sudo systemctl status rpc-svcgssd.service
9.4. NFS Client Configuration
This section describes how to configure NFS client on the client PC (instructions are for Ubuntu 18.04 Desktop).
9.4.1. Installing NFS Client
To install NFS client do:
sudo apt install nfs-common
9.4.2. Enabling User ID Mapping
It is not clear whether these steps are really required or not, as it seems like the ID translation works even without these module parameters. |
Create the /etc/modprobe.d/nfs.conf
file with the following content:
options nfs nfs4_disable_idmapping=0
Reboot the system, and verify that ID mapping is not disabled by executing:
sudo modprobe nfs
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
Which should return N
.
9.4.3. Creating Kerberos Principal
First, add Kerberos principal on the server for the client machine, and save it to a separate keytab file (do these commands on the server):
sudo kadmin.local
addprinc -randkey nfs/client-pc.{INTERNAL_DOMAIN} (1)
ktadd -k /root/krb5.keytab nfs/client-pc.{INTERNAL_DOMAIN} (2)
1 | Replace client-pc.{INTERNAL_DOMAIN} with your client PC FQDN. |
2 | Same as above. |
Then move the /root/krb5.keytab
file to the client PC to /etc/krb5.keytab
.
On the client PC, assign proper ownership and permissions to the keytab file:
sudo chown root:root /etc/krb5.keytab sudo chmod 600 /etc/krb5.keytab
Next, on the client PC edit the /etc/default/nfs-common
file and change/add the following lines:
NEED_GSSD="yes"
Reboot the client PC and verify that the rpc-gssd
service is now running:
sudo systemctl status rpc-gssd.service
9.4.4. Disabling Unused Services
The rpcbind
service will be running (and listening on some ports),
even though it is not needed for NFSv4.
To disable it do:
sudo systemctl stop rpcbind.service rpcbind.socket
sudo systemctl disable rpcbind.service rpcbind.socket
sudo systemctl mask rpcbind.service rpcbind.socket
Reboot the system and verify that the rpcbind
service is not running anymore.
9.4.5. Creating NFS Share Group
Create the same {NFS_SHARE_GROUP}
group on the client machine, and add your user into it:
sudo groupadd {NFS_SHARE_GROUP} sudo usermod -a -G {NFS_SHARE_GROUP} {SERVER_USER}
There is no need to synchronize UID/GID for the user and group between server and client, because the ID mapping process in NFSv4 is name-based rather than UID/GID-based. However, it is important to make sure that the user and group have the same names on the server and the client. |
9.4.6. Performing Test Mount
Create the /mnt/nfs
directory:
sudo mkdir /mnt/nfs
To test that everything works, perform a test NFS mount with:
sudo mount -t nfs4 -o proto=tcp,port=2049,sec=krb5 silverbox.{INTERNAL_DOMAIN}:/ /mnt/nfs -vvvv
The output should look something like this:
mount.nfs4: timeout set for ...
mount.nfs4: trying text-based options 'proto=tcp,port=2049,sec=krb5,vers=4.2,addr=xxx.xxx.xxx.xxx,clientaddr=xxx.xxx.xxx.xxx'
Try accessing the mount as root user by doing sudo ls /mnt/nfs
.
You should see Permission denied
message as the root
user is mapped to nobody:nogroup
and don’t have permissions to access the share.
Now try accessing the share as your user by doing ls /mnt/nfs
.
You should see either Permission denied
or Stale file handle
message, because your user don’t have Kerberos ticket.
Finally, do kinit
to obtain the Kerberos ticket for your user and try accessing share again. It should work now.
At this point, it is worth doing some testing by creating files (see the example below) to make sure ID mapping works properly (both ways) and user/group ownership is assigned correctly. |
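For example (assuming the share is mounted at /mnt/nfs and your user has a valid Kerberos ticket):

touch /mnt/nfs/idmap-test    # on the client PC, as your user
ls -l /srv/nfs/idmap-test    # on the server: the owner should be your user, not nobody:nogroup
rm /mnt/nfs/idmap-test       # clean up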
9.4.7. Automatic NFS Share Mount
This section describes how to set up automatic mounting of the NFS share on your user login (without any interaction).
The first step is to configure an automatic mount of the NFS share on boot.
To do this, append the following line to the /etc/fstab
file (on the client PC):
silverbox.{INTERNAL_DOMAIN}:/ /mnt/nfs nfs4 proto=tcp,port=2049,sec=krb5,lazytime,auto,_netdev,x-gvfs-show 0 0 (1)
1 | Replace silverbox.{INTERNAL_DOMAIN} with the actual server FQDN. |
The x-gvfs-show option will make the NFS share appear in the Nautilus file manager panel automatically.
If you are not using Nautilus, you can remove this option.
|
If you prefer the NFS share to be mounted only on first access, change the auto parameter to noauto
and add the x-systemd.automount parameter, as in the example below (for additional options, refer to the systemd.mount documentation).
|
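For example, the on-demand variant of the fstab entry (derived from the entry above) could look like this:

silverbox.{INTERNAL_DOMAIN}:/ /mnt/nfs nfs4 proto=tcp,port=2049,sec=krb5,lazytime,noauto,x-systemd.automount,_netdev,x-gvfs-show 0 0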
With this change in the fstab
, the share will be mounted on boot using credentials from the /etc/krb5.keytab
file.
However, since this keytab only contains the machine key, it won't allow any access to the content of the share.
The next step is to export your user’s Kerberos key into a separate keytab file,
and create a user Systemd service which will do kinit
for your user automatically on login.
Since this kinit
will use key from the user’s keytab file, no interaction (such as entering password)
will be required.
Another (and, perhaps, a more convenient) way to automatically do Kerberos authentication on login
is to use pam-krb5 PAM module.
If your Kerberos principal has the same password as your local user, you can install pam-krb5
and add the following line (after the line for regular auth) to the appropriate configuration file
under /etc/pam.d (depends on the distribution): auth optional pam_krb5.so minimum_uid=1000 use_first_pass .
|
To export your principal’s key, run the following commands on the server:
sudo kadmin.local
ktadd -k /root/krb5.keytab {SERVER_USER}
Move the /root/krb5.keytab
file from the server to the client PC, for example under your user's home .config
directory: ~/.config/krb5.keytab
.
It is important to have either full disk encryption or at least user’s home directory encryption, since the Kerberos principal key will be stored on disk. |
Change permission on this file so that only your user can read it:
chown {SERVER_USER}:{SERVER_USER} ~/.config/krb5.keytab
chmod 400 ~/.config/krb5.keytab
Create a directory (on the client PC) for user Systemd services, if it doesn't exist yet:
mkdir -p ~/.local/share/systemd/user/
Inside this directory, create nfs-kinit.service
file with the following content:
[Unit]
Description=Perform kinit automatically

[Service]
Type=oneshot
ExecStart=/bin/bash -c "kinit -r 7d -k -t ~/.config/krb5.keytab $USER" (1)

[Install]
WantedBy=default.target
1 | Replace 7d with the value of the max_renewable_life option that you set in the kdc.conf file on the server. |
Enable this service, so it will start automatically on login:
systemctl --user daemon-reload
systemctl --user enable nfs-kinit.service
Reboot the system and verify that you can access the content of the NFS share.
Since the service is only started on login, if the user session lasts longer than max_renewable_life ,
the Kerberos ticket will eventually expire.
If you are planning on having long user sessions, you can either increase max_renewable_life or make this service
run periodically to obtain a new ticket before the old one expires.
|
If the user's home directory is encrypted, the Systemd service won't start on login.
It appears that user Systemd services are scanned before the home directory is mounted, and thus Systemd won't see
the service.
The only workaround I found for this is to add systemctl --user daemon-reload and
systemctl --user start nfs-kinit.service commands to the script that runs after user login
(it will depend on your system, but in Gnome it can be set with “Startup Applications”).
|
10. Transmission
This section describes how to set up the Transmission [10] BitTorrent client on the server. Similarly to how the SOCKS5 proxy server was deployed (as described in the SOCKS5 Over VPN section), Transmission will be running inside a Docker container together with OpenVPN, such that all Transmission traffic will be tunneled through the VPN. Transmission will be managed using a web user interface, which will be exposed outside of the container.
This section assumes that SOCKS5 Over VPN and NFS Server sections were completed.
10.1. Image
This section describes how to prepare and build the Docker image for Transmission.
10.1.1. Preparing OpenVPN Profiles
Create a transmission
directory under the /root/silverbox/vpn
directory:
sudo mkdir /root/silverbox/vpn/transmission sudo chmod 700 /root/silverbox/vpn/transmission
And copy all OpenVPN profile files (*.ovpn
) that you want to use for the Transmission client into it
(you can have multiple files as exit point rotation will be configured).
This way you can use different OpenVPN profiles (and thus exit points) for the SOCKS5 proxy and for Transmission.
At any time, if you want to use only a specific profile, create a symlink named profile pointing to the desired
profile. For example: ln -s SOME_PROFILE.ovpn profile .
Make sure the link is relative and not absolute: otherwise it won’t work inside the container.
|
10.1.2. Preparing Directory Structure
In this configuration, Transmission will use directories on the NFS share to read torrent files and to store downloaded files. These directories need to be created in advance.
As your user, create directories for torrent files and downloads in the NFS directory and set proper permissions/ownership:
mkdir -p /srv/nfs/torrents/torrentfiles (1)
mkdir -p /srv/nfs/torrents/downloads (2)
chown {SERVER_USER}:{NFS_SHARE_GROUP} -R /srv/nfs/torrents
chmod 770 -R /srv/nfs/torrents
1 | This directory will be automatically watched for *.torrent files,
and once a new file appears, the download will start automatically (the torrent file will be removed once added to the queue). |
2 | Finished and in-progress downloads will be placed into this directory. |
Of course, you can use a different directory structure and names; this is just an example. |
10.1.3. Preparing Image Files
Create a directory for the Transmission container:
sudo mkdir /root/silverbox/containers/transmission sudo chmod 700 /root/silverbox/containers/transmission
Inside the transmission
directory, create a file named docker-compose.yaml
with the following content:
version: '3.8'
networks:
default:
name: vpn
external: true
services:
transmission:
container_name: transmission
init: true
build:
context: /root/silverbox/containers/transmission
args:
version: '11.5-slim' (1)
restart: on-failure:15
logging:
driver: json-file
options:
max-size: 10mb
ports:
- 127.0.0.1:{TRANSMISSION_UI_PORT}:{TRANSMISSION_UI_PORT}/tcp (2)
networks:
default:
ipv4_address: {TRANSMISSION_ADDR} (3)
devices:
- /dev/net/tun
cap_add:
- NET_ADMIN
volumes:
- /srv/nfs/torrents:/data
- /root/silverbox/vpn/transmission:/vpn-profiles
- /root/silverbox/vpn/auth:/vpn-credentials
1 | Replace 11.5-slim with the actual latest debian image version (can be checked at the Docker Hub).
Note that this is not the Transmission version. |
2 | Replace {TRANSMISSION_UI_PORT} with the actual port number. |
3 | Replace {TRANSMISSION_ADDR} with the actual address. |
Next, create a file named Dockerfile
with the following content:
ARG version=latest
FROM debian:$version
ARG UID={USER_ID} (1)
ARG GID={GROUP_ID} (2)
RUN apt-get update && \
apt-get install -y --no-install-recommends bash iputils-ping iptables openvpn transmission-daemon && \
addgroup --system vpn && \
addgroup --gid ${GID} transmission && \
adduser transmission --uid ${UID} --gid ${GID} --disabled-login --no-create-home --gecos "" --shell /usr/sbin/nologin && \
mkdir /config && chown transmission:transmission /config
COPY docker-entrypoint.sh /usr/local/bin/
COPY start-transmission.sh /usr/local/bin/
COPY --chown=transmission:transmission settings.json /config/
HEALTHCHECK --interval=120s --timeout=20s --start-period=120s CMD ping 1.1.1.1 -c 1 (3)
VOLUME ["/vpn-profiles", "/vpn-credentials", "/data"]
EXPOSE {TRANSMISSION_UI_PORT}/tcp (4)
ENTRYPOINT [ "/usr/local/bin/docker-entrypoint.sh" ]
1 | Replace {USER_ID} with the UID of your {SERVER_USER} user on the server. |
2 | Replace {GROUP_ID} with the GID of your {NFS_SHARE_GROUP} group on the server. |
3 | Feel free to customize health check command. |
4 | Replace {TRANSMISSION_UI_PORT} with some port number of your choice (e.g. 12345).
The container will expose this port to access the Transmission web UI. |
Create the start-transmission.sh
file with the following content:
#!/bin/bash
echo "-- Preparing to start transmission-daemon..."
if [ -f /config/transmission.log ]; then
echo "-- Cleaning log file..."
tail -c 10000000 /config/transmission.log > /config/transmission.log.trunc
mv /config/transmission.log.trunc /config/transmission.log
chown transmission:transmission /config/transmission.log
fi
TUN_IP=$(ip address show dev tun0 | awk '/inet/{ split($2, a, "/"); print a[1] }')
if [ -z "$TUN_IP" ]; then
echo "-- Failed to get tun0 IP address"
exit 1
else
echo "-- tun0 address: [$TUN_IP]. Updating config file."
sed -i "s/\(\"bind-address-ipv4\":\)\s\+\".\+\",/\1 \"$TUN_IP\",/" /config/settings.json
echo "-- Starting transmission-daemon..."
su transmission -s /bin/bash -c "transmission-daemon -g /config --logfile /config/transmission.log"
if [ $? -ne 0 ]; then
echo "-- Failed to start transmission"
exit 1
else
echo "-- Transmission started"
fi
fi
And mark it as executable:
sudo chmod a+x start-transmission.sh
Create the docker-entrypoint.sh
file with the following content:
#!/usr/bin/env bash
function configure_iptables()
{
set -e
local config_file="$1"
local host=$(awk '/^remote / {print $2}' "$config_file")
local port=$(awk '/^remote / && $NF ~ /^[0-9]+$/ {print $NF}' "$config_file")
if [ -z "$port" ]; then
echo "-- No port number specified in the VPN profile file"
exit 1
else
echo "-- Setting up firewall rules for VPN server $host on port $port"
fi
iptables --flush
iptables --delete-chain
iptables --policy INPUT DROP
iptables --policy OUTPUT DROP
iptables --policy FORWARD DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i tun0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp -s $host --sport $port -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp -s {VPN_NETWORK_GW} --dport {TRANSMISSION_UI_PORT} -j ACCEPT (1)
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -o tun0 -j ACCEPT
iptables -A OUTPUT -o eth0 -d {VPN_NETWORK_GW} -p tcp --sport {TRANSMISSION_UI_PORT} -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT (2)
iptables -A OUTPUT -o eth0 -p tcp -d $host --dport $port -m owner --gid-owner vpn -j ACCEPT
set +e
}
if [[ $# -ge 1 ]]; then
exec "$@"
else
if [ -f /vpn-profiles/profile ]; then
echo "-- Profile file found: only it will be used"
PROFILE_FILE="/vpn-profiles/profile"
else
echo "-- Profile file not found: random profile file will be picked"
PROFILE_FILE="$(ls -1 /vpn-profiles/*.ovpn | shuf -n 1)"
echo "-- Selected profile file: $PROFILE_FILE"
fi
configure_iptables "$PROFILE_FILE"
exec sg vpn -c "openvpn --config $PROFILE_FILE --verb 1 --auth-user-pass /vpn-credentials/credentials --auth-nocache --script-security 2 --route-up /usr/local/bin/start-transmission.sh"
fi
1 | Replace {VPN_NETWORK_GW} with the default gateway address of your VPN Docker network {DOCKER_VPN_NETWORK} .
For example, if your {DOCKER_VPN_NETWORK} is 172.18.0.0/24 the gateway address would be 172.18.0.1 .
Also, replace {TRANSMISSION_UI_PORT} with the actual port number. |
2 | Same as above. |
Mark this script as executable:
sudo chmod a+x docker-entrypoint.sh
Finally, create the settings.json
file with the following content:
{
"rpc-enabled": true,
"rpc-bind-address": "{TRANSMISSION_ADDR}", (1)
"rpc-port": {TRANSMISSION_UI_PORT}, (2)
"rpc-whitelist": "{VPN_NETWORK_GW}", (3)
"rpc-whitelist-enabled": true,
"alt-speed-enabled": false,
"speed-limit-down-enabled": true,
"speed-limit-down": 2500,
"speed-limit-up-enabled": true,
"speed-limit-up": 200,
"blocklist-enabled": false,
"download-dir": "/data/downloads",
"incomplete-dir-enabled": false,
"rename-partial-files": true,
"start-added-torrents": true,
"trash-original-torrent-files": true,
"watch-dir-enabled": true,
"watch-dir": "/data/torrentfiles",
"umask": 7,
"cache-size-mb": 16,
"prefetch-enabled": true,
"encryption": 1,
"message-level": 2,
"dht-enabled": true,
"lpd-enabled": false,
"pex-enabled": true,
"utp-enabled": false,
"bind-address-ipv4": "127.0.0.1",
"peer-limit-global": 100,
"peer-limit-per-torrent": 40,
"peer-congestion-algorithm": "",
"peer-port": 51413,
"peer-port-random-on-start": false,
"port-forwarding-enabled": false,
"download-queue-enabled": true,
"download-queue-size": 5,
"queue-stalled-enabled": true,
"queue-stalled-minutes": 30,
"seed-queue-enabled": true,
"seed-queue-size": 3,
"alt-speed-time-enabled": false,
"idle-seeding-limit-enabled": false,
"ratio-limit-enabled": true,
"ratio-limit": 1.2
}
1 | Replace {TRANSMISSION_ADDR} with some address from the {DOCKER_VPN_NETWORK} that Transmission container
will be running on. |
2 | Replace {TRANSMISSION_UI_PORT} with the actual port number.
Make sure this is a number and not a string in quotes. |
3 | Replace {VPN_NETWORK_GW} with the actual address. |
Feel free to adjust the values in the settings.json file according to your needs.
|
10.2. Container
This section describes how to run the container from the Transmission image.
10.2.1. Running Container
To build the image and run the container do:
sudo docker compose -f /root/silverbox/containers/transmission/docker-compose.yaml up -d
10.2.2. Automatic Container Startup
To start container automatically on boot create the /etc/systemd/system/transmission-start.service
file
with the following content:
[Unit]
Description=Start Transmission container
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker start transmission

[Install]
WantedBy=multi-user.target
Enable the service, so that it will be started on system boot:
sudo systemctl daemon-reload sudo systemctl enable transmission-start.service
10.2.3. Automatic VPN Server Rotation
VPN server rotation is configured in a similar way to the SOCKS5 proxy VPN server rotation.
Create the /etc/systemd/system/transmission-restart.service
file with the following content:
[Unit]
Description=Restart Transmission container
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker restart transmission
Create the /etc/systemd/system/transmission-restart.timer
file with the following content:
[Unit]
Description=Restart Transmission container

[Timer]
OnCalendar=Fri *-*-* 01:00:00 (1)
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target
1 | In this configuration, the timer is activated every Friday at 1am. Feel free to adjust this. |
Enable and start the timer:
sudo systemctl daemon-reload
sudo systemctl enable transmission-restart.timer
sudo systemctl start transmission-restart.timer
You can do systemctl list-timers
to verify that the timer appears in the output
and to check the time till next activation.
10.3. Monitoring
The status of the Transmission container is monitored by Monit.
Create the /etc/monit/conf.d/60-docker-transmission
file with the following content:
check program transmission with path "/usr/local/etc/monit/scripts/container_status.sh transmission .State.Health.Status healthy"
    if status != 0 for 10 cycles then alert
Restart Monit service:
sudo systemctl restart monit
Check Monit web interface and make sure that Transmission monitoring is working.
10.4. User Interface
Transmission has a web-based user interface that is exposed outside of the Docker container. However, it is exposed on localhost only, so it is not accessible from the outside.
To access the Transmission web UI securely, an SSH tunnel first needs to be created (similarly to how the Monit UI is accessed). For example, from the client PC establish an SSH tunnel:
ssh {SERVER_USER}@{SERVER_IP_ADDR} -N -L 127.0.0.1:{LOCAL_PORT}:127.0.0.1:{TRANSMISSION_UI_PORT}
Here {LOCAL_PORT}
is the port on which SSH will be listening on the client PC.
The web interface can now be accessed on the client PC at http://127.0.0.1:{LOCAL_PORT} .
To create this tunnel in a more convenient way, you can add the following entry to your SSH config file ~/.ssh/config
:
host silverbox-transmission-ui-tunnel
    HostName {SERVER_IP_ADDR} (1)
    IdentityFile ~/.ssh/silverbox-key
    LocalForward 127.0.0.1:{LOCAL_PORT} 127.0.0.1:{TRANSMISSION_UI_PORT}
1 | IP can be replaced with the server FQDN. |
Now the tunnel can be established simply with:
ssh -N silverbox-transmission-ui-tunnel
A more convenient way of accessing the Transmission web interface and other internal services is described in the Reverse Proxy section. |
11. Nextcloud
This section describes how to install and configure Nextcloud [11] on the server, in such a way that it can be accessed from the internet securely over HTTPS.
This section depends on the following sections: Docker, Domain Name, NFS Server.
11.1. Overview
Nextcloud will be deployed using Docker (more specifically, Docker Compose). Below is a diagram that shows a high-level overview of the Nextcloud deployment:
Nextcloud Docker Network
---------------------------------------------------------------------------
|          --------------            -------------            ------------ |
|  HTTPS   |   Apache    |  9000/tcp |  Nextcloud |  5432/tcp |            ||
| -------->| Web Server  |---------->|    PHP     |---------->| PostgreSQL ||
|          |  (httpd)    |           |    FPM     |           |            ||
|          --------------            -------------            ------------ |
---------------------------------------------------------------------------

Volume mounts:

  Apache Web Server:  {/usr/local/apache2/htdocs}  ->  /srv/nextcloud/html
                      {/certs}                     ->  /etc/letsencrypt
  Nextcloud PHP FPM:  {/var/www/html}              ->  /srv/nextcloud/html
                      {/data}                      ->  /srv/nextcloud/data
                      {/nfs/*}                     ->  /srv/nfs/*
  PostgreSQL:         {/var/lib/postgresql/data}   ->  /srv/nextcloud/db
In the diagram above, a path inside curly braces indicates a path as it is seen inside the Docker container, while a path without curly braces indicates the real path on the host file system. |
As the diagram shows, the only external entry point to the Nextcloud system is over HTTPS via the container with Apache Web Server.
All Nextcloud services (web interface, WebDAV, CalDAV, CardDAV, Sync app) work over HTTPS. |
HTTP requests are handled in the following way:
-
If this is a request to a PHP file: the request is proxied to the Nextcloud PHP FPM container using the mod_proxy_fcgi module.
-
Otherwise: the request is served directly by the Apache Web Server (statically).
All containers are stateless (i.e. don’t contain any important data), since all user data is stored on the host file system and mounted inside containers. This way containers can be safely deleted and re-deployed, which makes upgrades very easy.
Having three separate containers (instead of just one big container) allows for stopping, restarting and upgrading containers independently, which is useful in many cases. It also allows every container to have its own logs and logs configuration. But more importantly, compromising or DoS-ing one container doesn’t compromise the whole system.
11.2. Certificate
This section describes how to obtain and maintain a publicly trusted SSL/TLS certificate that will be used to set up HTTPS for Nextcloud.
The certificate will be issued by Let’s Encrypt [12] and will be obtained and renewed using Certbot [13].
The certificate needs to be obtained before the Nextcloud installation, since Nextcloud will only be reachable via HTTPS (not plain HTTP) and the web server won’t start without a certificate.
The ACME DNS challenge is used for domain validation (needed to issue the initial certificate and for subsequent renewals). One major advantage the DNS challenge has over the more widely used HTTP challenge is that it doesn’t require your web server to use the standard ports 80 and 443. This allows hosting Nextcloud on a non-standard port, which can be advantageous for two reasons: a non-standard port dramatically reduces the amount of unwanted requests from bots, and it may be the only option if your ISP blocks the standard ports 80 and 443.
11.2.1. Installing Certbot
Certbot can be installed from the PPA maintained by the EFF team (the installation is described in more detail in the Certbot documentation [3]):
sudo add-apt-repository ppa:certbot/certbot
sudo apt install certbot
11.2.2. Preparing Domain Name
While it’s possible to get a certificate for the {SERVER_SUBDOMAIN} domain and use it to access Nextcloud, it may be better to use a dedicated subdomain, for example nextcloud.silverbox.example.com.
This offers some extra flexibility and lets you take advantage of the same-origin policy.
In this document, the subdomain for Nextcloud is referred to as {NEXTCLOUD_DOMAIN}.
The first step is to create a CNAME DNS record to point {NEXTCLOUD_DOMAIN} to {SERVER_SUBDOMAIN}.
This way, you won’t have to dynamically update a host A record for {NEXTCLOUD_DOMAIN} as well, as it is already updated for {SERVER_SUBDOMAIN}.
The second step is to create a TXT DNS record for _acme-challenge.{NEXTCLOUD_DOMAIN} with any value
(the value is just a placeholder and will be replaced with the actual DNS ACME challenge during domain validation).
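Optionally, once the record has propagated, you can check that it is visible with a quick query (this assumes the dig tool is available; it is installed later in this section as part of the dnsutils package):
dig _acme-challenge.{NEXTCLOUD_DOMAIN} TXT +short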
11.2.3. Preparing Certbot Scripts
To confirm ownership of the domain (for certificate generation and renewal) a DNS challenge will be used. The reason is that the HTTP(S) challenge is way too restrictive: in particular, it forces using the standard ports 80 and 443, which is not always desirable. The DNS challenge, however, only requires you to be able to create a TXT record with a given content, without enforcing any specific port numbers.
At the moment of writing, Certbot did not support the Namesilo API, which is why the DNS challenge is done using manual hooks.
Create a directory where all Certbot related scripts will reside:
sudo mkdir /root/silverbox/certbot
sudo chmod 700 /root/silverbox/certbot
Inside it, create the dns-challenge-auth.sh
file with the following content:
#!/bin/bash
ACME_SUBDOMAIN="_acme-challenge"
DOMAIN=`awk -F '.' '{print $(NF-1)"."$NF}' <<< "$CERTBOT_DOMAIN"` || exit 1
NS=`dig "$DOMAIN" NS +short | head -1` || exit 2
echo "Performing DNS Challenge for domain: $CERTBOT_DOMAIN in $DOMAIN with authoritative NS $NS"
docker run --rm --network common --cpus="1" -v /root/silverbox/namesilo:/secrets dns-updater -k /secrets/api-key -a update-txt -d "$ACME_SUBDOMAIN.$CERTBOT_DOMAIN" -t "$CERTBOT_VALIDATION" || exit 3
for i in {1..20}; do (1)
echo "Checking if DNS updated, attempt $i..."
TXT=`dig "@$NS" "$ACME_SUBDOMAIN.$CERTBOT_DOMAIN" TXT +short | sed 's/"//g'`
if [ "$TXT" == "$CERTBOT_VALIDATION" ]; then
echo "Record updated. Waiting extra minute before returning."
sleep 60 (2)
exit 0
else
echo "Record still contains '$TXT'. Waiting for 1 minute..."
sleep 60 (3)
fi
done
exit 4
1 | 20 is the number of attempts to check if the TXT record has been updated. |
2 | Without this extra wait, Certbot sometimes won’t pick up the updated TXT value. |
3 | How long to wait between attempts. |
Next, create the dns-challenge-cleanup.sh
file with the following content:
#!/bin/bash
ACME_SUBDOMAIN="_acme-challenge"
DOMAIN=`awk -F '.' '{print $(NF-1)"."$NF}' <<< "$CERTBOT_DOMAIN"` || exit 1
NS=`dig "$DOMAIN" NS +short | head -1` || exit 2
echo "Performing DNS Challenge Cleanup for domain: $CERTBOT_DOMAIN in $DOMAIN with authoritative NS $NS"
docker run --rm --network common --cpus="1" -v /root/silverbox/namesilo:/secrets dns-updater -k /secrets/api-key -a update-txt -d "$ACME_SUBDOMAIN.$CERTBOT_DOMAIN" -t "none" || exit 3 (1)
echo "Record cleaned up"
1 | In this example, the none value is used as the new TXT record value (since an empty value is not allowed). |
Assign the following permissions to these files:
sudo chmod 770 /root/silverbox/certbot/dns-challenge-auth.sh
sudo chmod 770 /root/silverbox/certbot/dns-challenge-cleanup.sh
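If you want to smoke-test the hooks outside of Certbot, you can invoke them manually with the environment variables that Certbot would normally set (note that this temporarily changes the real TXT record to a test value; the cleanup hook then resets it to none):
sudo env CERTBOT_DOMAIN={NEXTCLOUD_DOMAIN} CERTBOT_VALIDATION=test-value /root/silverbox/certbot/dns-challenge-auth.sh
sudo env CERTBOT_DOMAIN={NEXTCLOUD_DOMAIN} /root/silverbox/certbot/dns-challenge-cleanup.sh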
The next step is to create a renewal hook that will restart the Apache web server once a new certificate is obtained, so that it starts using the new certificate.
To create directories for the renewal hooks run:
sudo certbot certificates
Create the /etc/letsencrypt/renewal-hooks/post/nextcloud-web-restart.sh
file with the following content:
#!/bin/bash
if [ "$CERTBOT_DOMAIN" = "{NEXTCLOUD_DOMAIN}" ]; then (1)
echo "Restarting Nextcloud web server"
docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml restart nextcloud-web
else
echo "Skipping Nextcloud web server restart - different domain: $CERTBOT_DOMAIN"
fi
1 | Replace {NEXTCLOUD_DOMAIN} with the actual domain name. |
This script will be executed automatically by Certbot during the renewal of a certificate, and if the renewal is for the Nextcloud certificate it will restart Nextcloud’s web server to use the new certificate.
And assign the following permissions to this file:
sudo chmod 770 /etc/letsencrypt/renewal-hooks/post/nextcloud-web-restart.sh
Finally, install the dnsutils package, which contains the dig tool:
sudo apt install dnsutils
11.2.4. Test Certificate
To test that domain validation and certificate renewal work, it is possible to use the Let’s Encrypt test server to generate a test (not trusted) certificate.
To get test certificate run:
sudo certbot certonly --test-cert \
  --agree-tos \
  -m {YOUR_EMAIL_ADDR} \ (1)
  --manual \
  --preferred-challenges=dns \
  --manual-auth-hook /root/silverbox/certbot/dns-challenge-auth.sh \
  --manual-cleanup-hook /root/silverbox/certbot/dns-challenge-cleanup.sh \
  --must-staple \
  -d {NEXTCLOUD_DOMAIN} (2)
1 | Replace {YOUR_EMAIL_ADDR} with the email address you wish to use for certificate generation. |
2 | Replace {NEXTCLOUD_DOMAIN} with the actual domain name. |
This may take a while. |
To view information about the generated certificate:
sudo certbot certificates
To test certificate renewal:
sudo certbot renew --test-cert --dry-run --cert-name {NEXTCLOUD_DOMAIN}
To revoke and delete the test certificate:
sudo certbot revoke --test-cert --cert-name {NEXTCLOUD_DOMAIN}
11.2.5. Getting Real Certificate
To get the real certificate run:
sudo certbot certonly \
  --agree-tos \
  -m {YOUR_EMAIL_ADDR} \
  --manual \
  --preferred-challenges=dns \
  --manual-auth-hook /root/silverbox/certbot/dns-challenge-auth.sh \
  --manual-cleanup-hook /root/silverbox/certbot/dns-challenge-cleanup.sh \
  --must-staple \
  -d {NEXTCLOUD_DOMAIN}
11.2.6. Automatic Certificate Renewal
The certificate should be automatically renewed by Certbot’s systemd service. The service is triggered automatically by the corresponding timer. To check the status of the timer:
systemctl status certbot.timer
11.3. Installation
This section describes how to install and run Nextcloud.
11.3.1. Preparing Directory Structure
The very first step is to create directories that will be mapped inside Nextcloud Docker containers:
sudo mkdir -p /srv/nextcloud/db
sudo mkdir /srv/nextcloud/html
sudo mkdir /srv/nextcloud/data
sudo chown 33 /srv/nextcloud/data (1)
sudo chmod 750 -R /srv/nextcloud
1 | 33 is the UID of the www-data user (inside Apache httpd and Nextcloud FPM containers). |
All the Nextcloud data (including database, static files and user data) will be stored under the /srv/nextcloud directory in the following way:
-
db: stores the PostgreSQL database files.
-
html: stores the static HTML/PHP files.
-
data: stores the users’ data.
It is important to keep these directories owned by root and with restrictive permissions, since the content inside them will be owned by the www-data (UID 33) user from the Docker containers.
|
11.3.2. Preparing Images
Create a directory for Nextcloud containers files:
sudo mkdir /root/silverbox/containers/nextcloud
sudo chmod 700 /root/silverbox/containers/nextcloud
Inside it, create the docker-compose.yml
file with the following content:
version: '3.8'
networks:
default:
name: nextcloud
driver: bridge
ipam:
config:
- subnet: {DOCKER_NEXTCLOUD_NETWORK} (1)
services:
nextcloud-db:
container_name: nextcloud-db
image: postgres:14.5 (2)
restart: on-failure:5
shm_size: 256mb
logging:
driver: json-file
options:
max-size: 10mb
max-file: '3'
volumes:
- /srv/nextcloud/db:/var/lib/postgresql/data
environment:
- POSTGRES_USER=nextcloud
- POSTGRES_PASSWORD={POSTGRES_PASSWORD} (3)
- POSTGRES_DB=nextcloud
- POSTGRES_INITDB_ARGS="--data-checksums"
nextcloud-fpm:
container_name: nextcloud-fpm
build:
context: ./nextcloud-fpm
args:
version: '24.0.6-fpm' (4)
restart: on-failure:5
logging:
driver: json-file
options:
max-size: 10mb
max-file: '3'
depends_on:
- nextcloud-db
volumes:
- /srv/nextcloud/html:/var/www/html
- /srv/nextcloud/data:/data
environment:
- POSTGRES_HOST=nextcloud-db
- POSTGRES_DB=nextcloud
- POSTGRES_USER=nextcloud
- POSTGRES_PASSWORD={POSTGRES_PASSWORD} (5)
- NEXTCLOUD_DATA_DIR=/data
- NEXTCLOUD_UPDATE=1
- NEXTCLOUD_TRUSTED_DOMAINS='{NEXTCLOUD_DOMAIN}:{NEXTCLOUD_PORT}' (6)
nextcloud-web:
container_name: nextcloud-web
build:
context: ./httpd
args:
version: '2.4.54' (7)
restart: on-failure:5
logging:
driver: json-file
options:
max-size: 10mb
max-file: '3'
depends_on:
- nextcloud-fpm
volumes:
- /srv/nextcloud/html:/usr/local/apache2/htdocs
- /etc/letsencrypt/live/{NEXTCLOUD_DOMAIN}:/certs/live/{NEXTCLOUD_DOMAIN} (8)
- /etc/letsencrypt/archive/{NEXTCLOUD_DOMAIN}:/certs/archive/{NEXTCLOUD_DOMAIN} (9)
ports:
- "{SERVER_IP_ADDR}:{NEXTCLOUD_PORT}:443" (10)
1 | Replace {DOCKER_NEXTCLOUD_NETWORK} with the actual subnet you want to use for the Nextcloud. |
2 | Replace 14.5 with the actual latest postgres (Debian based) image version (can be checked at the Docker Hub).
You can double check if this version is supported by the Nextcloud in the Nextcloud documentation. |
3 | Replace {POSTGRES_PASSWORD} with some random password. |
4 | Replace 24.0.6-fpm with the actual latest nextcloud-fpm (Debian based) image version (can be checked at the Docker Hub). |
5 | Replace {POSTGRES_PASSWORD} with the same password as above. |
6 | Replace {NEXTCLOUD_DOMAIN} with your Nextcloud domain name and
{NEXTCLOUD_PORT} with the port number you chose to use for the Nextcloud
(the Nextcloud web server will listen on this port). |
7 | Replace 2.4.54 with the actual latest httpd (Debian based) image version (can be checked at the Docker Hub). |
8 | Replace {NEXTCLOUD_DOMAIN} with the actual domain name for the Nextcloud. |
9 | Same as above. |
10 | Replace {SERVER_IP_ADDR} and {NEXTCLOUD_PORT} with the actual values. |
HTTPD
Create a directory for the customized Apache HTTPD image:
sudo mkdir /root/silverbox/containers/nextcloud/httpd
sudo chmod 700 /root/silverbox/containers/nextcloud/httpd
Inside it, create the Dockerfile
file with the following content:
ARG version=latest
FROM httpd:$version
ARG WWW_DATA_UID=33 (1)
ARG WWW_DATA_GID=33
RUN [ "$(id -u www-data)" -eq "$WWW_DATA_UID" ] && [ "$(id -g www-data)" -eq "$WWW_DATA_GID" ] || exit 1 (2)
COPY httpd.conf /usr/local/apache2/conf/httpd.conf
1 | These are the current UID and GID of the www-data user in the standard Debian-based HTTPD image. |
2 | Extra precaution to ensure that www-data UID/GID are what we expect (in case they change in newer images). |
Next, create the httpd.conf
file with the following content:
ServerName {NEXTCLOUD_DOMAIN}:{NEXTCLOUD_PORT} (1)
ServerRoot "/usr/local/apache2"
Listen 443
LoadModule mpm_event_module modules/mod_mpm_event.so
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule mime_module modules/mod_mime.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule env_module modules/mod_env.so
LoadModule headers_module modules/mod_headers.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
LoadModule unixd_module modules/mod_unixd.so
LoadModule dir_module modules/mod_dir.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule status_module modules/mod_status.so
LoadModule http2_module modules/mod_http2.so
User www-data
Group www-data
Protocols h2 http/1.1
SSLEngine On
SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLHonorCipherOrder On
SSLProtocol -all +TLSv1.3 +TLSv1.2
SSLUseStapling on
SSLStaplingCache "shmcb:/usr/local/apache2/logs/ssl_stapling(128000)"
SSLSessionTickets Off
SSLSessionCache "shmcb:/usr/local/apache2/logs/ssl_scache(512000)"
SSLSessionCacheTimeout 300
SSLCertificateFile /certs/live/{NEXTCLOUD_DOMAIN}/fullchain.pem (2)
SSLCertificateKeyFile /certs/live/{NEXTCLOUD_DOMAIN}/privkey.pem (3)
<Directory />
AllowOverride none
Require all denied
</Directory>
DocumentRoot "/usr/local/apache2/htdocs"
DirectoryIndex index.html
<Directory "/usr/local/apache2/htdocs">
Options FollowSymLinks
AllowOverride All
Require all granted
<FilesMatch \.php$>
ProxyFCGISetEnvIf "true" SCRIPT_FILENAME "/var/www/html%{reqenv:SCRIPT_NAME}"
SetHandler proxy:fcgi://nextcloud-fpm:9000
</FilesMatch>
Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains; preload"
</Directory>
<Location "/apache-server-status.html">
SetHandler server-status
Require ip {SERVER_IP_ADDR} (4)
</Location>
<Files ".ht*">
Require all denied
</Files>
ProxyTimeout 3600
<Proxy "fcgi://nextcloud-fpm/">
</Proxy>
RewriteEngine on
RewriteCond %{QUERY_STRING} ^monit$ (5)
RewriteCond %{REQUEST_METHOD} HEAD
RewriteCond %{REQUEST_URI} ^/$
RewriteRule .* - [env=dont_log]
SetEnvIf Request_URI "^/apache-server-status.html$" dont_log (6)
ErrorLog /proc/self/fd/2
LogLevel warn
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog /proc/self/fd/1 common env=!dont_log
TypesConfig conf/mime.types
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
Include conf/extra/httpd-mpm.conf
RewriteEngine On
RewriteCond %{REQUEST_METHOD} ^TRACK
RewriteRule .* - [F]
RequestHeader unset Proxy early
ServerTokens Prod
TraceEnable off
1 | Replace {NEXTCLOUD_DOMAIN} and {NEXTCLOUD_PORT} with the actual values. |
2 | Replace {NEXTCLOUD_DOMAIN} with the actual value. |
3 | Same as above. |
4 | Replace {SERVER_IP_ADDR} with the actual value. |
5 | This rewrite block matches HEAD requests to the root with a query string equal to "monit" and sets the environment variable
dont_log that is later used to filter such requests out of the web server logs.
This is useful to filter out the requests made by Monit, as they would flood the logs otherwise.
For security reasons, you can replace the "monit" string in the first RewriteCond with a random alphanumeric string,
which you will also put in the Monit configuration for Nextcloud monitoring. |
6 | This rule is used to filter out requests to the Apache server status page from the logs, as it is only used by Monit. |
If you decide to customize this config file and add some extra modules, make sure you are not using modules that don’t work well with Nextcloud. More info here: https://docs.nextcloud.com/server/stable/admin_manual/issues/general_troubleshooting.html#web-server-and-php-modules. |
Nextcloud PHP FPM
Create a directory for the customized Nextcloud PHP FPM image:
sudo mkdir /root/silverbox/containers/nextcloud/nextcloud-fpm
sudo chmod 700 /root/silverbox/containers/nextcloud/nextcloud-fpm
Inside it, create the Dockerfile
file with the following content:
ARG version=fpm
FROM nextcloud:$version
ARG NFSSHARE_GID={GID} (1)
ARG WWW_DATA_UID=33 (2)
ARG WWW_DATA_GID=33
ARG PHP_FPM_CONF=/usr/local/etc/php-fpm.d/www.conf
RUN [ "$(id -u www-data)" -eq "$WWW_DATA_UID" ] && [ "$(id -g www-data)" -eq "$WWW_DATA_GID" ] || exit 1 (3)
RUN apt-get update && \
apt-get install -y --no-install-recommends supervisor libmagickcore-6.q16-6-extra && \
mkdir /var/log/supervisord /var/run/supervisord && \
sed -i 's/-l\s\+[0-9]\+/-l 5/' /cron.sh && \ (4)
sed -i 's/^\(pm.max_children\s*=\)\s*[0-9]\+/\1 20/' ${PHP_FPM_CONF} && \ (5)
sed -i 's/^\(pm.start_servers\s*=\)\s*[0-9]\+/\1 5/' ${PHP_FPM_CONF} && \
sed -i 's/^\(pm.min_spare_servers\s*=\)\s*[0-9]\+/\1 4/' ${PHP_FPM_CONF} && \
sed -i 's/^\(pm.max_spare_servers\s*=\)\s*[0-9]\+/\1 10/' ${PHP_FPM_CONF} && \
addgroup --gid ${NFSSHARE_GID} nfsshare && \
usermod www-data -aG nfsshare
COPY supervisord.conf /etc/supervisor/supervisord.conf
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
1 | Replace the {GID} with the GID of your {NFS_SHARE_GROUP} group.
This is so that Nextcloud can access files in the NFS directory owned by the {NFS_SHARE_GROUP} . |
2 | These are the current UID and GID of the www-data user in the standard Debian-based image. |
3 | Extra precaution to ensure that www-data UID/GID are what we expect (in case they change in newer images). |
4 | This reduces the verbosity of the cron logs to level 5. Adjust if necessary. |
5 | These lines update the PHP-FPM configuration; feel free to adjust the values according to your needs. More information can be found in the Nextcloud server tuning documentation: https://docs.nextcloud.com/server/stable/admin_manual/installation/server_tuning.html#tune-php-fpm. |
Create the supervisord.conf
file with the following content:
[supervisord]
nodaemon=true
logfile=/var/log/supervisord/supervisord.log
pidfile=/var/run/supervisord/supervisord.pid
childlogdir=/var/log/supervisord/
logfile_maxbytes=10MB
logfile_backups=0
loglevel=info
user=root
[program:php-fpm]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
command=php-fpm
[program:cron]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
command=/cron.sh
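At this point all the image files are in place. Optionally, before building anything, you can ask Docker Compose to validate the Compose file and print the resolved configuration (with the values you substituted):
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml config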
11.3.3. Adding Firewall Rule
To add a firewall rule allowing access to Nextcloud, run:
sudo ufw allow proto tcp to any port {NEXTCLOUD_PORT} comment "Nextcloud"
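You can verify that the rule was added with:
sudo ufw status numbered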
11.3.4. Adding Port Forwarding Rule
To access Nextcloud from the outside, add a port forwarding rule on your router to forward external port {NEXTCLOUD_PORT} to {SERVER_IP_ADDR}:{NEXTCLOUD_PORT}.
11.3.5. Running Nextcloud
To pull/build all necessary images and run the containers do:
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml up -d
Verify that all containers have started successfully and check logs for errors:
sudo docker ps
sudo docker logs nextcloud-db
sudo docker logs nextcloud-web
sudo docker logs nextcloud-fpm
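You can also ask Apache inside the running web server container to check the syntax of the customized configuration; it should report Syntax OK:
sudo docker exec nextcloud-web httpd -t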
There might be some errors in the PostgreSQL container logs, related to a unique constraint violation on the lock_key_index.
This is due to the following Nextcloud bug: https://github.com/nextcloud/server/issues/6343.
Hopefully, this bug will eventually be fixed.
|
Open the Nextcloud web interface at https://{NEXTCLOUD_DOMAIN}:{NEXTCLOUD_PORT} and create the admin account.
11.3.6. Automatic Containers Startup
To start containers automatically (in the correct order)
on boot create the /etc/systemd/system/nextcloud-start.service
file with the following content:
[Unit]
Description=Start Nextcloud
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml up -d

[Install]
WantedBy=multi-user.target
Enable the service, so that it will be started on system boot:
sudo systemctl daemon-reload
sudo systemctl enable nextcloud-start.service
11.4. Configuration
This section only describes some generic post-install configuration, as your configuration will depend highly on your use case.
11.4.1. Fixing Security and Setup Warnings
Navigate to Settings → Overview page and check the “Security & setup warnings” section.
Most likely you’ll see at least one warning here: Some columns in the database are missing a conversion to big int
.
To fix it, run the following commands:
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml exec --user www-data nextcloud-fpm php occ maintenance:mode --on
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml exec --user www-data nextcloud-fpm php occ db:convert-filecache-bigint --no-interaction
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml exec --user www-data nextcloud-fpm php occ maintenance:mode --off
Verify that warning disappeared.
If there are any other warnings, refer to the Nextcloud Admin Guide for resolutions.
11.4.2. Editing Nextcloud Config File
Edit the /srv/nextcloud/html/config/config.php
file and add/modify the following parameters:
'loglevel' => 1, (1)
'overwrite.cli.url' => 'https://{NEXTCLOUD_DOMAIN}:{NEXTCLOUD_PORT}', (2)
'htaccess.RewriteBase' => '/', (3)
'default_language' => 'en',
'default_locale' => 'en_CA',
'knowledgebaseenabled' => false, (4)
'token_auth_enforced' => true, (5)
'default_phone_region' => 'CA' (6)
1 | Sets log level to info. |
2 | This and the next line will enable pretty URLs (essentially eliminating index.php from the URLs).
More info: https://docs.nextcloud.com/server/stable/admin_manual/installation/source_installation.html#pretty-urls. |
3 | Same as above. |
4 | This line disables arguably useless knowledge base page. |
5 | Enforces token authentication for API clients for better security (will block requests using the user password). |
6 | Set to your country code (more in the Nextcloud documentation). |
Next, run the following command:
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml exec --user www-data nextcloud-fpm php occ maintenance:update:htaccess
And restart Nextcloud:
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml restart
Refresh the Nextcloud page and verify that pretty URLs work.
11.4.3. Background Jobs, Email Delivery
Navigate to Settings → Basic settings page.
Make sure Background Jobs scheduling is set to Cron and last run was within 15 minutes.
Also, on this page you can configure Email server parameters and test email delivery.
11.4.4. Access to NFS Share
It may be convenient to be able to access some directories that are shared with NFS from the Nextcloud. It can be done with the “External storage support” Nextcloud app, that allows mounting local directories inside the Nextcloud.
However, since Nextcloud is running inside a container, it is isolated from the host file system and won’t be able to access the NFS directories unless they are explicitly mounted inside the container.
To mount some directories into the Nextcloud container,
edit the /root/silverbox/containers/nextcloud/docker-compose.yml
file and append the directories to the
volumes
list of the nextcloud-fpm
service. For example:
...
  nextcloud-fpm:
    ...
    volumes:
      ...
      - /srv/nfs/videos:/nfs/videos
      - /srv/nfs/photos:/nfs/photos
...
To apply changes do:
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml stop
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml kill nextcloud-fpm
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml up -d
To add these directories to Nextcloud navigate to Settings → External Storages and add two
“Local” storage entries for /nfs/videos
and /nfs/photos
.
Permissions
There are some permissions issues when accessing files created via NFS from the Nextcloud and vice versa.
In particular, files created inside NFS directories from the Nextcloud will be owned by www-data:www-data
(33:33
)
and with default umask
will only allow modifications by the owner.
Thus, users accessing these files over NFS won’t be able to modify them.
The permissions/ownership for such files can be adjusted with the following command:
sudo find /srv/nfs -uid 33 -exec chown {SERVER_USER}:{NFS_SHARE_GROUP} \{} \; -exec chmod g+w {} \; -exec echo {} \;
This example command changes ownership for all files under /srv/nfs
directory that are currently owned by UID 33
,
to be owned by your user and {NFS_SHARE_GROUP}
group, and also adds write permissions to the group.
There is a similar issue with the files created via NFS and accessed via Nextcloud.
Such files by default will have ownership {SERVER_USER}:{SERVER_USER}
and won’t be modifiable by the Nextcloud
(unless your umask
allows modification by everyone).
One way to allow modifications from the Nextcloud is to set ownership to {SERVER_USER}:{NFS_SHARE_GROUP}
,
which can be done with the following command:
sudo find /srv/nfs -user $USER -group $USER -exec chgrp {NFS_SHARE_GROUP} {} \; -exec echo {} \;
When creating files from outside of the Nextcloud (e.g. over NFS), the files won’t be immediately visible
in the Nextcloud. Similarly, the changed permissions on such files won’t be immediately noticed by the Nextcloud.
To force Nextcloud to rescan the files use the following command:
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml exec --user www-data nextcloud-fpm php occ files:scan admin
|
If desired, the permission correction can be automated with inotify-tools or similar tools (a minimal sketch is shown below).
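Below is a minimal sketch of such automation, assuming inotify-tools is installed (sudo apt install inotify-tools), that the script runs as root, and that {SERVER_USER} and {NFS_SHARE_GROUP} are replaced with the actual values. It watches /srv/nfs for newly created files and fixes ownership and group permissions of files created by Nextcloud (UID 33):
#!/bin/bash
# Watch the NFS share and fix ownership/permissions of files created by Nextcloud (UID 33).
inotifywait -m -r -e create -e moved_to --format '%w%f' /srv/nfs | while read -r path; do
    if [ -f "$path" ] && [ "$(stat -c %u "$path")" = "33" ]; then
        chown {SERVER_USER}:{NFS_SHARE_GROUP} "$path"
        chmod g+w "$path"
    fi
done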
11.4.5. Security Scanning
It may be useful to run some security scanners against Nextcloud. Here are some examples:
-
Nextcloud Security Scanner
-
SSL Labs Scanner: https://www.ssllabs.com/ssltest. Note that it only works over the default HTTPS port 443, so to use it you can temporarily change the port forwarding rule to forward from external port 443 to internal port {NEXTCLOUD_PORT}.
-
ImmuniWeb SSL Scanner
11.4.6. Reduce Autovacuum Frequency
This is a completely optional step, but it may help minimize disk writes.
In the default configuration, PostgreSQL autovacuum runs every minute, which I find extremely excessive for my limited Nextcloud use.
Running it so frequently produces excessive disk writes by the postgres: stats collector process.
To reduce the autovacuum frequency, edit the /srv/nextcloud/db/postgresql.conf file and change the autovacuum_naptime parameter to the desired value, for example:
autovacuum_naptime = 15min
Restart the Nextcloud database for the setting to take effect.
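To confirm that the new value is in effect after the restart, you can query it directly from the database container (assuming the container name nextcloud-db from the Compose file above):
sudo docker exec nextcloud-db psql -U nextcloud -c "SHOW autovacuum_naptime;"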
11.5. Monitoring
To monitor Nextcloud status with Monit create the /etc/monit/conf.d/70-nextcloud
file with the following content:
# Containers status
check program nextcloud_web with path "/usr/local/etc/monit/scripts/container_status.sh nextcloud-web .State.Status running"
  if status != 0 for 5 cycles then alert

check program nextcloud_fpm with path "/usr/local/etc/monit/scripts/container_status.sh nextcloud-fpm .State.Status running"
  if status != 0 for 5 cycles then alert

check program nextcloud_db with path "/usr/local/etc/monit/scripts/container_status.sh nextcloud-db .State.Status running"
  if status != 0 for 5 cycles then alert

# HTTPS & Certificate check
check host nextcloud with address {NEXTCLOUD_DOMAIN} every 5 cycles (1)
  if failed port {NEXTCLOUD_PORT} protocol https request /?monit
     and certificate valid > 15 days
     for 2 cycles then alert (2)

# Apache status
check host nextcloud_local with address {SERVER_IP_ADDR} every 5 cycles (3)
  if failed port {NEXTCLOUD_PORT} protocol apache-status path /apache-server-status.html (4)
     replylimit > 50% or requestlimit > 50% or closelimit > 50% or gracefullimit > 50% or waitlimit < 20%
     with ssl options {verify: disable}
     for 2 cycles then alert
1 | Replace {NEXTCLOUD_DOMAIN} with the actual Nextcloud domain name. |
2 | Replace {NEXTCLOUD_PORT} with the actual value.
Also, if you changed query parameter in rewrite condition for filtering Monit logs in the HTTPD section
to some random string, replace monit in the request /?monit part with the exact same string. |
3 | Replace {SERVER_IP_ADDR} with the actual value. |
4 | Replace {NEXTCLOUD_PORT} with the actual value. |
Restart Monit and verify that Nextcloud monitoring is working.
12. Git Server
This section describes how to configure the server to host some private Git repositories.
Having your own private Git repositories can be very helpful for keeping configuration files organized, notes, personal projects or forks that you prefer to keep private. Keeping your Git repositories locally on the server (as opposed to cloud services such as Github or Gitlab) has an advantage of being able to access them even when the cloud service or the internet connection is down. Server backups (described in section Backup) can be utilized for backing up Git repositories.
12.1. Overview
There are many ways to host Git repositories. One of the approaches is to host a complete collaborative software development platform, which usually includes version control, issue tracking, documentation (e.g. wiki), code review, CI/CD pipelines, fine-grained permission control etc. Some of the popular open-source options include GitLab, Gitea and Gogs.
However, hosting a full-blown platform like this has its disadvantages: it increases the complexity of the setup and hardware resource consumption, requires maintenance to keep the platform secure and up-to-date, complicates configuration and "vendor-locks" you into a specific platform. I find that the extra complexity of running such platforms is hardly justified, as for a home setup most of the extra features provided are rarely needed.
Instead, this guide describes how to run bare Git repositories accessible over SSH (that has been configured in the OpenSSH Server Configuration section). This approach doesn’t depend on any third-party solutions and only requires Git and SSH to function. It is very simple, transparent and maintenance-free.
The configuration described below is very similar to the one described in the Git Book’s section on setting up the Git server [18].
12.2. Configuration
First, install git
on the server (if you don’t have it installed already):
sudo apt update
sudo apt install git
Create gitusers
group:
sudo addgroup --system gitusers
A separate group is needed so that it would be possible to have more than just one user with access to Git repositories. This can be helpful, for instance, if you need to provide another user access under different permissions (for example, read-only access to repositories, as described in the next section). |
Create a directory where all of the Git repositories will reside, and assign proper ownership and permissions:
sudo mkdir /srv/git
sudo chgrp gitusers /srv/git
sudo chmod 750 /srv/git
12.2.1. Create Git User Account
In this model, all repositories under the /srv/git directory will be accessible by the git account created on the server.
To give someone read/write access to the Git repositories, a separate SSH key pair needs to be generated and its public key added to the git user’s authorized SSH keys list.
Create git
user account:
sudo adduser --disabled-password --gecos "" --shell `which git-shell` git
Having git-shell as the git user’s shell will only allow executing Git-related commands over SSH and nothing else (also no SCP).
You can additionally customize the message that a user will see on an attempt to SSH as the git user.
To do this, create an executable script git-shell-commands/no-interactive-login under the git user’s home directory,
that prints the desired message and exits (a minimal sketch is shown below).
For more details see https://git-scm.com/docs/git-shell.
|
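For illustration, a minimal sketch of such a script (the message text is just an example) can be created as the git user and made executable:
sudo su --shell $SHELL git
cd
mkdir -p git-shell-commands
cat > git-shell-commands/no-interactive-login << 'EOF'
#!/bin/sh
# Printed on any interactive SSH login attempt as the git user.
echo "Interactive login is disabled; this account only provides access to Git repositories."
exit 128
EOF
chmod +x git-shell-commands/no-interactive-login
exit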
Make git
user a member of the gitusers
group:
sudo usermod -a -G gitusers git
Login as the git user and create the .ssh directory and authorized_keys file with proper permissions, as well as a .hushlogin file to suppress the default Ubuntu MOTD (message of the day) banner:
sudo su --shell $SHELL git
cd
mkdir .ssh && chmod 700 .ssh
touch .ssh/authorized_keys && chmod 600 .ssh/authorized_keys
touch .hushlogin
exit
As the root user, edit the /etc/ssh/sshd_config file and add the git user to the list of allowed users:
AllowUsers {SERVER_USER} git
Restart the sshd service for the changes to take effect: sudo systemctl restart sshd.service.
Read-Only Access
As mentioned above, the git
user account will have read/write access to all Git repositories on the server.
However, sometimes it might be necessary to give someone read-only access,
so that they can read (clone) from all repositories but not write (push).
One way this can be achieved is by creating one more account on the server,
for example gitro
(where ro
stands for read only), and adding this account to the gitusers
group.
Since repositories are owned by the git user (as you will see in the next section) and not gitro, the gitro user won’t be able to write.
But because gitro belongs to the gitusers group, it will be able to read.
To create such gitro
account follow the same steps as above for creating git
account.
You will also need to generate SSH keys for the gitro
account in the same way as for git
account
(as described in the sections below).
Additionally, you can put individual repositories in read-only mode for everyone by using Git server-side hooks (such as the pre-receive hook).
For more information on how to do this, check the Git documentation (a minimal example is sketched below).
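For illustration only, a pre-receive hook that rejects all pushes (making a repository effectively read-only for everyone) could be as simple as the following script, placed at /srv/git/example/hooks/pre-receive and made executable:
#!/bin/sh
# Reject every push to this repository.
echo "This repository is read-only."
exit 1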
12.2.2. Creating Git Repository
To create a new Git repository on the server, first create a directory for it
(the repository is called example
in this case).
Make this directory owned by the git
user and read-only accessible by the members of gitusers
group:
sudo mkdir /srv/git/example
sudo chown git:gitusers /srv/git/example
sudo chmod 750 /srv/git/example
Login as the git user and initialize an empty Git repository:
sudo su --shell $SHELL git
cd /srv/git/example
git init --bare --shared=0640 (1)
1 | The --shared=0640 argument means that the repository will be shared in such a way
that it is writeable by owner (git user), and readable (and only readable) by anyone in the gitusers group.
See man git-init for more information. |
12.2.3. Providing Access to Git
Git over SSH uses SSH keys for authentication.
To give someone access to all Git repositories on the server (under either the git or gitro user, if you created one),
this person’s public key must be added to the list of authorized SSH keys for the git (or gitro, but this section will assume git) account.
Below is an example of how to give yourself read/write access from your client PC to all Git repositories on the server.
First, generate a new key pair on your client PC:
ssh-keygen -t ed25519 -f ~/.ssh/silverbox-git -C "Silverbox Git key"
This will generate a pair of keys: private ~/.ssh/silverbox-git
and public ~/.ssh/silverbox-git.pub
.
Copy generated public key to the server:
scp ~/.ssh/silverbox-git.pub $USER@{SERVER_IP_ADDR}:.
Login to the server as your user and run the following command to add the public key to the list of authorized keys for git
user:
sudo bash -c "printf '%s ' 'no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty' | cat - /home/$USER/silverbox-git.pub >> /home/git/.ssh/authorized_keys" (1)
rm silverbox-git.pub
1 | This also disables SSH tunneling, X forwarding
(although it should be disabled in your sshd config if you followed this guide) and PTY for the git user. |
On your client PC edit the ~/.ssh/config
file and add the following:
host silverbox-git
    HostName {SERVER_IP_ADDR} (1)
    IdentityFile ~/.ssh/silverbox-git
    User git
1 | Replace this with your server IP address or hostname. |
Now, to clone the example repository, run:
git clone silverbox-git:/srv/git/example
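If instead you already have a local repository that you want to host on the server, you can add the server as a remote and push to it (assuming the example repository created above and a local branch named master):
git remote add silverbox silverbox-git:/srv/git/example
git push silverbox master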
13. Reverse Proxy
There are some services described in this guide (such as Monit Web UI or Transmission Web UI) that are internal and only intended to be accessible from inside the home (LAN) network. In addition, you may want to install some other services, not described in the guide, which should only be accessible from inside. As described in this guide, the suggested way of accessing such services securely (with authentication and encryption) is by establishing an SSH tunnel to each service first, and then accessing the service over the tunnel.
While this approach works, it may become quite inconvenient to access internal services this way, especially for a larger number of services. This section describes an alternative approach: using an internal reverse proxy to provide access to all internal services in a secure way.
13.1. Overview
Below is a diagram with an overview of how such a reverse proxy could be deployed:
Silverbox Server -------------------------------------------- | -------- | monit.home.example.com ------\ | | Apache |---> Internal Monit Addr | transmission.home.example.com--[HTTPS, Basic Auth]------>| HTTP |---> Internal Transmission Addr| service-a.home.example.com --/ | | Server |---> Internal Service A Addr | | -------- | | | | | {/certs} | | | | | V | | /etc/letsencrypt - *.home.example.com | --------------------------------------------
In the diagram above, a path inside curly braces indicates the path as it is seen inside the Docker container, while a path without curly braces indicates the real path on the host file system. |
This diagram assumes you have a domain for your home services, home.example.com, and three services:
Monit, Transmission and some hypothetical Service A (the DNS names for these services need to be configured in Unbound).
The Apache web server acts as an HTTPS to HTTP reverse proxy, while also (optionally) performing HTTP Basic Authentication.
A wildcard Let’s Encrypt certificate for *.home.example.com is used for HTTPS in this example, thus allowing easy addition of services without the need to get new certificates.
This approach allows having both authentication (using HTTP Basic Auth, which is secure when used with HTTPS) and encryption (HTTPS). Basic auth can be turned off for some services if they offer strong built-in authentication.
Additionally, the reverse proxy enables HTTP2 and adds some common security headers.
This section is completely optional: for many people, who may only have one or two services which they rarely use, it may not be worth the effort of the initial configuration. But if you plan on running many self-hosted services, it may be very convenient to have such a reverse proxy with easily expandable rules for adding additional services.
13.2. Certificate
This section describes how to obtain and maintain a wildcard certificate that the reverse proxy will use to set up HTTPS for all services (hence the need for a wildcard and not just a regular certificate).
The process is very similar to what was described in the Certificate section for Nextcloud, and it relies on that section being completed.
13.2.1. Preparing Domain Name
It’s assumed that the wildcard certificate will be obtained on your internal domain name {INTERNAL_DOMAIN}
.
If this is not the case, adjust instructions below accordingly.
The individual services will be sub-domains of the {INTERNAL_DOMAIN}
.
For example, if your {INTERNAL_DOMAIN}
is home.example.com
then service addresses will look like service-name.home.example.com
.
Since it’s intended that the services will be only accessible from the internal network,
there is no need to create any public A or CNAME records for the {INTERNAL_DOMAIN}
.
Only the TXT DNS record for the _acme-challenge.{INTERNAL_DOMAIN}
with any value
(the value is just a placeholder and will be replaced with the actual DNS ACME challenge during
domain validation) needs to be created.
13.2.2. Preparing Certbot Scripts
Create the /etc/letsencrypt/renewal-hooks/post/reverse-proxy-restart.sh
file with the following content:
#!/bin/bash
if [ "$CERTBOT_DOMAIN" = "{INTERNAL_DOMAIN}" ]; then (1)
echo "Restarting reverse proxy server"
docker compose -f /root/silverbox/containers/reverse-proxy/docker-compose.yml restart proxy
else
echo "Skipping reverse proxy server restart - different domain: $CERTBOT_DOMAIN"
fi
1 | Replace {INTERNAL_DOMAIN} with the actual domain name. |
This script will be executed automatically by Certbot during the renewal of a certificate, and if the renewal is for the reverse proxy certificate it will restart the reverse proxy server so that it uses the new certificate.
And assign the following permissions to this file:
sudo chmod 770 /etc/letsencrypt/renewal-hooks/post/reverse-proxy-restart.sh
13.2.3. Test Certificate
To test that domain validation and certificate renewal work, it is possible to use the Let’s Encrypt test server to generate a test (not trusted) certificate.
To get test certificate run:
sudo certbot certonly --test-cert \
  --agree-tos \
  -m {YOUR_EMAIL_ADDR} \ (1)
  --manual \
  --preferred-challenges=dns \
  --manual-auth-hook /root/silverbox/certbot/dns-challenge-auth.sh \
  --manual-cleanup-hook /root/silverbox/certbot/dns-challenge-cleanup.sh \
  --must-staple \
  -d *.{INTERNAL_DOMAIN} (2)
1 | Replace {YOUR_EMAIL_ADDR} with the email address you wish to use for certificate generation. |
2 | Replace {INTERNAL_DOMAIN} with the actual domain name. The *. before domain name is significant - it is needed to request wildcard certificate. |
This may take a while. |
To view information about the generated certificate:
sudo certbot certificates
To test certificate renewal:
sudo certbot renew --test-cert --dry-run --cert-name {INTERNAL_DOMAIN}
To revoke and delete the test certificate:
sudo certbot revoke --test-cert --cert-name {INTERNAL_DOMAIN}
13.2.4. Getting Real Certificate
To get the real certificate run:
sudo certbot certonly \
  --agree-tos \
  -m {YOUR_EMAIL_ADDR} \ (1)
  --manual \
  --preferred-challenges=dns \
  --manual-auth-hook /root/silverbox/certbot/dns-challenge-auth.sh \
  --manual-cleanup-hook /root/silverbox/certbot/dns-challenge-cleanup.sh \
  --must-staple \
  -d *.{INTERNAL_DOMAIN} (2)
1 | Replace {YOUR_EMAIL_ADDR} with the email address you wish to use for certificate generation. |
2 | Replace {INTERNAL_DOMAIN} with the actual domain name. The *. before domain name is significant - it is needed to request wildcard certificate. |
13.2.5. Automatic Certificate Renewal
The certificate should be automatically renewed by Certbot’s systemd service. The service is triggered automatically by the corresponding timer. To check the status of the timer:
systemctl status certbot.timer
13.3. Installation
This section describes how to install and run the reverse proxy server.
13.3.1. Preparing Container
Create a directory for the reverse proxy container files:
sudo mkdir /root/silverbox/containers/reverse-proxy
sudo chmod 700 /root/silverbox/containers/reverse-proxy
Inside it, create the docker-compose.yml
file with the following content:
version: '3.8'
services:
proxy:
container_name: reverse-proxy
image: 'httpd:2.4.54' (1)
restart: on-failure:5
network_mode: host (2)
logging:
driver: json-file
options:
max-size: 10mb
max-file: '3'
volumes:
- /etc/letsencrypt/live/{INTERNAL_DOMAIN}:/certs/live/{INTERNAL_DOMAIN}:ro (3)
- /etc/letsencrypt/archive/{INTERNAL_DOMAIN}:/certs/archive/{INTERNAL_DOMAIN}:ro (4)
- ./httpd.conf:/usr/local/apache2/conf/httpd.conf:ro
- ./htpasswd:/usr/local/apache2/.htpasswd:ro
1 | Replace 2.4.54 with the actual latest httpd (Debian based) image version (can be checked at the Docker Hub). |
2 | This puts the container on the host network, rather than creating a bridge network. While it may not be perfect from the isolation standpoint, it makes it very easy to proxy traffic to different services, regardless of what interface they are listening on. |
3 | Replace {INTERNAL_DOMAIN} with the actual domain name. |
4 | Same as above. |
Configuring HTTPD
Create the /root/silverbox/containers/reverse-proxy/httpd.conf
file with the following content:
Listen 443
LoadModule mpm_event_module modules/mod_mpm_event.so
LoadModule authn_core_module modules/mod_authn_core.so
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule mime_module modules/mod_mime.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule env_module modules/mod_env.so
LoadModule headers_module modules/mod_headers.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule unixd_module modules/mod_unixd.so
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule http2_module modules/mod_http2.so
User www-data
Group www-data
Protocols h2 http/1.1
SSLEngine On
SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLHonorCipherOrder On
SSLProtocol -all +TLSv1.3 +TLSv1.2
SSLUseStapling on
SSLStaplingCache "shmcb:/usr/local/apache2/logs/ssl_stapling(128000)"
SSLSessionTickets Off
SSLSessionCache "shmcb:/usr/local/apache2/logs/ssl_scache(512000)"
SSLSessionCacheTimeout 300
SSLCertificateFile /certs/live/{INTERNAL_DOMAIN}/fullchain.pem (1)
SSLCertificateKeyFile /certs/live/{INTERNAL_DOMAIN}/privkey.pem
<Directory />
AllowOverride none
Require all denied
</Directory>
DocumentRoot "/usr/local/apache2/htdocs"
Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains; preload" (2)
Header always set X-Frame-Options "DENY"
Header always set X-Content-Type-Options "nosniff"
Header always set X-XSS-Protection "1; mode=block"
RequestHeader set X-Forwarded-Proto "https"
# Monit
<VirtualHost *:443>
ServerName monit.{INTERNAL_DOMAIN} (3)
ProxyPass "/" "http://127.0.0.1:{MONIT_PORT}/" (4)
ProxyPassReverse "/" "http://127.0.0.1:{MONIT_PORT}/"
<Proxy *>
Authtype Basic
Authname "Authentication required"
AuthUserFile /usr/local/apache2/.htpasswd
Require valid-user
</Proxy>
</VirtualHost>
# Transmission
<VirtualHost *:443>
ServerName transmission.{INTERNAL_DOMAIN} (5)
ProxyPass "/" "http://127.0.0.1:{TRANSMISSION_UI_PORT}/" (6)
ProxyPassReverse "/" "http://127.0.0.1:{TRANSMISSION_UI_PORT}/"
<Proxy *>
Authtype Basic
Authname "Authentication required"
AuthUserFile /usr/local/apache2/.htpasswd
Require valid-user
</Proxy>
</VirtualHost>
<Files ".ht*">
Require all denied
</Files>
ErrorLog /proc/self/fd/2
LogLevel warn
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog /proc/self/fd/1 common env=!dont_log
Include conf/extra/httpd-mpm.conf
ServerTokens Prod
TraceEnable off
1 | Replace {INTERNAL_DOMAIN} in this and next line with the actual value. |
2 | This and next three lines add some of the standard security-related headers for all proxied services. Feel free to customize this. |
3 | Replace {INTERNAL_DOMAIN} with the actual value. |
4 | Replace {MONIT_PORT} in this and next line with the actual port number you’ve chosen for Monit UI. |
5 | Replace {INTERNAL_DOMAIN} with the actual value. |
6 | Replace {TRANSMISSION_UI_PORT} in this and the next line with the actual port number you’ve chosen for the Transmission UI. |
If you want to add additional services to the proxy, it can be done in a similar manner, by adding a VirtualHost block for each service (see the sketch below).
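For illustration, here is a sketch of an entry for a hypothetical internal service (the name service-a and the {SERVICE_A_PORT} placeholder are just examples and are not used elsewhere in this guide):
# Service A (example)
<VirtualHost *:443>
    ServerName service-a.{INTERNAL_DOMAIN}

    ProxyPass "/" "http://127.0.0.1:{SERVICE_A_PORT}/"
    ProxyPassReverse "/" "http://127.0.0.1:{SERVICE_A_PORT}/"

    <Proxy *>
        Authtype Basic
        Authname "Authentication required"
        AuthUserFile /usr/local/apache2/.htpasswd
        Require valid-user
    </Proxy>
</VirtualHost>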
Adding Users
Install the apache2-utils package, which contains the htpasswd utility needed to generate the file containing users and hashed passwords:
sudo apt install apache2-utils
Create the users database file, initially containing one user (you will be prompted for user’s password):
sudo htpasswd -B -c /root/silverbox/containers/reverse-proxy/htpasswd {USERNAME} (1)
1 | Replace {USERNAME} with the actual desired username. |
To add more users, refer to htpasswd
documentation [19].
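For example, to add one more user to the existing file, the same command can be run without the -c flag (which would otherwise recreate the file), where {ANOTHER_USERNAME} is just a placeholder:
sudo htpasswd -B /root/silverbox/containers/reverse-proxy/htpasswd {ANOTHER_USERNAME}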
13.3.2. Adding Firewall Rule
To add a firewall rule allowing access to the reverse proxy:
sudo ufw allow proto tcp to any port 443 comment "Reverse proxy"
13.3.3. Configuring DNS
This part assumes you have configured local DNS zone as described in Configuring Local DNS Zone.
To add DNS records for the services that go through the reverse proxy, edit the /etc/unbound/unbound.conf.d/dns-config.conf file and add a local-data record pointing to the server IP {SERVER_IP_ADDR} for each service you want to proxy.
Below are example records for Monit and Transmission:
server:
local-data: "monit.{INTERNAL_DOMAIN}. IN A {SERVER_IP_ADDR}" (1)
local-data: "transmission.{INTERNAL_DOMAIN}. IN A {SERVER_IP_ADDR}"
1 | In this and the next line replace {INTERNAL_DOMAIN} and {SERVER_IP_ADDR} with the actual values. |
Restart the Unbound server to apply the changes:
sudo systemctl restart unbound.service
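You can verify that the new names resolve to the server, for example by querying it directly from a client PC:
dig @{SERVER_IP_ADDR} monit.{INTERNAL_DOMAIN} A +short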
13.3.4. Running Reverse Proxy Server
To start the reverse proxy server do:
sudo docker compose -f /root/silverbox/containers/reverse-proxy/docker-compose.yml up -d
13.3.5. Automatic Container Startup
To start the reverse proxy container automatically on boot create the
/etc/systemd/system/reverse-proxy-start.service
file with the following content:
[Unit]
Description=Start Apache Reverse Proxy
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker compose -f /root/silverbox/containers/reverse-proxy/docker-compose.yml up -d

[Install]
WantedBy=multi-user.target
Enable the service, so that it will be started on system boot:
sudo systemctl daemon-reload
sudo systemctl enable reverse-proxy-start.service
13.4. Monitoring
To monitor the reverse proxy server with Monit create the /etc/monit/conf.d/70-reverse-proxy
file with the following content:
# Container status
check program reverse_proxy with path "/usr/local/etc/monit/scripts/container_status.sh reverse-proxy .State.Status running"
  if status != 0 for 5 cycles then alert

# HTTPS & Certificate check
check host reverse_proxy_monit with address monit.{INTERNAL_DOMAIN} every 5 cycles (1)
  if failed port 443 protocol https request / status = 401
     and certificate valid > 15 days
     for 2 cycles then alert (2)
1 | Replace {INTERNAL_DOMAIN} with the actual value.
In this example, Monit UI host is used to check the certificate. |
2 | Port 443 is used here, change it if you used different port for the proxy. |
Restart Monit and verify that the reverse proxy monitoring is working.
14. Firefly III
This section describes how to install and configure Firefly III (open source personal finances manager) [20] on the server.
This section depends on the following sections: Docker, Domain Name, Reverse Proxy.
14.1. Overview
Firefly III will be deployed using Docker Compose with two Docker containers: one is the database (PostgreSQL) and the other is the official Firefly III container (which contains the Apache web server and the PHP application server).
Below is a diagram that shows a high-level overview of the Firefly III deployment:
firefly.home.example.com | Firefly III Docker Network | -------------------------------------------------------- | | ------------ ------------ | | HTTPS --------- | HTTP | | 5432/tcp | | | \------->| Reverse |------------->| Firefly III|---------->| PostgreSQL | | | Proxy | | | | | | | --------- | ------------ ------------ | | | | | | {/var/www/html/storage/upload} | | | | {/var/lib/postgresql/data} | | | | | | v v | | /srv/firefly/uploads /srv/firefly/db | | | --------------------------------------------------------
In this diagram home.example.com is used as an example value for your {INTERNAL_DOMAIN} .
|
In the diagram above, a path inside curly braces indicates the path as it is seen inside the Docker container, while a path without curly braces indicates the real path on the host file system. |
Both containers are stateless (i.e. don’t contain any important data inside the container), since all user data is stored on the host file system and mounted inside containers. This way containers can be safely deleted and re-deployed, which makes upgrades very easy.
In this setup, Firefly III will only be accessible from the internal network via the reverse proxy (as configured in Reverse Proxy). Not exposing Firefly III to the internet is a deliberate decision, partially due to security concerns, but also for simplicity of the setup and because it seems to me that having it publicly accessible doesn’t add much convenience. If you wish Firefly III to be accessible from the internet, you can either implement a setup similar to the one used for Nextcloud, or rely on VPN/WireGuard to access the internal network from the internet.
14.2. Installation
This section describes how to install and run Firefly III.
14.2.1. Preparing Directory Structure
The very first step is to create directories that will be mapped inside the Docker containers:
sudo mkdir -p /srv/firefly/db
sudo mkdir /srv/firefly/uploads
sudo chown 33 /srv/firefly/uploads (1)
sudo chmod 750 -R /srv/firefly
1 | 33 is the UID of the www-data user inside the Firefly III container. |
All the Firefly III data (including database and uploaded files) will be stored under the /srv/firefly directory in the following way:
-
db: stores the PostgreSQL database files.
-
uploads: stores the uploaded files.
It is important to keep the /srv/firefly directory owned by root and with restrictive permissions, since some content inside it will be owned by the www-data (UID 33) user from the Docker containers.
|
14.2.2. Preparing Images
Create a directory for Firefly III containers files:
sudo mkdir /root/silverbox/containers/firefly
sudo chmod 700 /root/silverbox/containers/firefly
Inside it, create the docker-compose.yml
file with the following content:
version: '3.8'
networks:
default:
name: firefly
driver: bridge
ipam:
config:
- subnet: {DOCKER_FIREFLY_NETWORK} (1)
services:
firefly-db:
container_name: firefly-db
image: postgres:14.5 (2)
restart: on-failure:5
shm_size: 256mb
logging:
driver: json-file
options:
max-size: 10mb
max-file: '3'
volumes:
- /srv/firefly/db:/var/lib/postgresql/data
environment:
- POSTGRES_USER=firefly
- POSTGRES_PASSWORD={POSTGRES_PASSWORD} (3)
- POSTGRES_DB=firefly
- POSTGRES_INITDB_ARGS="--data-checksums"
firefly-app:
container_name: firefly-app
image: jc5x/firefly-iii:version-5.5.13 (4)
restart: on-failure:5
logging:
driver: json-file
options:
max-size: 10mb
max-file: '3'
depends_on:
- firefly-db
volumes:
- /srv/firefly/uploads:/var/www/html/storage/upload
ports:
- 127.0.0.1:{FIREFLY_UI_PORT}:8080/tcp (5)
environment:
- DB_HOST=firefly-db
- DB_PORT=5432
- DB_CONNECTION=pgsql
- DB_DATABASE=firefly
- DB_USERNAME=firefly
- DB_PASSWORD={POSTGRES_PASSWORD} (6)
- APP_KEY={APP_KEY} (7)
- SITE_OWNER=mail@example.com (8)
- TZ=America/Toronto (9)
- TRUSTED_PROXIES=**
- APP_URL=https://firefly.{INTERNAL_DOMAIN} (10)
- MAIL_MAILER=smtp (11)
- MAIL_HOST=smtp.example.com
- MAIL_PORT=2525
- MAIL_FROM=changeme@example.com
- MAIL_USERNAME=foo
- MAIL_PASSWORD=bar
- MAIL_ENCRYPTION=tls
1 | Replace {DOCKER_FIREFLY_NETWORK} with the actual subnet you want to use for the Firefly III. |
2 | Replace 14.5 with the actual latest postgres (Debian based) image version (can be checked at the Docker Hub). |
3 | Replace {POSTGRES_PASSWORD} with some random password. |
4 | Replace version-5.5.13 with the actual latest jc5x/firefly-iii image version (can be checked at the Docker Hub). |
5 | Replace {FIREFLY_UI_PORT} with the actual port number you chose for Firefly III. |
6 | Replace {POSTGRES_PASSWORD} with the same password as above. |
7 | Replace {APP_KEY} with random alphanumeric string exactly 32 characters long.
Such string can be obtained by running the following command: head /dev/urandom | LANG=C tr -dc 'A-Za-z0-9' | head -c 32 . |
8 | Replace mail@example.com with your email address. |
9 | Replace America/Toronto with your preferred timezone. To check system timezone run: cat /etc/timezone . |
10 | Replace {INTERNAL_DOMAIN} with the actual value. |
11 | This block of variables refers to email delivery configuration. Configure it accordingly to your email delivery settings. Refer to Firefly III documentation for more information. |
14.2.3. Running Firefly III
To start all the containers do:
sudo docker compose -f /root/silverbox/containers/firefly/docker-compose.yml up -d
Verify that all containers have started successfully and check logs for errors:
sudo docker ps
sudo docker logs firefly-db
sudo docker logs firefly-app
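If you prefer a scriptable check over reading the docker ps output, here is a minimal sketch that prints the state of both containers (the names match the docker-compose.yml above; both should report running):
for container in firefly-db firefly-app; do
    echo "$container: $(sudo docker inspect -f '{{.State.Status}}' "$container")"
done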
14.2.4. Firefly III Cron
The Firefly III container doesn’t include cron or any other mechanism for running periodic jobs. Therefore, a Systemd timer will be used to trigger the Firefly III periodic jobs.
Create the /etc/systemd/system/firefly-iii-cron.service
file with the following content:
[Unit]
Description=Run Firefly III cron jobs
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker exec --user www-data firefly-app /usr/local/bin/php /var/www/html/artisan firefly-iii:cron
You can run the service once to verify that it runs successfully:
sudo systemctl daemon-reload
sudo systemctl start firefly-iii-cron.service
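To confirm that the job actually succeeded, you can check the unit’s exit code and its recent log output; for example:
systemctl show -p ExecMainStatus --value firefly-iii-cron.service   # 0 means the last run succeeded
sudo journalctl -u firefly-iii-cron.service -n 20 --no-pager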
Next, create the /etc/systemd/system/firefly-iii-cron.timer
file with the following content:
[Unit]
Description=Run Firefly III cron jobs

[Timer]
OnBootSec=15min (1)
OnCalendar=daily (2)

[Install]
WantedBy=timers.target
1 | First time the timer runs 15 minutes after boot. |
2 | After first run, the timer will run daily, as suggested in the Firefly III documentation. |
Enable and start the timer:
sudo systemctl daemon-reload
sudo systemctl enable firefly-iii-cron.timer
sudo systemctl start firefly-iii-cron.timer
You can do sudo systemctl list-timers
to verify that the timer appears in the output and to check the time till next activation.
14.2.5. Automatic Containers Startup
To start containers automatically (in the correct order)
on boot create the /etc/systemd/system/firefly-start.service
file with the following content:
[Unit]
Description=Start Firefly III
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker compose -f /root/silverbox/containers/firefly/docker-compose.yml up -d

[Install]
WantedBy=multi-user.target
Enable the service, so that it will be started on system boot:
sudo systemctl daemon-reload
sudo systemctl enable firefly-start.service
14.2.6. Adding DNS Record
To add an internal DNS record for Firefly III, edit the
/etc/unbound/unbound.conf.d/dns-config.conf
file and add a local-data record
pointing to the server IP {SERVER_IP_ADDR}:
local-data: "firefly.{INTERNAL_DOMAIN}. IN A {SERVER_IP_ADDR}" (1)
1 | Replace {INTERNAL_DOMAIN} and {SERVER_IP_ADDR} with the actual values. |
Restart the Unbound server to apply the changes:
sudo systemctl restart unbound.service
14.2.7. Adding Reverse Proxy Configuration
To add Firefly III to the reverse proxy configuration edit the /root/silverbox/containers/reverse-proxy/httpd.conf
file
and add the following VirtualHost
section to it:
# Firefly III
<VirtualHost *:443>
ServerName firefly.{INTERNAL_DOMAIN} (1)
ProxyPass "/" "http://127.0.0.1:{FIREFLY_UI_PORT}/" (2)
ProxyPassReverse "/" "http://127.0.0.1:{FIREFLY_UI_PORT}/"
</VirtualHost>
1 | Replace {INTERNAL_DOMAIN} with the actual value. |
2 | Replace {FIREFLY_UI_PORT} in this and the next line with the actual port number you’ve chosen for Firefly III. |
The VirtualHost section above doesn’t include the basic authentication configuration.
This is deliberate, as Firefly III has its own authentication.
Restart reverse proxy container to pick up new changes:
sudo docker restart reverse-proxy
You should now be able to access Firefly III at https://firefly.{INTERNAL_DOMAIN}.
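As a quick smoke test from the server itself, you can verify that the DNS record and the reverse proxy work together; Firefly III normally answers with a redirect (HTTP 302) to its login page. Replace {INTERNAL_DOMAIN} with the actual value:
curl -sS -o /dev/null -w '%{http_code}\n' https://firefly.{INTERNAL_DOMAIN}/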
14.3. Monitoring
To monitor Firefly III status with Monit create the /etc/monit/conf.d/70-firefly
file with the following content:
# Containers status
check program firefly_app with path "/usr/local/etc/monit/scripts/container_status.sh firefly-app .State.Status running"
  if status != 0 for 5 cycles then alert

check program firefly_db with path "/usr/local/etc/monit/scripts/container_status.sh firefly-db .State.Status running"
  if status != 0 for 5 cycles then alert

# HTTP
check host firefly with address localhost every 5 cycles
  if failed port {FIREFLY_UI_PORT} protocol http request / status = 302 for 2 cycles then alert (1)
1 | Replace {FIREFLY_UI_PORT} with the actual value. |
Restart Monit and verify that Firefly III monitoring is working.
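For example, assuming Monit was configured as described earlier in this document, you can check the configuration syntax, restart Monit and look at the summary (the firefly_app, firefly_db and firefly checks should appear in the output):
sudo monit -t
sudo systemctl restart monit
sudo monit summary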
15. Backup
This section describes how to configure secure automatic backup of valuable files from the server to external drive and to the cloud.
15.1. Overview
Below is a diagram that describes the general backup strategy for the server:
      Silverbox                   On-Site Backup            Off-Site Backup
 ------------------
 |   ----------   |  Borg Backup   ----------    Rclone    ----------------
 |   | Internal |  |-------------->| External |----------->|  Cloud (OVH)   |
 |   |  Drive   |  |      (1)      |  Drive   |     (2)    | Object Storage |
 |   ----------   |                ----------              ----------------
 ------------------
As this diagram shows, the backup process consists of two steps:
-
In the first step, all valuable information is backed up using Borg Backup [14] to an external drive connected via USB. This step includes de-duplication, compression and encryption.
-
In the second step, the backup is synced from the external drive to the OVH cloud object storage [15] using the Rclone tool [16].
While this guide uses the OVH cloud, it is possible to use any other cloud storage supported by Rclone. The only difference will be in the Rclone configuration; the main process will be the same.
In this model, data leaves the server already encrypted and thus can be safely stored on an unencrypted external disk and in public cloud.
In the case of main drive failure, the data can be restored from the external drive. In the case of total failure of main and external drives simultaneously, the data can be restored from the cloud.
15.2. Disk Preparation
The first step is to partition the external drive and format it with the ext4 filesystem.
There are plenty of excellent guides available online, so these steps are out of the scope of this document.
Setting up dmcrypt, ecryptfs or other encryption solutions on this drive is not necessary,
since the backup files will already be encrypted. But you may still do it if you want.
15.2.1. Automatic Disk Mount
Once partitioned and formatted, connect the disk to the server.
Identify what name was assigned to the disk and partition by either looking at dmesg | tail
output right after the disk was connected, or by looking at the output of lsblk
command.
For example, the following output indicates that the device name is sdb
and partition name is sdb1
:
$ lsblk
sdb      8:00   0 465.8G  0 disk
└─sdb1   8:00   0 442.4G  0 part
The next step is to get the UUID of the partition from the output of blkid
command:
> sudo blkid /dev/sdb1 (1)
/dev/sdb1: LABEL="label" UUID="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" TYPE="ext4" PARTUUID="XXXXXXX"
1 | Replace /dev/sdb1 with the partition device name from the previous step. |
Create a directory where the partition will be mounted:
sudo mkdir /mnt/backup
Finally, add the following line to the /etc/fstab
file:
UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX /mnt/backup ext4 auto,lazytime,nodev,nofail,errors=remount-ro 0 2 (1)
1 | Replace XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX with the actual UUID value. |
In this example, the mount options include the lazytime option, which is not necessary if you use an HDD
instead of an SSD and can be removed in that case.
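You can also test the new fstab entry without rebooting:
sudo mount -a
findmnt /mnt/backup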
Reboot the system and confirm that the backup partition was automatically mounted under /mnt/backup
:
> lsblk
sda      8:0    0 465.8G  0 disk
└─sda1   8:1    0 442.4G  0 part /mnt/backup
The device name (i.e. /dev/sda) can be different after reboot.
15.2.2. TRIM
While TRIM should automatically work on all mounted drives that support it (thanks to the default fstrim.service),
it most likely will not work on drives connected via USB, due to issues with USB to SATA command translation.
If your drive supports TRIM you can check whether it works by doing sudo fstrim /mnt/backup
,
but most likely you’ll see:
fstrim: /mnt/backup: the discard operation is not supported
Unfortunately, it looks like there is no way to fix this issue at the moment. However, TRIM support is not critical for backup drive as maximum write performance is not very important in this case.
15.3. Configuration
This section describes how to install all the necessary software and configure backup to the external drive and to the cloud.
15.3.1. Borg Backup
Borg backup will be used to backup valuable files from the server to the external drive.
Installation
Borg backup can be installed directly from the repositories:
sudo apt install borgbackup
Backup Repository Creation
Borg backs up files to what it calls a repository, which is essentially a directory on disk.
Initialize a new empty Borg repository on the external drive:
sudo borg init --encryption=repokey /mnt/backup/borgrepo
You’ll be prompted for a passphrase that will be used to generate the encryption key for the backups.
Store this passphrase somewhere outside of the server, so that it can be used to decrypt the backups in the case of a total server failure.
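In addition to the passphrase, it may be worth exporting the repository key and keeping a copy off the server too (with repokey encryption the key itself is stored inside the repository). A sketch:
sudo borg key export /mnt/backup/borgrepo /tmp/borgrepo.key
# Copy /tmp/borgrepo.key to a safe location outside of the server, then delete the local copy.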
Automatic Backup Creation
Create a directory where backup related scripts will be stored:
sudo mkdir /root/silverbox/backup
sudo chmod 700 /root/silverbox/backup
Create the /root/silverbox/backup/backup.sh
file with the following content:
#!/bin/sh
if pidof -x borg >/dev/null; then
echo "borg is already running"
exit 1
fi
OCC_OUTPUT=$(docker exec --user www-data nextcloud-fpm php occ maintenance:mode)
if [ "$?" -ne "0" ]; then
echo "failed to check if Nextcloud is already in maintenance mode"
exit 1
fi
if ! printf "%s" "$OCC_OUTPUT" | grep -q "Maintenance mode is currently disabled"; then
echo "unexpected occ output: $OCC_OUTPUT"
exit 1
fi
if ! docker exec --user www-data nextcloud-fpm php occ maintenance:mode --on; then
echo "failed to enable Nextcloud maintenance mode"
exit 1
fi
export BORG_PASSPHRASE='{BORG_PASSPHRASE}' (1)
# Create backup
borg create -v --stats /mnt/backup/borgrepo::'{hostname}-{now:%Y-%m-%d}' \ (2)
/etc/letsencrypt/archive \ (3)
/srv/nextcloud \
/srv/nfs \
/srv/git \
/srv/firefly \
--exclude '/srv/nfs/torrents' \
--exclude '/srv/nextcloud/html' \
--exclude '/srv/nextcloud/data/*.log' \
--exclude '/srv/nextcloud/data/*/preview' \
--exclude '/srv/nextcloud/db/*.pid' \
--exclude '/srv/nextcloud/db/*.opts' \
--exclude '/srv/nextcloud/db/pg_stat_tmp' \
--exclude '/srv/firefly/db/*.pid' \
--exclude '/srv/firefly/db/*.opts' \
--exclude '/srv/firefly/db/pg_stat_tmp'
if [ "$?" -ne "0" ]; then
echo "borg create failed"
exit 2
fi
if ! docker exec --user www-data nextcloud-fpm php occ maintenance:mode --off; then
echo "failed to disable Nextcloud maintenance mode"
exit 1
fi
# Prune old backups
borg prune -v --list /mnt/backup/borgrepo --keep-daily=3 --keep-weekly=4 --keep-monthly=6 (4)
if [ "$?" -ne "0" ]; then
echo "borg prune failed"
exit 3
fi
echo "backup completed"
1 | Set {BORG_PASSPHRASE} to your Borg passphrase. |
2 | Feel free to adjust the mask controlling how backups will be named. |
3 | The list of what to back up is just an example; adjust it according to your needs. |
4 | Feel free to adjust backup retention settings according to your needs. |
Mark this file as executable and only accessible by root:
sudo chmod 700 /root/silverbox/backup/backup.sh
To run the backup script automatically on a schedule, a Systemd timer is used.
Create the /etc/systemd/system/borg-backup.service
file with the following content:
[Unit]
Description=Create backup using Borg backup

[Service]
Type=oneshot
ExecStart=/bin/sh -c "/root/silverbox/backup/backup.sh"
Next, create the /etc/systemd/system/borg-backup.timer
file with the following content:
[Unit]
Description=Create backup using Borg backup

[Timer]
OnCalendar=*-*-* 00:00:00 (1)
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target
1 | In this configuration backup is created daily at midnight. |
Enable and start the timer:
sudo systemctl daemon-reload
sudo systemctl enable borg-backup.timer
sudo systemctl start borg-backup.timer
To create the first backup and verify that everything works run the service manually:
sudo systemctl start borg-backup.service
The first backup creation may take a very long time.
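Once the first run completes, you can inspect the repository to confirm that the archive was created (you will be prompted for the Borg passphrase):
sudo borg list /mnt/backup/borgrepo
sudo borg info /mnt/backup/borgrepo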
15.3.2. Rclone
Rclone is a tool that can synchronize local files with remote cloud storage. In this deployment it is used to sync backup files generated by Borg to remote cloud storage.
The prerequisite to this section is to have cloud storage configured and ready for use. I chose to use OVH object storage, but you can choose any storage supported by Rclone (the list of supported storage backends is available on the Rclone website, see the link in the references section).
Installation
Rclone can be installed directly from the repositories:
sudo apt install rclone
Storage Configuration
After installation, Rclone needs to be configured to work with your cloud storage.
This can either be done by running rclone config
or by putting configuration into the /root/.config/rclone/rclone.conf
file.
Since the configuration depends on what cloud provider you use, it is not described in this document. For OVH, there is a helpful article mentioned in the references to this section.
Once Rclone is configured, you can test that it has access to the storage by doing:
sudo rclone ls {REMOTE_STORAGE}:{STORAGE_PATH} -v (1)
1 | Replace {REMOTE_STORAGE} and {STORAGE_PATH} with the remote storage name you configured and the path on it, respectively. |
Automatic Backup Sync
Create the /root/silverbox/backup/sync.sh
file with the following content:
#!/bin/sh
if pidof -x borg >/dev/null; then
echo "borg is already running"
exit 1
fi
if pidof -x rclone >/dev/null; then
echo "rclone is already running"
exit 1
fi
export BORG_PASSPHRASE='{BORG_PASSPHRASE}' (1)
# Check backup for consistency before syncing to the cloud
borg check -v /mnt/backup/borgrepo
if [ "$?" -ne "0" ]; then
echo "borg check failed"
exit 2
fi
# Sync backup
rclone -v sync /mnt/backup/borgrepo {REMOTE_STORAGE}:{STORAGE_PATH} (2)
if [ "$?" -ne "0" ]; then
echo "rclone sync failed"
exit 3
fi
echo "backup sync completed"
1 | Set {BORG_PASSPHRASE} to your Borg passphrase. |
2 | Replace {REMOTE_STORAGE} and {STORAGE_PATH} with the actual values. |
Mark this file as executable and only accessible by root:
sudo chmod 700 /root/silverbox/backup/sync.sh
To run the backup sync script automatically on a schedule, a Systemd timer is used.
Create the /etc/systemd/system/sync-backup.service
file with the following content:
[Unit]
Description=Sync backup files to the cloud

[Service]
Type=oneshot
ExecStart=/bin/sh -c "/root/silverbox/backup/sync.sh"
Next, create the /etc/systemd/system/sync-backup.timer
file with the following content:
[Unit]
Description=Sync backup files to the cloud

[Timer]
OnCalendar=Mon *-*-* 03:00:00 (1)
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target
1 | In this configuration, the backup is synced every Monday at 3 AM. Sync is done only once a week to save some bandwidth and data. |
Enable and start the timer:
sudo systemctl daemon-reload
sudo systemctl enable sync-backup.timer
sudo systemctl start sync-backup.timer
To run the initial sync and verify that everything works run the service manually:
sudo systemctl start sync-backup.service
The first sync may take a very long time (depending on your internet bandwidth and backup size).
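After the initial sync completes, you can compare the local repository with the remote copy and check how much space it occupies in the cloud; for example (replace the placeholders as before):
sudo rclone check /mnt/backup/borgrepo {REMOTE_STORAGE}:{STORAGE_PATH} -v
sudo rclone size {REMOTE_STORAGE}:{STORAGE_PATH}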
15.4. Monitoring
The monitoring for backups consists of three parts: monitoring backup disk status (space use, temperature, health), monitoring Borg service status and monitoring Rclone service status.
15.4.1. Backup Disk Monitoring
The backup disk monitoring can be done in exactly the same way as the main disk monitoring,
as described in the Basic System Monitoring section (assuming your disk has a temperature sensor).
However, my disk wasn’t in the hddtemp database, so it had to be added manually.
First, check if the disk is supported by smartctl and whether any extra parameters have to be added
(in the case of my disk, the -d sat parameter has to be passed to smartctl).
For the list of USB disks supported by smartctl, see the references section.
To find in what field disk reports temperature check the output of:
sudo smartctl -a -d sat /dev/sda (1)
1 | Replace /dev/sda with your backup disk device. |
Then append the following line to the /etc/hddtemp.db
file:
"Samsung Portable SSD T5" 190 C "Samsung Portable SSD T5 500GB" (1)
1 | Replace the disk name with the name as reported by smartctl, and replace 190 with the temperature field number. |
To monitor backup disk with Monit, append the following to the /etc/monit/conf.d/10-system
file:
# Backup Filesystem
check filesystem backupfs with path /mnt/backup
  if space usage > 70% then alert
  if inode usage > 60% then alert
  if read rate > 2 MB/s for 10 cycles then alert
  if write rate > 1 MB/s for 30 cycles then alert

# Backup disk temperature
check program backup_disk_temp with path "/usr/local/etc/monit/scripts/disk_temp.sh {PART_UUID}" (1)
  if status > 60 then alert
  if status < 15 then alert
1 | Replace {PART_UUID} with your backup partition UUID. |
Restart Monit with sudo systemctl restart monit
and verify that monitoring works.
Additionally, you can add backup disk temperature and health status reporting to the summary email
(see Email Content Generation section).
Copy the lines for main disk status reporting and replace UUID with your backup disk UUID.
Don’t forget to add extra parameters to smartctl
command if needed (e.g. -d sat
).
15.4.2. Borg & Rclone Monitoring
To monitor status of Borg and Rclone services,
create the /etc/monit/conf.d/80-backup
file with the following content:
check program borg_backup with path "/usr/local/etc/monit/scripts/is_systemd_unit_failed.sh borg-backup.service" every 60 cycles
  if status != 0 for 2 cycles then alert

check program sync_backup with path "/usr/local/etc/monit/scripts/is_systemd_unit_failed.sh sync-backup.service" every 60 cycles
  if status != 0 for 2 cycles then alert
Restart Monit to update the rules and verify that monitoring works.
15.5. Restore Procedure
The exact restore procedure depends on the type of failure that occurred.
To get files from the cloud, use the rclone tool or download the files manually.
To extract a particular backup from the Borg repository, use the borg extract command.
Normally, the restored files can simply be copied over the lost or damaged files. For more details, please refer to the references section below.
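As an illustration only (the archive name below is hypothetical; real names follow the {hostname}-{date} pattern from the backup script), restoring a single directory from a Borg archive could look like this:
sudo borg list /mnt/backup/borgrepo                 # pick the archive to restore from
mkdir -p /tmp/restore && cd /tmp/restore            # borg extract restores into the current directory
sudo borg extract /mnt/backup/borgrepo::silverbox-2023-01-01 srv/firefly/uploads   # hypothetical archive name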
15.6. References
-
Borg documentation: https://borgbackup.readthedocs.io/en/stable/index.html
-
Rclone documentation: https://rclone.org/docs
-
Article on configuring Rclone with OVH object storage: https://docs.ovh.com/ca/en/storage/sync-rclone-object-storage
-
Article on using Borg and Rclone for backup: https://opensource.com/article/17/10/backing-your-machines-borg
-
Nextcloud documentation on restoring from backup: https://docs.nextcloud.com/server/latest/admin_manual/maintenance/restore.html
-
USB disks supported by smartctl: https://www.smartmontools.org/wiki/Supported_USB-Devices
16. Maintenance
This section describes some basic maintenance procedures that will hopefully help to keep the server up to date, healthy and running. However, don’t consider this to be a complete checklist.
16.1. Keeping System Up to Date
This section describes how to keep the system and services running on it up-to-date.
16.1.1. Ubuntu
Ubuntu will install security updates automatically, but other updates have to be installed manually.
To update package information database do:
sudo apt update
To install all available upgrades do:
sudo apt upgrade
Some upgrades may require a system reboot.
To check if a reboot is required, check whether the file /var/run/reboot-required exists,
for example with the ls /var/run/reboot-required command.
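A small sketch that combines these checks:
if [ -f /var/run/reboot-required ]; then
    cat /var/run/reboot-required
    cat /var/run/reboot-required.pkgs 2>/dev/null   # packages that requested the reboot, if the file exists
else
    echo "no reboot required"
fi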
16.1.2. Docker & Docker Compose
Docker engine will be updated as part of the regular Ubuntu packages update.
To update Docker Compose, first check the currently installed version:
docker compose version
Next, check what is the latest release version:
curl --silent https://api.github.com/repos/docker/compose/releases/latest | grep tag_name
If a newer version is available, it can be installed the same way as described in the Docker Compose section.
16.1.3. Docker Images
The first step is to check if newer Docker images (including base images) are available. Unfortunately, at the time of writing, Docker doesn’t have such functionality out of the box. One workaround is to use 3rd party tools that can do the job, such as Watchtower [17], but I decided not to use it due to lack of flexibility and security considerations (as it requires access to the Docker engine).
While available Docker image versions can be checked on the Docker Hub website, this is not a very convenient way to do it if you have to check multiple images. An arguably better solution is to use the script below, which checks image versions locally and compares them against image versions fetched from the Docker Hub. It then presents sorted lists of local and remote image tags (versions) in a format that makes it easy to identify what (if any) newer images are available. This is not the best solution, as ideally I would like this script to simply answer whether a newer version is available or not, but at the moment I found no easy way to do this via the Docker registry API.
Since the script uses jq
tool to parse JSON, it needs to be installed first:
sudo apt install jq
To create this script, create the /usr/local/sbin/check-docker-tags.sh
file with the following content:
#!/bin/sh
# List of images to check
IMAGES="debian httpd postgres" (1)
for IMAGE in $IMAGES; do
echo "> Checking Image: $IMAGE"
LOCAL_TAGS=`sudo docker images $IMAGE --format="{{.Tag}}" | sort -V -r`
if [ -z "$LOCAL_TAGS" ]; then
echo "> No image tags found locally"
else
echo "> Local Tags:"
for LOCAL_TAG in $LOCAL_TAGS; do
echo " $LOCAL_TAG"
done
REMOTE_TAGS=`curl -s https://registry.hub.docker.com/v2/repositories/library/$IMAGE/tags/?page=1\&page_size=25 | jq .results[].name | sort -V -r | xargs`
echo "> Remote Tags:"
for REMOTE_TAG in $REMOTE_TAGS; do
FOUND=0
for LOCAL_TAG in $LOCAL_TAGS; do
if [ "$LOCAL_TAG" = "$REMOTE_TAG" ]; then
FOUND=1
break
fi
done
[ $FOUND -eq 1 ] && echo -n " > "
[ $FOUND -eq 0 ] && echo -n " "
echo "$REMOTE_TAG"
done
fi
echo ""
done
1 | Set this variable to the list of images that you want to check. |
Assign the following permissions to the file:
sudo chmod 755 /usr/local/sbin/check-docker-tags.sh
To run the script simply do check-docker-tags.sh
.
The sections below describe how to update Docker images when new image or new base image version is available.
Updating SOCKS5 Proxy Image
Edit the /root/silverbox/containers/vpn-proxy/docker-compose.yaml
file
and update the Debian image version in the build arguments:
args:
version: '11.5-slim'
Stop the container, rebuild the image and start new container from the new image:
sudo docker compose -f /root/silverbox/containers/vpn-proxy/docker-compose.yaml down
sudo docker compose -f /root/silverbox/containers/vpn-proxy/docker-compose.yaml build --pull
sudo docker compose -f /root/silverbox/containers/vpn-proxy/docker-compose.yaml up -d
Since the SSH host identity key is stored on disk outside of the container, the proxy should reconnect seamlessly.
Updating DNS Updater Image
Edit the /root/silverbox/containers/dns-updater/Dockerfile
file and update the Debian image version:
FROM debian:11.5-slim
Build the new image:
sudo docker build -t dns-updater --network common /root/silverbox/containers/dns-updater
Since the image is only used in a disposable container, the new image will be picked up automatically next time the service runs.
Updating Transmission Image
Edit the /root/silverbox/containers/transmission/docker-compose.yaml
file
and update the Debian image version in the build arguments:
args:
version: '11.5-slim'
Stop the container, rebuild the image and start new container from the new image:
sudo docker compose -f /root/silverbox/containers/transmission/docker-compose.yaml down
sudo docker compose -f /root/silverbox/containers/transmission/docker-compose.yaml build --pull
sudo docker compose -f /root/silverbox/containers/transmission/docker-compose.yaml up -d
Updating Reverse Proxy Image
Edit the /root/silverbox/containers/reverse-proxy/docker-compose.yml
file
and update the Apache httpd image version:
image: 'httpd:2.4.54'
Stop the container, pull the new image and start new container from the new image:
sudo docker compose -f /root/silverbox/containers/reverse-proxy/docker-compose.yml down
sudo docker compose -f /root/silverbox/containers/reverse-proxy/docker-compose.yml pull
sudo docker compose -f /root/silverbox/containers/reverse-proxy/docker-compose.yml up -d
16.1.4. Nextcloud
By default, Nextcloud will automatically show a notification when a new version is available. This can also be checked on the Settings → Overview page.
The upgrade procedures for Nextcloud are described in the Nextcloud admin guide (https://docs.nextcloud.com/server/stable/admin_manual/maintenance/upgrade.html) and in the Nextcloud Docker images repository readme (https://github.com/nextcloud/docker#update-to-a-newer-version).
While Nextcloud supports skipping point releases (e.g. upgrading from 15.0.1 to 15.0.3 while skipping 15.0.2), the admin guide recommends installing all point releases.
Before upgrading, it is a good idea to consult the changelog (https://nextcloud.com/changelog) to see what is new in the release and to check if any extra steps are required during an upgrade.
To upgrade Nextcloud, first tear down the existing containers:
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml down
Edit the /root/silverbox/containers/nextcloud/docker-compose.yml
file
and update Docker image version for nextcloud-fpm
.
You can also update images for the Postgres database (nextcloud-db
)
and the Apache httpd web server (nextcloud-web
), if newer versions are available.
If upgrading the Postgres image, make sure that the new version is supported by Nextcloud,
and check the Postgres documentation to see if any extra upgrade steps are required.
Pull and build new images:
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml pull
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml build --pull
Start Nextcloud:
sudo docker compose -f /root/silverbox/containers/nextcloud/docker-compose.yml up -d
Open the Nextcloud UI and an upgrade prompt should appear. The upgrade can be initiated from this prompt.
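Alternatively, the upgrade can be started from the command line using Nextcloud’s occ tool inside the container (same container name as in the Nextcloud chapter), which can be preferable for larger installations:
sudo docker exec --user www-data nextcloud-fpm php occ upgrade
sudo docker exec --user www-data nextcloud-fpm php occ status   # verify the reported version after the upgrade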
After upgrade, navigate to the Settings → Overview page and see if any new warnings have appeared. If you see any warnings, consult Nextcloud admin guide on how to fix them.
16.1.5. Firefly III
Checking for a new version can be done in the Web UI. Navigate to Options → Administration → Check for updates page and click Check Now! button under the Check for updates now section.
The upgrade instructions for Firefly III are described in the documentation (https://docs.firefly-iii.org/advanced-installation/upgrade). For Docker Compose deployment it is simply updating Docker images.
Before upgrading, it is a good idea to consult the changelog (https://github.com/firefly-iii/firefly-iii/blob/main/changelog.md) to see what is new in the release and to check if any extra steps are required during an upgrade.
To upgrade Firefly III, first tear down the existing containers:
sudo docker compose -f /root/silverbox/containers/firefly/docker-compose.yml down
Edit the /root/silverbox/containers/firefly/docker-compose.yml
file
and update Docker image version for firefly-app
.
You can also update image for the Postgres database (firefly-db
), if newer image is available.
Pull new images:
sudo docker compose -f /root/silverbox/containers/firefly/docker-compose.yml pull
Start Firefly III:
sudo docker compose -f /root/silverbox/containers/firefly/docker-compose.yml up -d
16.2. Monitoring
This section gives some ideas on how to monitor system health and overall status.
16.2.1. Monit
Monit has a lot of useful information about the system and running services, and it is worth checking it from time to time.
This information can either be viewed in the Monit web interface, or via the sudo monit status
command.
16.2.2. Logs
Logs from the system and different services can provide a lot of useful information and should be checked periodically for anomalies and errors.
System Logs
Ubuntu collects system logs with Systemd Journald service, so most of the logs can be viewed using journalctl
tool.
For example, to view all log messages since system boot with priority level warning or above do:
journalctl -p warning -b
For more examples on how to use journalctl
to view logs refer to journalctl
documentation.
Some logs are also written into files under /var/log
directory
and can be viewed with any text editor or with cat
/tail
commands.
Systemd Service Logs
Logs from Systemd services can be viewed with the journalctl
command as well.
For example, to view Docker service logs do:
journalctl -u docker.service
Docker Logs
By convention, processes running inside Docker containers write log output to the standard output stream.
These logs are then collected by the Docker engine and can be viewed with the docker logs command.
For example, to view Nextcloud’s PostgreSQL container logs do:
sudo docker logs nextcloud-db
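The docker logs command also accepts flags for narrowing or following the output, for example:
sudo docker logs --tail 100 nextcloud-db        # only the last 100 lines
sudo docker logs --since 1h -f nextcloud-db     # follow new messages, starting from one hour ago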
Nextcloud Logs
Apart from the Nextcloud containers logs, Nextcloud maintains its own log that can be viewed on the Settings → Logging page.
16.2.3. Disk Health
Monitoring disk status and health is crucial in preventing data loss and diagnosing performance issues. It is especially important for SSDs, since every SSD cell has a limited number of times it can be erased and written.
Some information about disks and file systems is available in Monit.
To view how much data was read from or written to the disk since system boot, you can use the vmstat -d command.
The output of this command is in sectors rather than bytes.
To find the sector size, check the output of the sudo fdisk -l command.
It appears that the system counts discarded blocks
(i.e. free blocks reported to the SSD when fstrim runs, by default once a week) as writes,
thus inflating the total sectors written count reported by the vmstat -d command.
This means that the vmstat -d output will only be accurate from boot until the first fstrim run.
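As a rough illustration, the total amount of data written to a disk (here /dev/sda) since boot can be estimated by multiplying the write sectors column by the sector size. The column position and the 512-byte sector size below are assumptions; double-check them against the vmstat -d header and the fdisk output on your system:
SECTORS=$(vmstat -d | awk '$1 == "sda" { print $8 }')    # the "sectors" column under "writes" for sda
echo "$((SECTORS * 512 / 1024 / 1024)) MiB written since boot"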
To view what processes are utilizing the disk you can use iotop
tool, for example:
sudo iotop -aoP
To install iotop
do: sudo apt install iotop
.
To retrieve SMART (Self-Monitoring, Analysis and Reporting Technology) data from the disk you can use smartctl
tool
from the smartmontools
package.
To read SMART data from the disk do:
sudo smartctl -a /dev/sda (1)
1 | Replace /dev/sda with the actual disk device. |
The actual output of this command will depend on the disk model and manufacturer. Usually it has a lot of useful information such as total number of blocks written, media wearout, errors, etc.
However, the output from smartctl
is only accurate if the disk is present in the smartctl
database so that
SMART fields can be decoded and interpreted correctly.
Usually the database has most of the consumer SSDs; however, Ubuntu uses an extremely outdated version of this database,
so there is a good chance your disk won’t be there.
If in the smartctl output you see a line similar to this: Device is: Not in smartctl database,
it means your disk is not in the current database and you cannot really trust the output.
Normally the smartctl
database can be easily updated using update-smart-drivedb
script, however,
for dubious reasons Ubuntu package maintainers decided not to include this script in the smartmontools
package.
Fortunately, this database is just a single file that can be downloaded from the smartmontools
GitHub mirror:
wget https://github.com/mirror/smartmontools/raw/master/drivedb.h
This new database file can then be passed to smartctl
like so:
sudo smartctl -a /dev/sda -B drivedb.h
It is very important to monitor SSD media wearout and quickly find and diagnose abnormally high write rates to prevent possible unexpected data loss and disk failure. Even though modern SSDs are quite durable and smart about wear leveling, one service writing tons of logs non stop could be enough to wear the disk prematurely.
16.3. Keeping System Clean
This section gives some ideas on how to keep the system clean and how to prevent accumulation of unnecessary or obsolete information.
16.3.1. Cleaning System Packages
To remove no longer required packages do:
sudo apt autoremove
16.3.2. Cleaning Docker
To view how much space is used by Docker:
sudo docker system df
A lot of unused Docker images can accumulate after upgrades. To remove all dangling images do:
sudo docker image prune
This, however, only removes dangling images and leaves images that are not dangling but may still be unused.
All unused images can be removed with the -a flag,
but this is dangerous, as some images that are not in use at the moment
can still be required later (for example, the dns-updater image).
One solution is to remove all currently unused images semi-manually:
sudo docker rmi `sudo docker images -q nextcloud:*`
sudo docker rmi `sudo docker images -q httpd:*`
sudo docker rmi `sudo docker images -q postgres:*`
This will generate errors and skip deletion of images that are currently in use.
To clean Docker build cache do:
sudo docker builder prune
16.3.3. Cleaning and Minimizing Logs
Cleaning Old System Logs
System logs will grow over time. To check the size of all Journald logs:
journalctl --disk-usage
Journald logs can be easily cleaned by size:
sudo journalctl --vacuum-size=500M
or by time:
sudo journalctl --vacuum-time=2years
Adjusting Journald Configuration
In the default Journald configuration in Ubuntu the Journald messages are also forwarded to syslog,
which is unnecessary (unless you use specific tools that rely on that).
This can be disabled by setting ForwardToSyslog
parameter to no
in the /etc/systemd/journald.conf
file.
Additionally, to potentially reduce writes to disk you can increase SyncIntervalSec
parameter
in the /etc/systemd/journald.conf
file.
This parameter controls how frequently Journald messages are synced to disk,
so only increase it if the server is connected to reliable UPS and unexpected shutdowns are unlikely.
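For example, the relevant part of the /etc/systemd/journald.conf file could then look like this (the SyncIntervalSec value below is just an illustration, pick what suits your setup):
[Journal]
ForwardToSyslog=no
SyncIntervalSec=15m
For the changes to take effect, restart Journald with sudo systemctl restart systemd-journald (or simply reboot).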
16.3.4. Disabling Motd-News
By default, Ubuntu will fetch news daily to show in the message of the day (motd),
which I find rather annoying and which unnecessarily floods the logs.
To disable it, edit the /etc/default/motd-news
file and change ENABLED
parameter to 0
.
While this removes news from the motd, it doesn’t stop motd-news
timer.
To stop and disable the timer, do:
sudo systemctl stop motd-news.timer
sudo systemctl disable motd-news.timer
References
-
[1] “Silverbox Changelog”: https://github.com/ovk/silverbox/blob/master/CHANGELOG
-
[2] “Network UPS Tools: Hardware compatibility list”: https://networkupstools.org/stable-hcl.html
-
[3] “Arch Wiki: Solid state drive”: https://wiki.archlinux.org/index.php/Solid_state_drive
-
[4] “ssh-audit script”: https://github.com/arthepsy/ssh-audit
-
[5] “Arch Wiki: Hddtemp”: https://wiki.archlinux.org/index.php/Hddtemp
-
[6] “Unbound DNS Server”: https://nlnetlabs.nl/projects/unbound
-
[7] “Cloudflare DNS Server”: https://www.cloudflare.com/learning/dns/what-is-1.1.1.1
-
[8] “Docker Compose”: https://docs.docker.com/compose
-
[9] “NameSilo”: https://namesilo.com
-
[10] “Transmission”: https://transmissionbt.com
-
[11] “Nextcloud”: https://nextcloud.com
-
[12] “Let’s Encrypt”: https://letsencrypt.org
-
[13] “Certbot”: https://certbot.eff.org
-
[14] “Borg Backup”: https://www.borgbackup.org
-
[15] “OVH Object Storage”: https://www.ovh.com/world/public-cloud/storage/object-storage
-
[16] “Rclone”: https://rclone.org
-
[17] “Watchtower”: https://containrrr.github.io/watchtower
-
[18] “Git Book: Git Server”: https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server
-
[19] “htpasswd”: https://httpd.apache.org/docs/2.4/programs/htpasswd.html
-
[20] “Firefly III”: https://www.firefly-iii.org