Arch Linux based File Server (Ancillary File Services)
Since tennessine is a file server first and foremost, we need to have some kind of network file service installed and set up. Much of this depends greatly on what kind of hosts you have on your network. Since most of my devices are Linux, Android, or macOS/Apple iOS systems, I chose the Network File System (NFS), which was originally developed by Sun Microsystems for UNIX (SunOS and Solaris). Other file services include Samba/SMB/CIFS (for sharing files with Microsoft Windows from Linux/UNIX/macOS), shairplay (an Apple AirPlay protocol service), or even SSHFS (a network file system based on SFTP). It also bears mentioning that the default configuration of openssh enables the SFTP (Secure File Transfer Protocol) subsystem, so if sshd is running, SFTP is likely also available, unless explicitly disabled by the system administrator. SSH-based file transfer is used by utilities like rsync (which tunnels over SSH) and by anything that speaks SFTP directly.
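As a quick illustration of that last point (the host and paths here are placeholders, not part of my setup), an SSH/SFTP-backed transfer or mount can be as simple as:
rsync -avP user@fileserver:/srv/data/ /local/backup/    # rsync tunneled over SSH
sshfs user@fileserver:/srv/data /mnt/remote             # FUSE mount over SFTP
Neither requires anything on the server beyond a running sshd.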
Network File Service (NFS)
These instructions are just enough to get the NFS service up and running; there are quite a few options for configuring NFS, and I don't pretend to know everything about it. The Arch Wiki NFS article goes into great depth on this topic. Note that NFS comes with no user authentication or encryption, so it's best to install it only on a network you trust. In my case, I make it available only on my LAN (10.20.30.0/24), which is the untagged VLAN for my home network.
If you want greater security for NFS, network authentication subsystems such as Kerberos (krb5) can be used, but their setup can be quite involved, and is out of scope for this series of articles.
Server Setup
On the server, I installed nfs-utils, which pulls in a few dependencies. There are two files I need to edit, both of which come with the nfs-utils package: /etc/nfs.conf and /etc/exports.
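For reference, installing the package on Arch is the usual one-liner:
sudo pacman -S nfs-utils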
/etc/nfs.conf
Over time, the maintainers of this package will update the default configuration according to best practices for performance. These updates get installed as /etc/nfs.conf.pacnew, which you'll want to incorporate into your configuration. The main thing I set is the rootdir parameter, which I set to /srv/nfs. The Arch Wiki NFS article mentions that this rootdir parameter is ignored by NFSv4, but it does seem to be working for me. This directory actually contains bind mounts to /data/media/music and /data/media/video, where I store my music and video collections. More on the corresponding /etc/fstab entries on my client systems later on down below. Thus far, this is all I make available via NFS.
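For illustration, the relevant pieces on the server look roughly like the following; the /etc/fstab lines are a sketch based on the paths mentioned above, and the section rootdir lives under may differ between nfs-utils versions (check man 5 nfs.conf):
# /etc/nfs.conf (excerpt)
[exports]
rootdir=/srv/nfs
# /etc/fstab on the server: bind mounts into the NFS root
/data/media/music /srv/nfs/music none bind 0 0
/data/media/video /srv/nfs/video none bind 0 0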
/etc/exports
This is where the directories made available over NFS are exported. It contains two lines, each beginning with a subdirectory path relative to /srv/nfs (see the /etc/nfs.conf parameter rootdir above):
/music 10.20.30.0/24(rw,sync,no_subtree_check,no_root_squash)
/video 10.20.30.0/24(rw,sync,no_subtree_check,no_root_squash)
See the man 5 exports manual page for details on the parameters used here. I want LAN clients to be able to write to these NFS shares, hence the rw parameter. I allow every host on the LAN read-write access. If you want to be more restrictive, you can specify individual IP addresses or hostnames in a space-separated list of allowed hosts. You can also mix and match read-only hosts with read-write hosts, but I have not done that here. According to the Arch Wiki NFS article, I need to add the following line to /etc/exports:
/srv/nfs *(rw,sync,no_subtree_check,no_root_squash,fsid=0)
But I haven't done that; the server seems to be honoring the rootdir parameter regardless. I am most definitely using the NFSv4 option on my client; more on that in the client section below.
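If I did want to be more restrictive, exports(5) allows per-host options on a single line; something like this (the client address is hypothetical) would give one host read-write access and the rest of the LAN read-only access:
/music 10.20.30.40(rw,sync,no_subtree_check) 10.20.30.0/24(ro,sync,no_subtree_check)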
Start service
Once NFS is set up, I can enable and start the nfs-server service:
sudo systemctl enable --now nfs-server.service
That's it for setting up the NFS server! If you modify /etc/exports while nfs-server.service is running, you don't need to restart the service (which could interrupt any ongoing transfers). You may simply run exportfs -arv as root to reread /etc/exports and update which hosts have access to the exported directories, or add new exports as needed.
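To double-check what is actually being exported after such a change, exportfs can list the active exports on the server, and showmount (also part of nfs-utils) can query them from a client; the hostname here is mine:
sudo exportfs -v
showmount -e tennessine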
Client Setup
As with the server, install the nfs-utils package. This provides the mount.nfs executable, which allows the user to mount remote NFS shares (directories exported by a remote NFS service). You can use mount.nfs to mount these to a local mountpoint manually, or simply use the generic mount program, which calls mount.nfs for you. See the man mount.nfs manual page for NFS-specific options.
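As a quick sanity check before committing anything to /etc/fstab, a share can be mounted by hand (the mountpoint here is just an example):
sudo mkdir -p /mnt/music
sudo mount -t nfs4 tennessine:/music /mnt/music
sudo umount /mnt/music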
The main client I have this set up on is osmium, an old ThinkStation D20 workstation on which I have various services running. One of those is plex-media-server-plexpass, which I use as my local Plex server. Since this system has limited disk space, I actually store all of the media (audio and video) on my file server, tennessine. To Plex, the media files appear to be locally-mounted directories; the fact that they're remote NFS shares is completely transparent (Plex neither knows nor cares). I have tweaked this NFS setup over the years, as NFS performance has a direct impact on playback, especially video playback, on my local Plex clients.
Rather than manually mount the NFS shares, I have added the following lines to /etc/fstab on osmium:
tennessine:/music /media/music nfs _netdev,x-systemd.automount,x-systemd.device-timeout=10,x-systemd.idle-timeout=1min,x-systemd.requires=network-online.target,rw,nfsvers=4,timeo=14,rsize=1048576,wsize=1048576 0 0
tennessine:/video /media/video nfs _netdev,x-systemd.automount,x-systemd.device-timeout=10,x-systemd.idle-timeout=1min,x-systemd.requires=network-online.target,rw,nfsvers=4,timeo=14,rsize=1048576,wsize=1048576 0 0
The main piece is the remote share name, namely tennessine:/music and tennessine:/video, with the remote paths relative to the NFS rootdir (the client need not know what this root directory actually is). Next are the mount points in the local filesystem hierarchy (/media/music and /media/video, respectively). Then comes the filesystem type, nfs, followed by the mount options. See the man mount.nfs manual page for a complete description of these options. From the client I'm enforcing NFSv4 with the nfsvers=4 parameter. You can choose NFSv3 instead, but that's all the QNAP OS supported, and I always had major issues with it before I deployed tennessine. The systemd parameters allow systemd to manage the mounting and logging of these NFS shares on the client. The only other things of note are the rsize and wsize parameters. These are (or at least were) the maximum allowable read/write buffer sizes for NFS. Without setting these to the maximum, video playback in the Plex client would buffer every few moments, making watching videos horribly unpleasant. These shares are also set up to be mounted on boot, but a reboot isn't absolutely necessary to activate these /etc/fstab entries. Simply run sudo mount -a and it should work fine.
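Once the shares are mounted (either via mount -a or by simply accessing the automount paths), the result can be inspected from the client; findmnt comes with util-linux, and nfsstat -m (from nfs-utils) shows the options, including rsize and wsize, that were actually negotiated with the server:
findmnt -t nfs4
nfsstat -m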
That's about it for NFS!
Conclusion
Since I don't currently have any other ancillary file services running (other than the default SFTP which is part of openssh), I won't cover anything else in this series of articles.
Next Steps
This is the last article in my Arch Linux based File Server series! The other articles in this series describe how I achieved the design and implementation of my Arch Linux based file server, tennessine. Please leave feedback below if you have any comments or questions!