Install Arch Linux on QNAP NAS

PURPOSE

I have an old QNAP TS-559 Pro+ I purchased in 2010, with five 2T Samsung SATA mechanical hard drives.  I never really liked the QNAP OS (which is basically a nonstandard Linux distribution with a disgusting web GUI), but at the time I had no experience with RAID or multi-disk setups, and I used this as my main data storage for over ten years.  I'd been trying to monitor the disks with SMART, but QNAP wasn't very good at notifying me when things went wrong.

Now, with all of the security problems the QNAP OS has had over the last couple of years, I've decided to throw it out and replace it with Arch Linux. The inspiration came from QNAP's own Firmware Recovery Guide for x86-based NAS.

I have set up my new NAS from scratch, from a refurbished Dell R730, with eight 14T drives.  I have yet to write an article describing that setup, but I started with Arch Linux and Btrfs and I've been quite happy with it.  Now it's time to do the same with this QNAP, and possibly use it as a stage for restoring and testing my Borg Backup.  Hopefully these eleven-year-old disks are up to the task; we'll see!

PREREQUISITES

  • A QNAP NAS with an x86_64 CPU and at least 512M of RAM (in my case, the QNAP TS-559 Pro+ has a dual-core [four virtual Hyper-Threading cores] x86_64 Intel Atom CPU and 1G of RAM).
  • The latest copy of the Arch Linux ISO.  Standard Arch Linux is only compatible with x86_64/amd64 processors, and requires at least 512M RAM.
  • Hard drives whose data you don't care about.  The firmware guide linked above instructs the user to disconnect all hard drives, but I will not do that, since I want to wipe the mdadm RAID headers and replace them with Btrfs.  This will DESTROY ALL DATA ON THESE DRIVES.
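    One step I'd add before writing the ISO: verify its checksum against the value published on the Arch download page. A minimal sketch of the mechanics, run here against a scratch file rather than the real ISO:

```shell
# Verifying a download with sha256sum --check. The scratch file
# stands in for the ISO; with the real image you'd use the
# sha256sums.txt published alongside it on the mirror.
f=$(mktemp)
printf 'stand-in for archlinux-x86_64.iso' > "$f"
sha256sum "$f" > "$f.sum"          # in practice: download the checksum file
sha256sum --check --status "$f.sum" && echo "checksum OK"
rm -f "$f" "$f.sum"
```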

PROCEDURE

  1. Power down the QNAP NAS, and connect a keyboard and monitor.

  2. Copy the Arch ISO to a USB flash drive, and insert it into one of the USB ports in the back. There may be a USB port in the front (mine is labeled COPY, I didn't use it to boot). These are steps 1.1 through 1.3 of the current Arch Linux Installation Guide.
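    The usual way to write the image is dd. The device name is a placeholder here; confirm it with lsblk first, since dd overwrites the target without asking. The demonstration below uses scratch files so it is safe to run:

```shell
# Against the real drive this would be (sdX is a placeholder):
#   dd if=archlinux-x86_64.iso of=/dev/sdX bs=4M conv=fsync status=progress
# Demonstrated with scratch files standing in for the ISO and drive:
iso=$(mktemp) dev=$(mktemp)
printf 'stand-in ISO contents' > "$iso"
dd if="$iso" of="$dev" bs=4M conv=fsync status=none
cmp -s "$iso" "$dev" && echo "image copied verbatim"
rm -f "$iso" "$dev"
```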

  3. Power on the QNAP NAS to boot. The American Megatrends BIOS should display on initial boot. Pressing DEL should take you to the BIOS Setup, where you can choose the USB boot device order. I looked at this first, but didn't change anything. I rebooted after reviewing the BIOS setup.

  4. There is a separate boot menu to select a boot device (separate from the BIOS Setup menu). For me, pressing F11 brought up the boot device menu. Select your inserted USB drive (the name will vary, for me it was "Generic USB flash drive").

  5. If all goes well, you should be greeted with the Arch Linux boot menu. I don't have any special needs, so I simply chose the default option and booted Arch Linux by pressing Enter.

  6. After a few moments, the Arch ISO root prompt should appear. This is step 1.4 of the Arch Linux Installation Guide.

  7. The first command I ran was free -h, to ensure I had enough RAM to proceed with the installation. Here's what I saw:

                  total        used        free      shared  buff/cache   available
    Mem:          961Mi       127Mi       306Mi       174Mi       528Mi       522Mi
    Swap:            0B          0B          0B
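    The same check can be scripted, reading MemAvailable straight from /proc/meminfo and comparing it against the installer's 512 MiB floor (a sketch, not part of the original procedure):

```shell
# Abort early if available memory is under the 512 MiB the Arch ISO
# needs. /proc/meminfo reports MemAvailable in KiB.
avail_kib=$(awk '/^MemAvailable:/ { print $2 }' /proc/meminfo)
if [ "$avail_kib" -ge $((512 * 1024)) ]; then
    echo "enough RAM to install"
else
    echo "only ${avail_kib} KiB available; too little" >&2
fi
```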
    
  8. Next, I ran lsblk -f to see what disks are available:

    NAME      FSTYPE            FSVER            LABEL          UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
    loop0     squashfs          4.0                                                                        0   100% /run/archiso/airootfs
    sda                                                                                                             
    |-sda1    linux_raid_member 0.90.0                          2a4fc966-b266-152a-a95d-b8568abffd89                
    | `-md125 ext3              1.0                             00da1ca2-853a-425a-9d24-5b6b781853b7                
    |-sda2    linux_raid_member 0.90.0                          a901d9ec-9995-4602-7dfd-a0917d4560a8                
    | `-md127 swap              1                               758bf748-4844-49d9-b996-5926c0f951c8                
    |-sda3    linux_raid_member 0.90.0                          824fdb0c-1b9c-413d-d990-7eb5c420603e                
    | `-md126 ext4              1.0                             2c70f0c7-9dcf-40bb-8a6d-69a0e4dece53                
    `-sda4    linux_raid_member 0.90.0                          060a74d9-e9fd-f8d0-f1df-d4da5ea31708                
      `-md124 ext3              1.0                             2df61ff4-fd5b-48b4-8917-b2c75b7ee9a7                
    sdb                                                                                                             
    |-sdb1    linux_raid_member 0.90.0                          2a4fc966-b266-152a-a95d-b8568abffd89                
    | `-md125 ext3              1.0                             00da1ca2-853a-425a-9d24-5b6b781853b7                
    |-sdb2    linux_raid_member 0.90.0                          a901d9ec-9995-4602-7dfd-a0917d4560a8                
    | `-md127 swap              1                               758bf748-4844-49d9-b996-5926c0f951c8                
    |-sdb3    linux_raid_member 0.90.0                          824fdb0c-1b9c-413d-d990-7eb5c420603e                
    | `-md126 ext4              1.0                             2c70f0c7-9dcf-40bb-8a6d-69a0e4dece53                
    `-sdb4    linux_raid_member 0.90.0                          060a74d9-e9fd-f8d0-f1df-d4da5ea31708                
      `-md124 ext3              1.0                             2df61ff4-fd5b-48b4-8917-b2c75b7ee9a7                
    sdc                                                                                                             
    |-sdc1    linux_raid_member 0.90.0                          2a4fc966-b266-152a-a95d-b8568abffd89                
    | `-md125 ext3              1.0                             00da1ca2-853a-425a-9d24-5b6b781853b7                
    |-sdc2    linux_raid_member 0.90.0                          a901d9ec-9995-4602-7dfd-a0917d4560a8                
    | `-md127 swap              1                               758bf748-4844-49d9-b996-5926c0f951c8                
    |-sdc3    linux_raid_member 0.90.0                          824fdb0c-1b9c-413d-d990-7eb5c420603e                
    | `-md126 ext4              1.0                             2c70f0c7-9dcf-40bb-8a6d-69a0e4dece53                
    `-sdc4    linux_raid_member 0.90.0                          060a74d9-e9fd-f8d0-f1df-d4da5ea31708                
      `-md124 ext3              1.0                             2df61ff4-fd5b-48b4-8917-b2c75b7ee9a7                
    sdd                                                                                                             
    |-sdd1    linux_raid_member 0.90.0                          2a4fc966-b266-152a-a95d-b8568abffd89                
    | `-md125 ext3              1.0                             00da1ca2-853a-425a-9d24-5b6b781853b7                
    |-sdd2    linux_raid_member 1.0              5              d4116d1c-5a51-908b-449c-f5d3d7757823                
    | `-md127 swap              1                               758bf748-4844-49d9-b996-5926c0f951c8                
    |-sdd3    linux_raid_member 0.90.0                          824fdb0c-1b9c-413d-d990-7eb5c420603e                
    | `-md126 ext4              1.0                             2c70f0c7-9dcf-40bb-8a6d-69a0e4dece53                
    `-sdd4    linux_raid_member 0.90.0                          060a74d9-e9fd-f8d0-f1df-d4da5ea31708                
      `-md124 ext3              1.0                             2df61ff4-fd5b-48b4-8917-b2c75b7ee9a7                
    sde                                                                                                             
    |-sde1    linux_raid_member 0.90.0                          2a4fc966-b266-152a-a95d-b8568abffd89                
    | `-md125 ext3              1.0                             00da1ca2-853a-425a-9d24-5b6b781853b7                
    |-sde2    linux_raid_member 1.0              5              d4116d1c-5a51-908b-449c-f5d3d7757823                
    | `-md127 swap              1                               758bf748-4844-49d9-b996-5926c0f951c8                
    |-sde3    linux_raid_member 0.90.0                          824fdb0c-1b9c-413d-d990-7eb5c420603e                
    | `-md126 ext4              1.0                             2c70f0c7-9dcf-40bb-8a6d-69a0e4dece53                
    `-sde4    linux_raid_member 0.90.0                          060a74d9-e9fd-f8d0-f1df-d4da5ea31708                
      `-md124 ext3              1.0                             2df61ff4-fd5b-48b4-8917-b2c75b7ee9a7                
    sdf       iso9660           Joliet Extension ARCH_202110    2021-10-01-17-00-36-00                              
    |-sdf1    iso9660           Joliet Extension ARCH_202110    2021-10-01-17-00-36-00                     0   100% /run/archiso/bootmnt
    `-sdf2    vfat              FAT16            ARCHISO_EFI    EEA4-A93D                                           
    sdg                                                                                                             
    |-sdg1    ext2              1.0                             aabf92c8-14ce-46a8-9af2-9e27b1a5412e                
    |-sdg2    ext2              1.0              QTS_BOOT_PART2 fb8691c0-9b2a-49f4-89c4-03aabe16fa1f                
    |-sdg3    ext2              1.0              QTS_BOOT_PART3 5247d82d-b11e-48c1-ab9d-adaf87488239                
    |-sdg4                                                                                                          
    |-sdg5    ext2              1.0                             ba5d6eee-24a9-43bc-b861-8f4ebfbcd2c7                
    `-sdg6    ext2              1.0                             585712c4-ff3c-4ac1-a60c-4f06e9544d73                
    

    As you can see, the existing disk partitioning is quite complicated. The interesting bits: I have five mechanical Serial ATA hard drives, /dev/sd[a-e]. The Arch ISO USB drive is /dev/sdf, while /dev/sdg is the internal USB-attached flash storage, as the QTS_BOOT_PART[23] labels show.
    This mdadm RAID array is very old, and the Linux kernel on the booted Arch ISO (version 5.14.8) detects it automatically as four md devices (/dev/md12[4-7]). I'm not sure how these are assembled into one volume; the QNAP Linux OS is not laid out in a standard Linux Standard Base (LSB) fashion.
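    One way to see how the kernel assembled them is /proc/mdstat (or mdadm --detail on each device). A sketch of pulling the member count out of an mdstat line, run here against a sample line modeled on this array (the exact text is illustrative, not copied from the machine):

```shell
# A /proc/mdstat array line looks like:
#   md125 : active raid1 sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
# Fields 1-3 are the name, colon, and state; the raid level is
# field 4, and everything after that is a member device.
line='md125 : active raid1 sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]'
set -- $line
echo "$1 is $4 with $(($# - 4)) members"
```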

  9. Rather than spending hours trying to remount these in Arch, I will repartition them fresh. First, I used mdadm --manage --stop on all of the md devices created by the kernel:

    # for md in /dev/md12{4..7}; do
        mdadm --manage --stop ${md}
    done
    
  10. Next, I used sgdisk --zap-all to delete the partition table on /dev/sd[a-eg].

    # for disk in /dev/sd{a..e} /dev/sdg; do
        sgdisk --zap-all ${disk}
    done
    
  11. I ran partprobe with no options to ensure the kernel had a fresh, clean partition table for all block devices. lsblk -f showed the following:

    NAME   FSTYPE   FSVER            LABEL       UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
    loop0  squashfs 4.0                                                                     0   100% /run/archiso/airootfs
    sda                                                                                              
    sdb                                                                                              
    sdc                                                                                              
    sdd                                                                                              
    sde                                                                                              
    sdf    iso9660  Joliet Extension ARCH_202110 2021-10-01-17-00-36-00                              
    |-sdf1 iso9660  Joliet Extension ARCH_202110 2021-10-01-17-00-36-00                     0   100% /run/archiso/bootmnt
    `-sdf2 vfat     FAT16            ARCHISO_EFI EEA4-A93D                                           
    sdg                                                                                              
    
  12. Before continuing, I returned to the Arch Linux Installation Guide. Since this host had been on the network before, with a static DHCP lease set up on my router, the IP address, gateway, and DNS information were already assigned on boot:

    # ip a s
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
         inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
         inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
        link/ether 00:08:9b:c5:26:0e brd ff:ff:ff:ff:ff:ff
    3: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
        link/ether 00:08:9b:c5:26:0d brd ff:ff:ff:ff:ff:ff
        inet 10.20.30.11/24 metric 100 brd 10.20.30.255 scope global dynamic enp3s0
           valid_lft 488sec preferred_lft 488sec
        inet6 fe80::208:9bff:fec5:260d/64 scope link
            valid_lft forever preferred_lft forever
    

    This host previously had the hostname sodium, which I intended to keep. My local network is 10.20.30.0/24, and most of the hosts on my network take chemical element names. Incidentally, 11 is the atomic number of sodium, so it has the host address 10.20.30.11. Yeah, I'm a geek. This was all assigned automatically, and I intended to keep it when I got to the network setup.

  13. The next step was to ensure the time on the system was set correctly, using timedatectl. Unfortunately, unlike most Linux systems, QNAP had the hardware/RTC clock set to local time rather than Coordinated Universal Time (UTC). I discovered this with the date command, which didn't show the correct time (it had also drifted forward about eight minutes, having been powered off for several months). To fix it, I ran the following:

    # date
    Sat Oct  2 08:39:49 PM UTC 2021
    # timedatectl set-ntp true
    # timedatectl set-local-rtc false --adjust-system-clock
    # timedatectl set-timezone America/New_York
    # date
    Sat Oct  2 08:31:23 PM EDT 2021
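    The confusion is easier to see with a conversion: the RTC held local (EDT) wall-clock time, but the kernel read it as if it were UTC, leaving the system clock roughly four hours behind real UTC. GNU date can illustrate the offset, using the timestamp from above:

```shell
# If 20:39 really were UTC, local time would be 16:39 EDT (UTC-4 on
# that date). Since the wall clock actually read about 20:31, the
# RTC was clearly storing local time, not UTC.
TZ=America/New_York date -d '2021-10-02 20:39:49 UTC' '+%H:%M %Z'
```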
    
  14. Next step was to partition the disks. A concise listing appeared in the lsblk -f output above. Below is the output of fdisk -l:

    Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk model: SAMSUNG HD204UI 
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/sdb: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk model: SAMSUNG HD204UI 
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/sdc: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk model: SAMSUNG HD204UI 
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/sdd: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk model: SAMSUNG HD204UI 
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/sde: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk model: SAMSUNG HD204UI 
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/sdf: 57.62 GiB, 61865984000 bytes, 120832000 sectors
    Disk model: USB Flash Disk  
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x8ab3e64f
    
    Device     Boot   Start     End Sectors  Size Id Type
    /dev/sdf1  *         64 1556479 1556416  760M  0 Empty
    /dev/sdf2       1556480 1732607  176128   86M ef EFI (FAT-12/16/32)
    
    
    Disk /dev/sdg: 492 MiB, 515899392 bytes, 1007616 sectors
    Disk model: USB DISK MODULE 
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/loop0: 673.11 MiB, 705802240 bytes, 1378520 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    

    My next step was to partition /dev/sdg. Since this system is so old, I chose an MS-DOS (MBR) partition table rather than GPT, in case the QNAP BIOS couldn't boot from a GPT disk. I used parted to do this with one command:

    # parted --align optimal --script -- /dev/sdg mklabel msdos mkpart primary 2048s 100% set 1 boot on
    # fdisk -l /dev/sdg
    Disk /dev/sdg: 492 MiB, 515899392 bytes, 1007616 sectors
    Disk model: USB DISK MODULE
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0xe19d9fd2
    
    Device     Boot Start     End Sectors  Size Id Type
    /dev/sdg1  *     2048 1007615 1005568  491M 83 Linux
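    The numbers check out: with 512-byte sectors, the partition spans everything after the 1 MiB alignment gap:

```shell
# 1007616 total sectors minus the 2048-sector (1 MiB) gap leaves
# 1005568 sectors; at 512 bytes each that is exactly 491 MiB,
# matching fdisk's Size column above.
sectors=$((1007616 - 2048))
echo "$sectors sectors = $((sectors * 512 / 1024 / 1024)) MiB"
```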
    
  15. Next, I partitioned the remaining disks. I created two partitions on each: one for a software (md) RAID6 array to hold swap space, and the other for Btrfs:

    # for disk in /dev/sd{a..e}; do
        parted --align optimal --script -- \
            ${disk} mklabel gpt \
            mkpart swap linux-swap 2048s 512MiB \
            mkpart data btrfs 512MiB 100%
    done
    

    The reason for the separate swap partitions is that Btrfs does not currently support swap files on multi-device filesystems.

  16. Next, combine the swap partitions into a RAID6 array with mdadm, then create the swap space on it:

    # mdadm --create /dev/md0 \
        --raid-devices=4 --spare-devices=1 --level=6 /dev/sd{a..e}1
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.
    # mkswap /dev/md0
    Setting up swapspace version 1, size = 1018 MiB (1067446272 bytes)
    no label, UUID=6a88dd29-dab7-4b27-b6bf-c5f05f2ffe63
    # swapon /dev/md0
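    The reported size lines up with the geometry: each swap partition is 511 MiB (from sector 2048, i.e. 1 MiB, up to 512 MiB), and RAID6 over four active members keeps two devices' worth of data:

```shell
# (members - 2) data devices x 511 MiB per partition; mdadm's
# metadata overhead accounts for the gap down to the 1018 MiB
# mkswap reports.
per_disk=$((512 - 1))                 # MiB per swap partition
usable=$(( (4 - 2) * per_disk ))      # RAID6 with 4 active members
echo "${usable} MiB raw capacity"
```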
    
  17. Create the Btrfs array, with data and metadata in RAID10 configuration:

    # mkfs.btrfs --data raid10 --metadata raid10 /dev/sd{a..d}2
    btrfs-progs v5.14.1
    See http://btrfs.wiki.kernel.org for more information.
    
    Label:              (null)
    UUID:               4e7ceed0-d011-4d22-9207-f30155447972
    Node size:          16384
    Sector size:        4096
    Filesystem size:    7.28TiB
    Block group profiles:
      Data:             RAID10            2.00GiB
      Metadata:         RAID10          512.00MiB
      System:           RAID10           16.00MiB
    SSD detected:       no
    Zoned device:       no
    Incompat features:  extref, skinny-metadata
    Runtime features:
    Checksum:           crc32c
    Number of devices:  4
    Devices:
       ID        SIZE  PATH
        1     1.82TiB  /dev/sda2
        2     1.82TiB  /dev/sdb2
        3     1.82TiB  /dev/sdc2
        4     1.82TiB  /dev/sdd2
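    Note that the 7.28 TiB "Filesystem size" is the raw pool (4 x 1.82 TiB); with RAID10 every block is stored twice, so usable capacity is roughly half:

```shell
# Raw pool divided by the RAID10 replication factor of 2
# (assumes equal-sized members, as here).
awk 'BEGIN { printf "about %.2f TiB usable\n", 4 * 1.82 / 2 }'
```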
    
    
  18. Next step is to create an ext4 filesystem on the boot partition, then mount the disks:

    # mkfs.ext4 /dev/sdg1
    # mount /dev/sda2 /mnt
    # mkdir /mnt/boot
    # mount /dev/sdg1 /mnt/boot
    

    Note, you only need to specify one of the member devices when mounting a Btrfs filesystem.

  19. Now the installation can begin in earnest. The first step is to select appropriate mirrors with reflector:

    # reflector --age 12 --country US --fastest 5 --protocol https --save /etc/pacman.d/mirrorlist
    
  20. After this I bootstrap the base installation with pacstrap:

    # pacstrap /mnt base linux linux-firmware btrfs-progs vim borg openssh smartmontools postfix logwatch mdadm
    
  21. Then I run genfstab:

    # genfstab -U /mnt >> /mnt/etc/fstab
    

    Looking at /mnt/etc/fstab, I see that it didn't pick up the swap partition (possibly because mdadm wasn't included in my original pacstrap run), so I added it manually. My /mnt/etc/fstab looks like this:

    # /dev/sda2
    UUID=4e7ceed0-d011-4d22-9207-f30155447972       /               btrfs           rw,relatime,space_cache,subvolid=5,subvol=/     0 0
    
    # /dev/sdg1
    UUID=6a5ee83a-c37b-453f-b574-712f24ecc801       /boot           ext4            rw,relatime     0 2
    
    # /dev/md0 (swap)
    UUID=6a88dd29-dab7-4b27-b6bf-c5f05f2ffe63       none            swap            defaults        0 0
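    For reference, the missing entry can be generated rather than typed, pulling the UUID from blkid (a sketch; the helper function is mine, not part of any tool):

```shell
# Emit an fstab swap entry for a given filesystem UUID.
swap_fstab_line() {
    printf 'UUID=%s\tnone\tswap\tdefaults\t0 0\n' "$1"
}

# On the live system this would append the real entry:
#   swap_fstab_line "$(blkid -s UUID -o value /dev/md0)" >> /mnt/etc/fstab
swap_fstab_line 6a88dd29-dab7-4b27-b6bf-c5f05f2ffe63
```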
    
    
  22. Now it's time to enter the Arch chroot with arch-chroot /mnt. At this point, I'm at step 3.3 of the Arch Linux Installation Guide. It's pretty straightforward, so I'm not going to reproduce each step of the guide. I did have to add mdadm_udev to the HOOKS array in /etc/mkinitcpio.conf so the swap array would be assembled on boot. For the network manager I went with systemd-networkd, since it's installed by default. For the bootloader, I went with GRUB, since this is a BIOS system and it's what I'm most familiar with.

  23. Now it's time to reboot. Before I do, I exit the chroot with Ctrl-D (or exit) and copy the SSH host keys to /mnt/etc/ssh/. Since I did this remotely from my laptop, I also copied the contents of /root/.ssh to /mnt/root/ (mainly my authorized_keys file, so I can log in without a password).
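    The copy can be wrapped in a small helper (my own sketch, shown against scratch directories so it is safe to dry-run; on the live ISO the two roots would be / and /mnt):

```shell
# Carry SSH host keys and root's authorized_keys from the live
# environment into the installed system.
copy_ssh_state() {  # usage: copy_ssh_state <live-root> <target-root>
    mkdir -p "$2/etc/ssh" "$2/root/.ssh"
    cp -a "$1"/etc/ssh/ssh_host_* "$2/etc/ssh/"
    cp -a "$1"/root/.ssh/authorized_keys "$2/root/.ssh/"
    chmod 700 "$2/root/.ssh"
}

# Dry run against scratch trees:
src=$(mktemp -d) dst=$(mktemp -d)
mkdir -p "$src/etc/ssh" "$src/root/.ssh"
touch "$src/etc/ssh/ssh_host_ed25519_key" "$src/root/.ssh/authorized_keys"
copy_ssh_state "$src" "$dst"
ls "$dst/etc/ssh"
rm -rf "$src" "$dst"
```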

RESULTS

Success!  It booted on the first try.  Three things I forgot to do were to set up the systemd-networkd configuration file, enable systemd-resolved, and enable sshd:

/etc/systemd/network/20-wired.network

[Match]
Name=enp3s0

[Network]
DHCP=yes
# systemctl restart systemd-networkd.service
# systemctl enable --now systemd-resolved.service
# ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
# systemctl enable --now sshd.service

Now, it's ready to be managed remotely!

POST INSTALLATION THOUGHTS

If I were planning on using this system as an actual storage device, I probably should have been more mindful about the Btrfs setup, namely creating a dedicated subvolume for the root filesystem instead of installing into the top-level subvolume.  Since I'm not using this system as primary data storage, though, it doesn't really matter.  Assuming the disks are healthy enough, I will use it as a staging area for testing restores of my Borg Backup.

My first SMART report suggests these disks may have a firmware bug that could lead to data loss.  I do remember updating the firmware several years ago, precisely because of this bug.  Unfortunately smartd doesn't seem to be able to tell whether the patched firmware was applied.  It does provide a couple of links to more information; I'll need to read them and see if I can do anything about it.

NEXT STEPS

First order of business is to review the Borg Backup documentation linked above, and determine exactly how restores work. I want to make sure I can restore all of the systems I've been backing up to my Dell R730 (which I've named tennessine, in honor of my move to Tennessee). There's about 2.6T of deduplicated, compressed data on tennessine, about 1.1T less than the free space on this QNAP NAS (sodium, chemical symbol Na). Restored, that data will be much larger, so I'll have to restore one host at a time. This is a process which will take several weeks, time permitting.

Once I'm comfortable restoring data on my local network using Borg, I can begin backing up things in the cloud. I've chosen Backblaze for this purpose, but I'm not quite there yet. I haven't decided if I will continue using the duplicity tool which I've used for Backblaze thus far, or if I will use something else. Perhaps I can wait for a Borg->Backblaze integration, maybe using something like sshfs to link Borg to Backblaze (as far as I'm aware Borg can only work on locally mounted filesystems, or over SSH/SFTP). To my knowledge the tooling isn't there yet, but I will investigate later once I'm ready.