My new semi-DIY NAS build

Continuing the discussion from Non flashlight products you recommend.:

This probably won’t be interesting to the majority of readers, but for those who are in the market it’s an option worth a look.

I finally bit the bullet and upgraded my NAS. I was using an ancient and incredibly slow LaCie NAS that the manufacturer threw over the fence and quickly stopped releasing OS updates for. But I gave it a new lease on life by following some sketchy wiki instructions to hack the bootloader and do a headless net install of an old version of Debian Linux, then worked through several major Debian version upgrades, and finally installed OpenMediaVault on top. So thanks entirely to the open source ecosystem, the software situation on the old LaCie NAS was good. But it was extremely slow, and S.M.A.R.T. was reporting a few bad sectors on both disks (which I had running in JBOD mode with Btrfs RAID1 on top of them).
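(For the curious, checking for that kind of S.M.A.R.T. trouble looks something like this with smartmontools; /dev/sda is just a placeholder for whichever disk you’re querying:)

```bash
# Overall health verdict for the disk:
sudo smartctl -H /dev/sda
# The attributes that flag failing sectors:
sudo smartctl -A /dev/sda | grep -i -E 'reallocated|pending|uncorrect'
```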

I’ve been looking for a replacement for years, but I’ve never been too keen on getting stuck with vendor lock-in again by going with an out-of-the-box commercial NAS. Once I have it all up and running I basically want to set it and forget it, but I do want enough flexibility to install random Linux packages and/or make arbitrary low-level system configurations without a bunch of layers of software abstraction getting in my way. But as far as hardware goes I didn’t want a mess of cables or external drives, and in general I’m really bad with hardware. Of course I could simply use an old PC or laptop, but the power consumption would be considerably higher than with an ARM chip.

So I finally went with this semi-easy DIY 2-bay NAS enclosure for the Raspberry Pi 4:

I also bought a 4GB Raspberry Pi 4 for $63, which is probably a bit marked up, but I didn’t want to deal with sketchy unknown vendors.
https://www.raspberrypi.com/products/raspberry-pi-4-model-b/

For the OS storage I got a 128GB Samsung high-endurance MicroSD card for only $12, which seemed like a great deal for a known-brand card rated for heavy write workloads.

And finally I decided to go for two 1TB SanDisk Ultra 3D SSDs, which appear to be solid mid-grade drives with good write durability. I’m excited to have a NAS with SSDs, thanks to the much lower power usage and lack of spin-up time compared to spinning rust. I have no idea why, but these drives were going for $45 each when I bought them; they’re currently up to $85 each, so I guess I got really lucky.


So I received everything and started to put it together with this video:

Since I’m really bad with hardware I had to watch it many times, and I found that many parts of it weren’t clear and generally went way too fast for me. The Geekworm NASPi was missing one of the 4 screws for securing the drives to the board, which was irritating, though not a big deal since the SSDs weigh next to nothing compared to HDDs. Also, the DC power adapter that I purchased as part of a kit with the NASPi from Amazon arrived broken, hacked up into two pieces with pigtail connectors by the previous owner, but I’ll chalk that up to typical Amazon stupidity. (They refunded me the price of the power adapter.) Fortunately the output from the old LaCie NAS power adapter also works fine with the NASPi.

So once the assembly was over I could finally get into the more enjoyable part for me. I used the Raspberry Pi Imager to flash the latest minimal no-GUI image to the MicroSD, and I was happy to see that the Imager allowed me to set the username and password or SSH keys, since I don’t have any external HDMI monitor or keyboard and I would only have SSH access. It booted right up and the drives powered on correctly, and I just had to look at the DHCP leases in my router to find the IP address, which I later changed to a fixed IP. Next I installed OpenMediaVault, and I also loosely followed HTGWA: Create a ZFS RAIDZ1 zpool on a Raspberry Pi | Jeff Geerling to create a ZFS pool that mirrors the two disks for fault tolerance. I’m quite excited to finally have my NAS data on a ZFS filesystem, and within the constraints of my relatively slow LAN performance seems more than acceptable with the 4GB Raspberry Pi 4. The SMB shares connect and populate the file lists almost instantly, and my Borg backup repositories on the new NAS are much more performant than with the old dinosaur.
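For anyone following along, the pool setup itself boils down to something like this (the by-id names are placeholders; Jeff’s guide covers the caveats):

```bash
# Create a mirrored pool from the two SSDs; /dev/disk/by-id paths survive
# device reordering better than /dev/sdX letters (names are placeholders):
sudo zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/ata-SanDisk_SSD_A /dev/disk/by-id/ata-SanDisk_SSD_B
sudo zfs create tank/shares   # dataset that gets exported over SMB later
zpool status tank             # both sides of the mirror should show ONLINE
```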

So I think this is a pretty cool option. I like how the Raspberry Pi supports a large number of operating systems, including even FreeBSD, and it’s such a massively popular device that long-term community support and development is practically guaranteed. I also really like the modularity that lets me fairly easily and inexpensively swap out any component in the future: 1) the Raspberry Pi, 2) the MicroSD card for the OS, 3) either or both of the data drives.


Update: 2025-06-03

So far I don’t really have any complaints with the hardware; the Raspberry Pi 4 and storage devices are working well, and the Geekworm components have been quiet and reliable.

However, on the software side there was a fairly major issue. Since I wanted a nice NAS-focused web administration interface I had chosen OpenMediaVault, which can be installed from a repository of packages on top of a Debian Stable system. Since the Raspberry Pi OS is also based on Debian I assumed that it would be a fairly vanilla Debian Stable system plus some out-of-the-box tools and hardware compatibility tweaks for the Pi, so that was the base OS that I originally chose to flash to the storage card. However, after the installation and configuration of the NAS I later realized that Raspberry Pi OS uses its own Linux kernel version and update mechanism, and in general it differs quite a bit from the upstream Debian Stable system that I prefer to run.

This came back to bite me 6 months later when the Raspberry Pi OS repository upgraded to a new major Linux kernel version. The problem is that I am using ZFS for the data storage pool. I knew beforehand that Debian does not ship pre-compiled ZFS modules, only supporting “DKMS” modules that have to be compiled locally on the system, with matching kernel headers, for each kernel update. On the Pi the ZFS module compilation takes forever, which isn’t ideal, but the worst part is that Raspberry Pi OS doesn’t promptly release matching kernel headers, and/or the Debian Stable version of the ZFS driver won’t compile against the updated kernel in Raspberry Pi OS. So 6 months after the initial installation, and again last month, I was faced with a system that had no access to my ZFS storage pool after applying standard system updates from the Raspberry Pi OS repositories. The most recent failure was worse than the first, as I could not find any combination of kernel, kernel headers, and ZFS packages that would compile the ZFS module for the current state of Raspberry Pi OS. So after a considerable number of failed attempts I decided to ditch Raspberry Pi OS and look for something different.
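For context, this is roughly the DKMS dance that has to succeed after every kernel update on a Debian-based system (package names are Debian’s; on Raspberry Pi OS the headers come from raspberrypi-kernel-headers instead):

```bash
# DKMS needs headers matching the running kernel, or there is nothing
# to compile against:
sudo apt install linux-headers-$(uname -r) zfs-dkms zfsutils-linux
dkms status                                  # zfs should show as built for the running kernel
sudo modprobe zfs && sudo zpool import -a    # reload the module, re-import pools
```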

My first inclination was to just use vanilla Debian Stable for ARM64 from https://raspi.debian.net with some tweaks from https://github.com/emojifreak/debian-rpi-image-script/blob/main/debian-rpi-sd-builder.sh manually applied after flashing it to the SD card. This resulted in a bootable Debian Stable system that was nice and lightweight. However, the Geekworm NAS scripts for the power switch and fan control didn’t work with Debian’s version of gpiod. After a lot of trial and error I was eventually able to hack together working power and fan scripts with some help from https://github.com/aardzhanov/naspi35_xscript. This would have been a viable and reliable option as a base system for the Raspberry Pi, and on top of it I could have installed OpenMediaVault like I had before.
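In case anyone else fights the same battle, the core of what I ended up with is a loop along these lines (this assumes the libgpiod v1 command-line tools; the GPIO line number and temperature threshold are placeholders, since the real NASPi wiring and fan PWM handling are more involved):

```bash
#!/bin/bash
# Toy fan-control loop: switch a fan GPIO on above a temperature
# threshold. Line 18 and 55 °C are placeholder values, not the real
# NASPi pinout.
FAN_LINE=18
while true; do
    temp=$(awk '{ print int($1 / 1000) }' /sys/class/thermal/thermal_zone0/temp)
    if [ "$temp" -ge 55 ]; then
        gpioset gpiochip0 "$FAN_LINE"=1   # fan on
    else
        gpioset gpiochip0 "$FAN_LINE"=0   # fan off
    fi
    sleep 10
done
```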

But I got to thinking that ideally I don’t want to be waiting around for the system to compile the ZFS modules locally with the Raspberry Pi’s weak CPU. So I started looking at other Linux distros with good support for the Raspberry Pi and native pre-compiled ZFS kernel modules straight from their package repositories. I also realized that it would be very useful to be able to boot from a USB storage device like on a normal desktop PC and not be limited to booting from the difficult-to-remove SD card inside the Geekworm NAS case for the Raspberry Pi. It turns out that it’s easy to add USB boot support by using the graphical Raspberry Pi Imager utility to flash a temporary updater that automatically boots off the SD card and installs an EEPROM update, as described here: GitHub - raspberrypi/rpi-eeprom: Installation scripts and binaries for the Raspberry Pi 4 and Raspberry Pi 5 bootloader EEPROMs. I then used the Raspberry Pi Imager to flash the Alpine Linux installer onto a USB stick and then manually added this tweak to the USB stick for installation onto the SD card via SSH on a headless system like mine.
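For reference, the same EEPROM update can also be checked and applied from a running Raspberry Pi OS system with the rpi-eeprom tools (other distros may package them differently):

```bash
vcgencmd bootloader_version   # firmware currently in the EEPROM
sudo rpi-eeprom-update        # report whether a newer image is available
sudo rpi-eeprom-update -a     # stage the update; applied on the next reboot
rpi-eeprom-config             # dump the config; e.g. BOOT_ORDER=0xf41 means
                              # "try SD, then USB, then start over"
```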

So Alpine Linux, with its minimalist design and pre-compiled ZFS modules, would also be a good option for the Raspberry Pi, but options for a graphical administration interface through a web browser are limited on Alpine. I’m capable of administering a Linux system via the command line, but I don’t enjoy doing so, and I don’t want that hassle for a NAS that needs to just work as an appliance. So in the end I settled on Ubuntu 24.04 LTS.

It can also be easily flashed onto the SD card via the Raspberry Pi Imager, and the Geekworm NAS scripts for the power switch and fan control basically just work out-of-the-box. Unfortunately the default Ubuntu image for the Pi is quite bloated, most notably due to the Snap packaging system that Ubuntu promotes, but I was able to remove it along with quite a few other unneeded packages and services. This left me with a fairly lean base system with solid pre-compiled support for ZFS.

For the graphical administration component, OpenMediaVault doesn’t support Ubuntu, but I had been testing Webmin and found the most recent versions to be much more comprehensive and pleasant to use than the older versions I had tried many times before. (I chose to install Webmin to /usr/local/webmin/ via its manual installation script, by downloading and decompressing a .tar.gz file, because I want to be able to easily upgrade to the latest versions that come directly from Webmin without worrying about the Ubuntu package dependencies that would come with the DEB-packaged version.)

Although it’s not an out-of-the-box NAS experience, I was able to fairly easily install and configure the Samba server the way I wanted it to work via Webmin, and I also installed the borgbackup package from the Ubuntu repos for backing up various computers onto a repository on the NAS. Webmin also allows for configuring automatic unattended system upgrades and/or manual package management operations from its interface, as well as other common system administration and monitoring tasks. So overall I’m pretty happy with how the new Ubuntu+Webmin NAS is running. It’s taken a lot of experimentation with all the options to arrive at this ideal system for me, but if I or anybody else wanted to set up a similar system again, it could be done in a minimal number of steps.
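For anyone wanting to reproduce the software side, it condenses to roughly this (the Webmin version number is a placeholder for whatever is current; the rest is stock Ubuntu packaging):

```bash
# Debloat: remove Snap, then pull in the NAS essentials
# (Ubuntu ships pre-built ZFS, so no DKMS compiles):
sudo apt purge snapd && sudo apt autoremove --purge
sudo apt install zfsutils-linux samba borgbackup

# Webmin from the upstream tarball into /usr/local/webmin
# (version number below is a placeholder):
wget https://prdownloads.sourceforge.net/webadmin/webmin-2.111.tar.gz
tar xzf webmin-2.111.tar.gz && cd webmin-2.111
sudo ./setup.sh /usr/local/webmin   # interactive; web UI on port 10000 by default
```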

10 Thanks

Nice.

My NAS is massive overkill and probably worthy of its own thread. It’s some repurposed server hardware in a 4U chassis with hotswap bays running TrueNAS with 2 zpools for >60TB usable storage, as well as a Kubernetes cluster where I run things like my media server, git server, and random projects.

2 Thanks

Sweet! I really wish TrueNAS supported the Pi. I love OpenMediaVault, but TrueNAS is the bee’s knees.

Yeah, I understand why they don’t in terms of the hardware being a bit limiting (although my test setup worked fine in a VM with 4GB of memory before I did it live).

I think one of the key reasons is TrueNAS is so ZFS-centric, and “there are two kinds of ZFS user: Those who use ECC memory, and those who haven’t lost all their data yet”.

I did some pretty heavy research into a homemade NAS recently. I would be using it to back up lossless copies of my Blu-ray collection in the event that Blu-ray players become overly expensive or scarce in the future.

For that I’d want RAID 1+0 or RAID 6. I’d also want 50+ terabytes.

I was looking at a JONSBO N1 case, and something like this for the motherboard.


Ultimately I concluded that the cost isn’t justified yet, but maybe sometime in the future.

:wink: Ouch!

This is actually my first ZFS pool. I like OpenMediaVault enough to prefer it over FreeBSD, despite the fact that the Debian base that OMV needs only supports ZFS via a DKMS module. I don’t really like having my filesystem depend on a DKMS module, but Debian Stable doesn’t update very frequently, and the OS itself is on EXT4, so I’ll still be able to get in and fix any breakage no matter what.

Interesting. Are there any case options available for that?

Personally my NAS needs are very basic: I just need around 1TB of storage with 1 redundant drive. My main priorities are software independence and hardware modularity.

If you want 60+TB, I’d be very careful with RAID5 or 6; at that point you’re just asking for a multi-disk failure, because when one drive fails, if the array is under any normal read/write load you might be looking at a rebuild time measured in weeks, during which a second drive failure will result in data loss.
You can get 60TB with a wide RAID10 array, although at that scale I’d at the very least want a hot spare or two, and possibly a 3-wide mirror instead of 2-wide; if you want to use RAID6 specifically, then I’d probably create multiple smaller arrays to keep rebuild times manageable. Also, either make sure your controller does a background scrub, or implement one yourself.
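To put rough numbers on the rebuild window (both figures below are assumed round numbers, not measurements):

```bash
# Best-case resilver of a 20 TB drive at a sustained 150 MB/s:
echo $(( 20 * 10**12 / (150 * 10**6) / 3600 ))   # => 37 (hours)
# Throttle that to ~10% of the drive's bandwidth on a busy array and
# you're into weeks, which is exactly the window for a second failure.
```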

Good read: How multi-disk failures happen - SysAdmin1138 Explains

For storing unimportant data an SSD is fine, but I’d rather stick to CMR drives.
Mike

1 Thank

Yeah, SMR drives are an incredibly bad idea for any kind of RAID.

3 Thanks

The motherboard is Mini-ITX form factor so it should fit in any Mini-ITX case. I was looking at the JONSBO only because it was very compact and I was trying not to build a mammoth machine if possible.

Are you acquainted with PCPartPicker?

1 Thank

I’ve never used it personally, as I’m really not into building stuff from scratch.

It’s a tough decision. A RAID 10 array can handle multiple failed drives as long as they are not part of the same mirrored pair. RAID 6 can handle any two, I believe? But more than two and it is kaput.

Seems like a good idea to buy many different brands and models (of hard drive)

Well, PCPartPicker will make building easier because it filters out incompatible parts. In this case, that oddball motherboard I found doesn’t show up on the website, but you can still go to Products > Cases and then filter for Mini-ITX to see what is available.

1 Thank

Awesome setup! And I think you may have underestimated how many readers will find it interesting.

SBCs are great for their practicality, small size, and low power draw. I got an ODROID-HC2 (discontinued now) years ago for a very simple backup/NAS solution, with Armbian+OpenMediaVault and BorgBackup. There’s even a config file option (in Armbian, I think) to schedule automatic updates and reboots, which has made it a literal ‘set it and forget it’ setup.
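(If anyone wants the same thing on plain Debian or Armbian and can’t find the distro-specific toggle, I believe the underlying mechanism is the stock unattended-upgrades package; a sketch:)

```bash
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # turn on the periodic runs
# Reboots can then be scheduled in /etc/apt/apt.conf.d/50unattended-upgrades:
#   Unattended-Upgrade::Automatic-Reboot "true";
#   Unattended-Upgrade::Automatic-Reboot-Time "04:00";
```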

Literally, I’ve forgotten, lol. I need to refresh my memory because I will need to upgrade to a newer Armbian+OMV release… at some point, for security considerations. It’s still working fine for now. OSS means having lots of choices and not condemning the hardware just because the manufacturer no longer supports it. Lots of appreciation and support is warranted for the volunteers who maintain valuable projects like Armbian, OMV, and others.

The HC2 was appealing because it was made specifically for better network and storage performance compared to general-purpose SBCs of the time. The tradeoff is that it is headless with no video output. I was shocked that the WD 10TB HDD was benchmarking at over 200MB/s! HDDs have come a long way. SSDs are great if you can swing it, but HDDs are still a great fit for the right setup, as other factors are often the bottleneck.
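(If you want to reproduce that kind of figure, a crude sequential-read sample is enough; /dev/sda is a placeholder:)

```bash
# ~3-second buffered sequential read benchmark:
sudo hdparm -t /dev/sda
```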

Two downsides of the HC2 are 1) its processor is ARMv7, i.e. 32-bit, which is the biggest factor limiting its longevity as many distros are dropping 32-bit support, and 2) it has no wake-on-LAN support, which is a bit of a bummer. The next purchase will surely address those points. The rapid evolution and increased selection of SBCs and micro computers opens up lots of interesting possibilities.

1 Thank

Interesting project, sb. I think I will stay with my Synology DS218+ for the foreseeable future. It does what we ask of it. It has been a reliable device for me, and every so often I make separate backups to an external standalone 3.5" HDD. Our son runs a similar setup and we try to keep copies of each other’s data at each other’s homes.

1 Thank

I was all into NASes and then said screw it and just went USB, as I only have 1 drive active/powered at most, and treat them as WORM drives (write once, read many). It’s only when offlining stuff, or wanting to retrieve something, that I power up any of the drives.

I’m not terribly concerned with speed, just storage/density, so SMR is fine, at least for me. They’re more… archival.

What I’d like to do is just get 2-3 Uberdrives, copy what I got on “smaller” drives, basically just migrating everything to the big’uns, if only as safe-copies. Just copying 2/4/8TB at a time would take weeks, probably.

I’m somewhat interested in having my own NAS.
If I were to have one, it would be really simple and probably just need one hard drive, and I would use it to share movies with my family. :slightly_smiling_face:

Until recently, my “NAS” was a Raspberry Pi 4 with a 4TB HDD attached.

I’ve since moved to running Unraid on a box assembled from a variety of “hand-me-down” parts - friends, family, and my own old PC as donors.

This is definitely a solid option for functionality and the price is right. I just didn’t want to deal with the additional noise, size, and power consumption.

2 Thanks

Yeah. You can have hot spares, and on some controllers you can add a second parity set, but in general RAID6 is 2-failure tolerant thanks to its dual parity. The issue is that with large arrays you end up with rebuild times measured in weeks, and weeks of intensive 100% I/O on a drive that’s ageing out (especially desktop grade) make additional drive failures way more likely. With RAID10, the default is a 2-wide mirror (i.e. it always survives a single failure, and can survive further failures in other mirror pairs, but not the loss of the surviving partner in an already-degraded pair), but with some controllers it is possible to create it with 3 or more mirrors. Also, as soon as there is even a single failure, RAID6 suffers massive performance degradation due to the need to read from every remaining array member to reconstruct data for every read.

OTOH, once a volume starts to get larger than 20-30TB, I would never use anything but ZFS anyway, just because the failure modes start to get very complex: the chance of additional failures during a rebuild increases, and the damage done by an array failure also increases. >30TB I would only trust ZFS, and then only with RAIDZ2, RAIDZ3, or mirror vdevs.
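For concreteness, those layouts look like this in zpool terms (device names are placeholders):

```bash
# RAIDZ2: any two of the six disks can fail:
sudo zpool create tank raidz2 sda sdb sdc sdd sde sdf
# Or striped mirror vdevs (the ZFS analogue of RAID10), plus a hot spare:
sudo zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf
sudo zpool add tank spare sdg
```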

1 Thank