My new semi-DIY NAS build

Continuing the discussion from Non flashlight products you recommend.:

This probably won’t interest the majority of readers, but for those who are in the market it’s a compelling option.

I finally bit the bullet and upgraded my NAS. I was using an ancient and incredibly slow LaCie NAS that the manufacturer threw over the fence and quickly stopped releasing OS updates for. But I gave it a new lease on life by following some sketchy wiki instructions to hack the bootloader and do a headless net install of an old version of Debian Linux, then upgraded through several major Debian releases, and finally installed OpenMediaVault on top. So thanks entirely to the open source ecosystem, the software situation on the old LaCie NAS was good. But it was extremely slow, and S.M.A.R.T. was reporting a few bad sectors on both disks (which I had running in JBOD mode with Btrfs RAID1 on top of them).
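In case anyone wants the same arrangement on other hardware, the Btrfs side of that old setup is basically a single command. A minimal sketch, with /dev/sda and /dev/sdb as stand-ins for your actual disks:

```bash
# Create one Btrfs filesystem spanning both disks, with both data and
# metadata mirrored (RAID1 profile), then mount via either member device.
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb
sudo mkdir -p /srv/data
sudo mount /dev/sda /srv/data
```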

I’ve been looking for a replacement for years, but I’ve never been too keen on getting stuck with vendor lock-in again by going with an out-of-the-box commercial NAS. Once I have it all up and running I basically want to set it and forget it, but I do want enough flexibility to install random Linux packages and/or make arbitrary low-level system configurations without a bunch of software abstraction layers getting in my way. On the hardware side, though, I didn’t want a mess of cables or external drives, and in general I’m really bad with hardware. Of course I could simply use an old PC or laptop, but the power consumption would be considerably higher than with an ARM chip.

So I finally went with this semi-easy DIY 2-bay NAS enclosure for the Raspberry Pi 4:

I also bought a 4GB Raspberry Pi 4 for $63, which is probably a bit marked up, but I didn’t want to deal with sketchy unknown vendors.

For the OS storage I got a high-endurance 128GB Samsung MicroSD card for only $12, which seemed like a great deal for a known brand rated for heavy write workloads.

And finally I decided to go with two 1TB SanDisk Ultra 3D SSDs, which appear to be solid mid-grade drives with good write endurance. I’m excited to have a NAS with SSDs due to the much lower power usage and zero spin-up time compared to spinning rust. I have no idea how, but these drives were going for $45 each when I bought them; currently they’re up to $85 each, so I guess I got really lucky.


So I received everything and started to put it together with this video:

Since I’m really bad with hardware I had to watch it many times, and found that many parts of it weren’t clear and generally went way too fast for me. The Geekworm NASPi was missing one of the 4 screws for fastening the drives to the board, which was irritating, though not a real problem since the SSDs weigh next to nothing compared to HDDs. Also, the DC power adapter that I purchased as part of a kit with the NASPi from Amazon arrived broken, hacked up into two pieces with pigtail connectors by the previous owner, but I’ll chalk that up to typical Amazon stupidity. (They refunded me the price of the power adapter.) Fortunately the output from the old LaCie NAS power adapter also works fine with the NASPi.

Once the assembly was over I could finally get to the part I actually enjoy. I used the Raspberry Pi Imager to flash the latest minimal no-GUI image to the MicroSD, and I was happy to see that the Imager let me preset the username and password or SSH keys, since I don’t have an external HDMI monitor or keyboard and would only have SSH access. It booted right up and the drives powered on correctly; I just had to look at the DHCP leases in my router to find the IP address, which I later changed to a fixed one. Next I installed OpenMediaVault, and I also loosely followed HTGWA: Create a ZFS RAIDZ1 zpool on a Raspberry Pi | Jeff Geerling to create a ZFS pool that mirrors the two disks for fault tolerance. I’m quite excited to finally have my NAS data on a ZFS filesystem, and within the constraints of my relatively slow LAN, performance seems more than acceptable on the 4GB Raspberry Pi 4. The SMB shares connect and populate file lists almost instantly, and my Borg backup repositories on the new NAS are much more performant than on the old dinosaur.
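For anyone who wants to replicate this, the pool-creation step itself boils down to a couple of commands, roughly what the Geerling guide walks through. The pool name tank and the by-id paths below are placeholders for your own drives:

```bash
# Mirror the two SSDs into a single pool; ashift=12 aligns writes to
# 4K sectors, which is what you want on modern flash.
sudo zpool create -o ashift=12 tank mirror \
  /dev/disk/by-id/ata-SanDisk_SSD_1 \
  /dev/disk/by-id/ata-SanDisk_SSD_2

# Confirm both mirror members are ONLINE.
zpool status tank
```

Using /dev/disk/by-id paths instead of /dev/sdX keeps the pool stable if the kernel reorders devices between boots.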

So I think this is a pretty cool option. I like how the Raspberry Pi supports a large number of operating systems, including even FreeBSD, and it’s such a massively popular device that long-term community support and development is all but guaranteed. I also really like the modularity that lets me fairly easily and inexpensively swap out any component in the future: 1) the Raspberry Pi, 2) the MicroSD card holding the OS, 3) either or both of the data drives.

10 Thanks

Nice.

My NAS is massive overkill and probably worthy of its own thread: it’s repurposed server hardware in a 4U chassis with hotswap bays, running TrueNAS with 2 zpools for >60TB usable storage, as well as a Kubernetes cluster where I run things like my media server, git server, and random projects.

2 Thanks

Sweet! I really wish TrueNAS supported the Pi. I love OpenMediaVault, but TrueNAS is the bee’s knees.

Yeah, I understand why they don’t in terms of the hardware being a bit limiting (although my test setup worked fine in a VM with 4GB of memory before I did it live).

I think one of the key reasons is TrueNAS is so ZFS-centric, and “there are two kinds of ZFS user: Those who use ECC memory, and those who haven’t lost all their data yet”.

I did some pretty heavy research into a homemade NAS recently. I would be using it to back up lossless copies of my Blu-ray collection in the event that Blu-ray players become overly expensive or scarce in the future.

For that I’d want RAID 1+0 or RAID 6. I’d also want 50+ terabytes.
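Quick usable-capacity math for those two layouts, with 18 TB drives purely for illustration:

```
RAID 6:    usable = (N - 2) x drive size  ->  5 x 18 TB = 54 TB usable
RAID 1+0:  usable = (N / 2) x drive size  ->  6 x 18 TB = 54 TB usable
```

So at this scale RAID 1+0 costs one extra drive, and the gap widens as the array grows.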

I was looking at a JONSBO N1 case, and something like this for the motherboard.


Ultimately I concluded that the cost isn’t justified yet, but maybe sometime in the future.

:wink: Ouch!

This is actually my first ZFS pool. I like OpenMediaVault enough to prefer it over FreeBSD, despite the fact that the Debian base OMV needs only supports ZFS via a DKMS module. I don’t really like my filesystem depending on a DKMS module, but Debian Stable’s kernel doesn’t change very often, and the OS itself lives on ext4, so I can always get in and fix any breakage no matter what.
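For anyone following along, the DKMS side is just a few packages from Debian’s contrib section, and it’s easy to verify that the module survived a kernel update. A sketch assuming a stock Debian Stable base (Raspberry Pi OS ships its kernel headers under a different package name):

```bash
# Headers must match the running kernel or the DKMS build will fail.
sudo apt install linux-headers-$(uname -r) zfs-dkms zfsutils-linux

# After any kernel upgrade, check that the module was rebuilt and loads.
dkms status | grep zfs
sudo modprobe zfs && zpool status
```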

Interesting. Are there any case options available for that?

Personally, my NAS needs are very basic: I just need around 1TB of storage with 1 redundant drive. My main priorities are software independence and hardware modularity.

If you want 60+TB, I’d be very careful with RAID5 or 6; at that point you’re just asking for a multi-disk failure, because when one drive fails, if the array is under any normal read/write load you might be looking at rebuild times measured in weeks, and if a second drive fails during that window you lose data.
You can get 60TB with a wide RAID10 array, although at that scale I’d at the very least want a hot spare or two, and possibly 3-wide mirrors instead of 2-wide; if you want to use RAID6 specifically, I’d probably create multiple smaller arrays to keep rebuild times manageable. Also, either make sure your controller does a background scrub, or implement one yourself.
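Some back-of-envelope numbers to make that concrete, assuming 20 TB drives (the throughput figures are illustrative):

```
Idle rebuild:   20 TB / 150 MB/s ≈ 133,000 s ≈ 1.5 days
Under load:     20 TB /  25 MB/s ≈ 800,000 s ≈ 9+ days
```

Nine-plus days of full-tilt reads on the surviving drives is exactly when the next failure likes to happen.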

Good read: How multi-disk failures happen - SysAdmin1138 Explains

For storing unimportant data an SSD is fine, but I’d rather stick to CMR drives.
Mike

1 Thank

Yeah, SMR drives are an incredibly bad idea for any kind of RAID.

3 Thanks

The motherboard is Mini-ITX form factor so it should fit in any Mini-ITX case. I was looking at the JONSBO only because it was very compact and I was trying not to build a mammoth machine if possible.

Are you acquainted with PCPartPicker?

1 Thank

I’ve never used it personally, as I’m really not into building stuff from scratch.

It’s a tough decision. A RAID 10 array can handle multiple failed drives as long as they are not part of the same mirrored pair. RAID 6 can handle any two, I believe? But more than two and it is kaput.

Seems like a good idea to buy many different brands and models (of hard drive).

Well, PCPartPicker will make building easier because it filters out incompatible parts. In this case, that oddball motherboard I found doesn’t show up on the website, but you can still go to Products > Cases and then filter for Mini-ITX to see what is available.

1 Thank

Awesome setup! And I think you may have underestimated how many readers will find it interesting.

SBCs are great for their practicality, small size, and low power draw. I got an ODROID-HC2 (discontinued now) years ago for a very simple backup/NAS solution, with Armbian+OpenMediaVault and BorgBackup. There’s even a config file option (in Armbian, I think) to schedule automatic updates and reboots, which has made it a literal ‘set it and forget it’ setup.

Literally, I’ve forgotten lol. I need to refresh my memory, because I will need to upgrade to a newer Armbian+OMV release at some point, for security reasons. It’s still working fine for now. OSS means having lots of choices and not having to condemn the hardware just because the manufacturer no longer supports it. Lots of appreciation and support is warranted for the volunteers who maintain valuable projects like Armbian, OMV, and others.
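If anyone wants the same hands-off behavior on a Debian-based image, I believe the stock unattended-upgrades mechanism covers it; I don’t recall the exact Armbian-specific option, so treat this as a generic sketch:

```bash
sudo apt install unattended-upgrades

# Then in /etc/apt/apt.conf.d/50unattended-upgrades, enable scheduled
# reboots after updates that require them:
#   Unattended-Upgrade::Automatic-Reboot "true";
#   Unattended-Upgrade::Automatic-Reboot-Time "04:00";
```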

The HC2 was appealing because it was made specifically for better network and storage performance than the general-purpose SBCs of the time. The tradeoff is that it’s headless, with no video output. I was shocked that the WD 10TB HDD was benchmarking at over 200 MB/s! HDDs have come a long way. SSDs are great if you can swing them, but HDDs are still great for the right setup, as other factors are often the bottleneck.
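For anyone wanting to sanity-check their own drive against that number, a quick read-only benchmark (assuming the disk is /dev/sda):

```bash
# Timed sequential reads from the disk (hdparm flushes caches first).
sudo hdparm -t /dev/sda

# Or read 1 GB straight off the device, bypassing the page cache.
sudo dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct status=progress
```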

Two downsides of the HC2 are 1) its processor is ARMv7, i.e. 32-bit, which is the biggest factor limiting its longevity as many distros are dropping 32-bit support, and 2) it has no wake-on-LAN support, which is a bit of a bummer. The next purchase will surely address those points. The rapid evolution and increased selection of SBCs and micro computers opens up lots of interesting possibilities.

1 Thank

Interesting project, sb. I think I will stay with my Synology DS218+ for the foreseeable future. It does what we ask of it and has been a reliable device for me, and every so often I make separate backups to an external standalone 3.5" HDD. Our son runs a similar setup, and we try to keep copies of each other’s data at each other’s homes.

1 Thank

I was all into NASes, then said screw it and just went USB, as I only have 1 drive active/powered at most and treat them as WORM drives (write once, read many). It’s only when offlining stuff, or wanting to retrieve something, that I power up any of the drives.

I’m not terribly concerned with speed, just storage/density, so SMR is fine, at least for me. They’re more… archival.

What I’d like to do is just get 2-3 uberdrives and copy what I’ve got on the “smaller” drives, basically migrating everything to the big’uns, if only as safe-copies. Just copying 2/4/8TB at a time would probably take weeks.

I’m somewhat interested in having my own NAS.
If I were to have one, it would be really simple and probably just need one hard drive, and I would use it to share movies with my family. :slightly_smiling_face:

Until recently, my “NAS” was a Raspberry Pi 4 with a 4TB HDD attached.

I’ve since moved to running Unraid on a box assembled from a variety of “hand-me-down” parts - friends, family, and my own old PC as donors.

This is definitely a solid option for functionality and the price is right. I just didn’t want to deal with the additional noise, size, and power consumption.

2 Thanks

Yeah. You can have hot spares, and on some controllers you can add a second parity set, but in general RAID6 is two-failure tolerant. The issue is that with large arrays you end up with rebuild times measured in weeks, and weeks of intensive 100% I/O on a drive that’s ageing out (especially desktop grade) makes additional drive failures far more likely. With RAID10, the default is a 2-wide mirror (it always survives a single failure, and can survive a second failure on a different mirror pair, but not on the failed drive’s partner), though again, some controllers can create it with 3 or more mirrors. Also, as soon as there is even a single failure, RAID6 suffers massive performance degradation, because every read has to touch every surviving array member to reconstruct the missing data.
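To put rough numbers on the difference, take a 12-drive array (purely illustrative):

```
RAID10 (6 x 2-way mirrors): after one failure, only the dead drive's
partner is fatal, so the next failure kills the array ~1 time in 11,
and the rebuild reads just that one partner disk.

RAID6: any failure beyond the second is fatal, and every rebuild
hammers all surviving members at full tilt for its entire duration.
```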

OTOH, once a volume gets larger than 20-30TB I would never use anything but ZFS anyway, just because the failure modes start to get very complex, and both the chance of additional failures during a rebuild and the damage done by an array failure increase. Above 30TB I would only trust ZFS, and then only with RAIDZ2, RAIDZ3, or mirror vdevs.

1 Thank