Historically, SBC throughput quickly became an issue. They had a single, shared USB bus capped at 480 Mbps in theory; in reality, it was much less. Add in that the network connection also shared that bus, and we had a throughput nightmare.
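To put rough numbers on it (back-of-envelope only, assuming USB 2.0 and ignoring protocol overhead):

```shell
# USB 2.0 signals at 480 Mbit/s, shared by every device on the bus.
usb2_mbps=480
echo "Ceiling: $((usb2_mbps / 8)) MB/s"   # 60 MB/s theoretical, in megabytes
# Bulk-transfer protocol overhead eats a big chunk of that in practice,
# and on the older Pis the Ethernet NIC hung off the same USB bus, so a
# disk read being sent over the network crossed that bus twice.
```

So even before overhead, one spinning disk can saturate the whole bus.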
The main reasons to go with an SBC solution are that they're tiny and low power. When we add storage, we've got cooling and power to consider, which defeats the point of an SBC. In addition, spinning disks make noise and have failures that noobs usually have difficulty resolving. Having noise in the viewing room is bad.
I’m active on Linux forums and we see people with very little technical skill asking for help with their commercial RAID boxes all the time. They come with all sorts of odd setups - like btrfs on top of an mdadm SW-RAID5 array. Why? That makes little sense to me. Commercial storage setups often don’t use standard techniques for the storage design. Btrfs has so many caveats for safe use, I just don’t see the point. But I’m an old LVM2+ext4 guy. Keep storage simple, clean, but enterprise-ready and flexible. A case can be made for ZFS, but growing the pool later - adding more disks, or different-sized disks - can get really complex.
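For anyone who hasn't done LVM2+ext4, the basic flow looks something like this. This is only a sketch - the device names (/dev/sdb, /dev/sdc) and the "media" volume names are placeholders, not my actual layout, and these commands are destructive, so try them on spare disks:

```shell
# Pool a whole disk into a volume group, carve out a logical volume,
# and put ext4 on it.
pvcreate /dev/sdb                   # mark the disk as an LVM physical volume
vgcreate media-vg /dev/sdb          # a volume group can span many disks
lvcreate -n media-lv -l 100%FREE media-vg
mkfs.ext4 /dev/media-vg/media-lv

# The flexibility payoff: when a new disk arrives - any size - grow
# the pool and the filesystem online.
pvcreate /dev/sdc
vgextend media-vg /dev/sdc
lvextend -l +100%FREE /dev/media-vg/media-lv
resize2fs /dev/media-vg/media-lv    # ext4 grows while mounted
```

That last pair of commands is exactly the "more or different-sized disks" case that gets awkward with other setups.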
Then we see people using $35 CPUs from Intel trying to run 50 different “add-ons” provided by the NAS maker. Bad news. All those add-ons just make for a tightly coupled, often unpatched, security nightmare of a solution. And have you looked at the cost of those “cheap” 4-8 disk arrays? They want $400-$1200!
Using a $50 dual-core Pentium CPU (65W peak) + $50 MB w/ Intel iGPU, I built a “NAS” machine from parts for $126 (RAM was $26 for 8G at the time) that supports 6 SATA disks. Old PSU, old case; I just added storage and connected the output to my KVM. It started with a single 4TB Hitachi HDD. That was around 2015. Now it has 10 HDDs connected, a mix of 4TB and 8TB sizes. Because it is mostly for media (video, audio, images, books), I don’t do RAID on it. RAID is for HA, nothing else. I do mirror (via rsync) the SATA-connected storage to USB-connected storage a few times a week for redundancy, but delayed, so if there are any user errors, recovery isn’t hard. USB storage protocols aren’t very robust compared to SATA, SAS, or eSATA. I’d never use USB-connected storage for RAID-anything, but people do.
RAID is only worth the hassle where HA is required. On other systems, I have 2 RAID1 arrays where a failed disk would be really bad for the running virtual machines - but not for media.
Anyway - $126 for a flexible NAS. Currently, that machine is running Ubuntu Server 16.04. It also runs Plex Media Server, primarily because Plex can transcode on-the-fly for devices which cannot handle the codecs in the stored files. That $50, cheap Pentium CPU from 5 yrs ago handles that easily. No ARM-based CPU can do it to my knowledge. It also runs Calibre and provides NFS to the network systems. It does OTA TV recording from 6 network tuners - almost forgot about that. It also has a Nextcloud instance and a Wallabag instance running inside a virtual machine (KVM+libvirt) … and a backup DNS server for the LAN running in an LXD container. What it doesn’t do is video playback. Everything running on it is a headless service; no GPU necessary.
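For reference, serving the media tree over NFS needs little more than one line in /etc/exports. A sketch only - the path, subnet, and options here are assumptions, not my actual config:

```shell
# /etc/exports entry: read-only export of the media tree to the LAN
#   /srv/media  192.168.1.0/24(ro,all_squash,no_subtree_check)
exportfs -ra             # re-read /etc/exports without restarting nfsd
showmount -e localhost   # sanity check: list what is actually exported
```

Read-only plus all_squash keeps the clients from ever modifying the media store; writes only happen locally on the server.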
An RPi v4 solved some of the throughput issues, but we are still left with a system NOT designed to handle lots of I/O the way x86-64 systems are. I have seen 2-disk NAS systems built using ARM SBCs. But after the build, the YouTube guys gave it away. They kept their 8-disk Intel-based NAS because of throughput and networking performance. Let me look for that NAS build. Can’t find it, but I did find a 1-disk, 2.5-inch NAS using an Odroid HC1 as the SBC. https://shop.category5.tv/?product=odroid-hc1-soc-nas has a video w/ parts list. The HC1 has a SATA connector for the 2.5-inch HDD to slide into, plus a GigE NIC. The specs claim 110 MB/s writes and over 900 Mbps networking. That SoC is US$72.
Ah - found the 2-disk thing. It was using a “2x M.2 RAID Enclosure USB 3.1” - basically, what someone doing 8K video editing wants. For playing videos, we don’t need that expensive storage. A slow, cheap, 4TB-12TB HDD is fine for our needs, provided it has external power.