NAS advice

I purchased a Z83 mini-PC, installed DietPi, set up an FTP server, and attached 2 x 8TB USB 3.0 HDDs. One of the HDDs serves as a backup of the other via DietPi-Sync (a sketch of what the sync amounts to follows the links below). It works perfectly for me, and the FTP server is blazingly fast: I can easily stream full Blu-ray rips over powerline Ethernet anywhere in the house.

Z83: https://goo.gl/sg7xEK

8TB HDD: https://goo.gl/6p5ruF

D-Link Powerline: Amazon.com

DietPi X86_64: http://dietpi.com
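
As far as I know, DietPi-Sync is essentially a front end for rsync, so the backup amounts to a one-way mirror. A minimal sketch of the equivalent manual command, assuming hypothetical mount points /mnt/media and /mnt/backup for the two drives:

```bash
#!/bin/bash
# Mirror the primary media drive onto the backup drive.
# --archive preserves timestamps/permissions; --delete removes
# files from the backup that no longer exist on the source.
SRC=/mnt/media/     # hypothetical mount point of the primary HDD
DST=/mnt/backup/    # hypothetical mount point of the backup HDD

rsync --archive --delete --human-readable --info=progress2 "$SRC" "$DST"
```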

We could actually ask the OMV folks if they are interested in working with OSMC on a plugin for a remote Kodi server. The functionality is largely there already; it would mainly be a template for configuring access permissions (so that no one can, for example, delete content through Kodi).

I’m not sure an OSMC NAS was the idea for 2018. Perhaps you could first realise some older ideas, like

  • an OSMC (certified) sound and speaker system.

For a NAS, we (customers) already have a lot of working choices today…

Michael

This would just be a matter of setting up a DB with the right entries so one can check hardware details. I could set that up quite quickly if someone (you folks) provided me with details on what to enter (so I can create a structure).

The problem with OSMC, IMHO, is that Sam won't have the financial capability to buy hardware and certify it. There is just too much out there. I think this will have to be a community effort.

On the money side, that’s right. But perhaps some partnerships could help. On the community effort, you are (very) right.
For myself, when I look at some forum threads: spending my money on hardware and losing my time after Kodi or OSMC updates is a no. What I need is something tested before an update, even if it perhaps still needs some debugging. OK, that’s life.
Perhaps the OSMC community is big enough to interest some hardware vendor in working with the Vero 4K / OSMC (and the store).
Michael

I’ve never had a single issue with my Synology NAS units in ten years. Synology has a neatly integrated DynDNS service so I can access the NAS from anywhere, it takes care of backing up to the cloud, I can set up NFS or SMB easily, and I delegate download duties to it.

I’m coming from the same place as other people here. For my work as a developer I have a Linux setup, but I really don’t want to fiddle with it more than necessary. I can really appreciate how easy Synology's frontend is to use, plus I can SSH in if I want to. But all I’ve ever needed that for is a script that automatically moves downloaded files where I need them (a sketch of the idea is below).
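
Such a mover script doesn't need to be fancy. A minimal sketch, with hypothetical paths and the assumption that the download client drops finished files into a "complete" directory:

```bash
#!/bin/bash
# Move finished downloads into the media library.
# All paths are illustrative; adjust to your own layout.
DONE=/volume1/downloads/complete
LIB=/volume1/media/incoming

mkdir -p "$LIB"

# -mmin +5 skips files touched in the last 5 minutes,
# a crude guard against grabbing files still being written.
find "$DONE" -type f -mmin +5 \( -name '*.mkv' -o -name '*.mp4' \) \
    -exec mv -n {} "$LIB/" \;
```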

Very honestly, I think Sam & Co. are doing an amazing job.
Almost every time someone mentions that their receiver or similar doesn't work, they figure something out to make it work. And the main reason things usually don't work is that the manufacturers don't stick to standards… (or the users didn't read any docs :slight_smile: ).
I have retired some devices (an ATV and some other media players) because some didn't work with my receiver, some not with my Wi-Fi (TKIP phased out), and some not with my TV screen. And, guess what, I never got support for those devices.
I am pretty sure that here, if I bought a new TV or whatever device and it didn't work with my Vero, I could give Sam remote access to my Vero and he would figure something out.

That is what I call “service”… something the other vendors have forgotten (hiding behind premium-rate phone numbers or not answering at all).

And please don’t forget the many other devices OSMC supports, which receive security updates and even new versions of OSMC + Kodi. I admit it sometimes becomes hard to test for every bug/side effect, but that is a part I am willing to play.

If you don’t want to be part of that experience: configure OSMC not to install updates automatically; when a new version comes out, check the forums for issues, and if there are none, apply the update. :slight_smile:
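
For reference, since OSMC is Debian-based, applying an update manually from the command line is the usual apt routine (roughly what the built-in updater does under the hood):

```bash
sudo apt-get update
sudo apt-get dist-upgrade   # OSMC recommends dist-upgrade rather than plain upgrade
sudo reboot
```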

+1 for the HP MicroServer and openmediavault. I’ve been running an HP N54L for years with OMV. These are rock solid because they’re proper servers, not just a bunch of disks in a box. Currently I have 4 x 3TB drives plus a small SSD for the OS. I use SnapRAID for redundancy, with one disk dedicated to parity (a config sketch is below). I feel that is much easier and safer than using ‘proper’ RAID or fancy filesystems.
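
To show how little setup SnapRAID needs, a minimal /etc/snapraid.conf sketch for a layout like this (mount points are illustrative):

```
# One disk holds nothing but parity
parity /mnt/parity/snapraid.parity

# Content files (metadata); keep copies on several disks
content /var/snapraid.content
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content

# The data disks
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
```

After that, a periodic snapraid sync updates the parity and snapraid scrub verifies existing data. Since parity is only updated on demand, it suits media archives that rarely change.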

If you need a volunteer, my hand is up :wink:

I’ve been thinking about building a proper NAS, reading reviews, articles, etc. A friend of mine has had an N54L running since 2011, and he’s absolutely happy with it.
But all the microservers from around that time are out of date now and impossible to buy new online. HP’s ProLiant MicroServer Gen8 was a favourite back then as well, but its successor, the Gen10, is supposed to be a step down in every category. Any recommendations on what to use for OMV with 20 – 40 TB today?

I think that with the cashback offer HP ran, they probably didn’t make money on the base model.

Sam

Whoa, I didn’t even know about that. :open_mouth: With 200 € cashback, that was ridiculous. Even without it, the thing hit the sweet spot in price/performance between small 1/2-bay NAS units like Synology’s and actual workstations.

The sweet spot for drive price per TB is currently 6TB or 8TB drives. With drives that large, you should run RAID-6 so you stay protected during a rebuild. Best performance comes from a data-drive count that is a power of two, which means 6 drives in total (4 data plus 2 parity in RAID-6), giving you 24 or 32TB of storage. For that, you’ll need something with 8 bays.
Even with 10TB drives, you’d be hard-pressed to meet your storage requirement and keep safe redundancy on a 4-bay system.
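
For anyone checking the arithmetic: RAID-6 spends two drives on parity, so usable capacity is (total drives - 2) x drive size. As a trivial sketch:

```bash
# Usable RAID-6 capacity in TB: (total drives - 2) * drive size
raid6_capacity() { echo $(( ($1 - 2) * $2 )); }

raid6_capacity 6 6   # six 6TB drives -> 24 TB usable
raid6_capacity 6 8   # six 8TB drives -> 32 TB usable
```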

On the other hand, if you only store Blu-ray ISOs or remuxes of discs you own, you could skip the redundancy and accept that a failure means re-ripping the original discs whose copies were on the failed drive. Without a re-encode, that goes as fast as you can read them. You could then use a 4-bay unit to reach your storage requirement.

I’ve dealt with storage a lot over the decades. Each solution has its place, but the vast majority are either too expensive or make poor security decisions, IMHO.

Every storage device sold for home use that also connects to the internet has had security failures. ALL OF THEM. They allow third parties access to the data; if I wanted that, I’d use something like Dropbox or S3. A properly configured storage server with SSH can provide scp, sftp, rsync, and sshfs, all of which are pretty flexible for remote storage access. If SSH keys are used, never passwords, they can be more secure AND more convenient.
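
A sketch of the key pieces, with illustrative hostnames and paths: generate a key pair, disable password logins on the server, and the same SSH endpoint then serves all four access methods.

```bash
# On the client: create a key and install it on the server
ssh-keygen -t ed25519 -f ~/.ssh/nas_key
ssh-copy-id -i ~/.ssh/nas_key.pub user@nas.local

# On the server, set in /etc/ssh/sshd_config and restart sshd:
#   PasswordAuthentication no
#   PermitRootLogin no

# All of these now ride over the same hardened channel:
scp bigfile user@nas.local:/data/                  # one-off copy
rsync -a /photos/ user@nas.local:/data/photos/     # efficient sync
sshfs user@nas.local:/data /mnt/nas                # mount as a filesystem
```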

For home storage systems, we can pay some storage vendor $800 or build our own for less than $200. If we build our own, we can decide which commodity HDDs fit our needs, rather than being stuck picking from a vendor's list of supported drive models.

I have a little over 20TB of storage connected to my home NAS. It uses a $50 CPU, a $50 motherboard, and $26 of 8GB DDR3 RAM. That is more than fast enough to run a Plex Media Server and transcode media for different devices around the house, including a Raspberry Pi 2. The case holds 6 disks, and I use a cheap external eSATA array ($99) plus an 8TB USB 3.0 drive for rsync backups of the media. I’m not a fan of RAID for media files; backups are more important than RAID. The OS disk on my NAS failed a few months ago, and because I am religious about backups, it was only a minor inconvenience. ZERO data loss.

Of course, I think this is all pretty easy because of my Unix background. I wouldn’t know where to begin talking a non-Unix person who just wants a point-and-click solution through this stuff. It is either time or money: the $800 NAS appliances vs. the $200 home build show that trade-off clearly. I suppose something based on FreeNAS (or a fork), starting with the 6 HDDs that keep ZFS happy, would be the best idea. That’s a pretty huge buy-in to start with.

I would never use any of the available ARM systems for a NAS, but that is because I want my NAS to be a Plex server transcoding multiple streams concurrently. Without the transcoding requirement, perhaps something with USB 3.0/eSATA and GigE networking like an APU2 or an ODROID would make sense? I saw an interesting NAS build using an ODROID with 2 HDDs somewhere a few months ago, but I found the total build price less cost-effective than my $130 home build.

Oh … and I run Ubuntu Server 16.04 on the NAS box and use LVM + ext4 file systems. Keeping it standard minimizes data issues; some of the expensive NAS devices use slightly proprietary storage layouts, which makes data recovery after any failure harder or next to impossible. LVM can be a great thing or the devil, depending on your setup decisions. I do not use it to span file systems/LVs across different physical disks; the media-center library management handles that already. Basically, I mount primary storage under /D/M1, /D/M2, /D/M3 and /D/Music and /D/T1, T2, T3 … you get the idea. Then I have external devices for backups under … wait for it … /B/M1, M2, M3 and /B/T1, T2, T3. When I add another primary disk it will be M4, and its backup will be /B/M4. Keeping it simple.
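
To make the scheme concrete, the matching /etc/fstab entries might look like this (labels and options are illustrative; each filesystem is labelled to match its mount point):

```bash
# /etc/fstab sketch: primary media storage
LABEL=M1     /D/M1     ext4  defaults,noatime  0  2
LABEL=M2     /D/M2     ext4  defaults,noatime  0  2
LABEL=Music  /D/Music  ext4  defaults,noatime  0  2

# Backup drives: noauto so the external disks mount only on demand
LABEL=B-M1   /B/M1     ext4  noauto,noatime    0  0
LABEL=B-M2   /B/M2     ext4  noauto,noatime    0  0
```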

Regardless, RAID never replaces backups.

RAID is useful for the time when the disk holding the latest episode of your spouse's or child's favorite TV show fails while you are out of town. With RAID, the video is still available; with only a backup, even a good one, you would need to be there to physically replace the failed disk and copy the data back.

With RAID and a hot spare, you’ll know about the failure via e-mail, but won’t need to do anything until you get back home.
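
For the mdadm case, the two moving parts are a spare device in the array and a monitor that knows where to send mail. A sketch with illustrative device names:

```bash
# Create a RAID-6 array from 7 disks: 6 active plus 1 hot spare
sudo mdadm --create /dev/md0 --level=6 --raid-devices=6 \
    --spare-devices=1 /dev/sd[b-h]

# Tell the monitor where to complain; on Debian/Ubuntu this goes
# in /etc/mdadm/mdadm.conf:
#   MAILADDR you@example.com

# The monitor mails you on failure, and the spare is pulled
# into the rebuild automatically
sudo mdadm --monitor --scan --daemonise
```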

Honestly, if you are talking about 3 to 5 data drives, buying one extra and running RAID-5 or RAID-Z really doesn’t cost all that much more, and it adds some big bonus features. For example, ZFS will let you know when bits have changed that shouldn’t have, allowing you to restore the file from backup instead of backing up a corrupt copy.
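
That checksum verification is done by a scrub. A sketch with hypothetical disk names:

```bash
# A raidz1 pool across four disks (one disk's worth of parity)
sudo zpool create tank raidz1 sda sdb sdc sdd

# Walk every block and verify its checksum against what was written
sudo zpool scrub tank

# Any file whose blocks no longer verify is listed here,
# so you know exactly what to restore from backup
sudo zpool status -v tank
```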

In February 2015 I was unfortunate enough to suffer three drive failures within 12 hours, on Seagate 3TB disks. No important data was lost, but a lot of TV was. One would suspect the RAID controller in such a circumstance, but unfortunately the issue was indeed caused by the drives.

I had made significant effort to stagger the purchase of the drives and source them from different retailers to ensure they were not in the same batch.

The Seagate drives I had used turned out to have a ridiculous failure rate (something like 40% after a year) according to Backblaze’s annual stats.

If you want good data redundancy, something like Ceph is a good option. Even then, data integrity is not guaranteed. It has fared well over time, though, unless you’re OVH (but that’s what happens when you don’t update your software).

> RAID is useful for the time when the disk holding the latest episode of your spouse's or child's favorite TV show fails while you are out of town.

I would just ssh in and repoint the library at the backup disk(s). There are issues with RAID that can (and DO) happen which can only be corrected by restoring from backups.
RAID solves 3 issues.
Versioned backups solve at least 1,001 issues, including RAID failures.
I’ve seen RAID-5 rebuilds fail. On disks over 2TB, RAID-5 is NOT recommended.

ZFS is a good thing for a number of reasons. I’ve used it professionally on Solaris, but not on Linux. During my last NAS build, ZFS on Linux had a reputation for needing 1GB of RAM for every 2TB of storage, and I wasn’t willing to make that investment. It turns out that only applies when the more advanced ZFS features, like dedup, are enabled.

To deal with bit rot, I use parity files. I got into that habit when I was using optical storage at work in the 1990s. In theory, optical media have their own error correction to deal with read failures, but in practice I’ve been able to recover data from optical discs thanks to the par2 files. ZFS would certainly be more efficient, and I would trust it as non-boot storage on Linux today.
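
For anyone unfamiliar with par2, the workflow is small. A sketch (file names and the 10% redundancy level are just examples):

```bash
# Create parity covering the files, with ~10% redundancy;
# that allows repairing up to ~10% damage across the set
par2 create -r10 season01.par2 episode*.mkv

# Later: check the files against the parity data
par2 verify season01.par2

# If verify reports damage, reconstruct the bad blocks
par2 repair season01.par2
```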

I’ve been using sheepdog for redundancy on virtual machines. It’s lighter than other block-based redundancy techniques, but it is designed only for VM storage.

Don’t you also need ECC RAM (not that expensive these days)?
For Ceph, deep scrub seems to mitigate bit-rot concerns.

ECC RAM is highly recommended for ZFS deployments.

I use weekly scrubbing on my mdadm RAID storage … which isn’t used for media files. To me, media just isn’t THAT important when I already have a backup plus par2 files.
/usr/share/mdadm/checkarray --all --idle in the root crontab, run weekly, handles it (sketch below). I’ve been on RAID-1 storage since replacing 320GB disks with 2TB disks about 10 years ago.
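
As a concrete crontab sketch (the schedule is illustrative; Debian’s mdadm package also ships a similar periodic job of its own):

```bash
# root crontab: scrub all md arrays every Sunday at 03:00,
# at idle I/O priority so it doesn't disturb other work
0 3 * * 0  /usr/share/mdadm/checkarray --all --idle
```
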
Of course, my setup isn’t perfect even for my needs. :wink:

Seagate … oh, how you have failed me. Until Seagate started making 750GB disks, they were a premier HDD vendor. I have some 320GB Seagates that I still use here in docking stations for quick sneaker-net or as loaner drives. They ran in RAID-5 for 8 years, 24/7/365, without any issues. Then something happened inside Seagate. Every 5 years or so I get tricked into buying consumer Seagate HDDs, and I’ve been burned by each of those purchases. Saving $20 just isn’t worth the hassle of dealing with a disk that fails before the 1-year warranty is up, or just after 14 months; I’ve had both. My credit card would actually have covered the 14-month failure, but I decided I didn’t want another Seagate. Thinking about it now, I should have gotten the replacement and sold it. I never intend to purchase another Seagate disk.

The Backblaze info is good, but it still doesn’t really help with buying new drives, as they have shown that any manufacturer can have a “slump”. One thing they maintain is that “enterprise” drives aren’t worth the money, but I have found that drives with 5-year warranties tend to survive much better than even the 3-year drives.

Where I used to work, we retired a storage system with a Backblaze-style chassis holding 50 drives per 4U. The system had been in use for 3 years, so the drives (WD RE disks) still had 2 years of warranty left. We pulled about 100 disks out of the chassis and used them for various purposes. It is now nearly 9 years since those drives were bought, and only about 10 of the pulled disks have failed, even though most of them run 24/7; power-on time is between 7 and 8 years for a lot of them.

But almost all of those drives are in systems with real cooling, keeping temperatures under 45ºC. That’s one thing many home-built NAS boxes lack, and even the pre-built ones don’t always do well.
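
Drive temperature is easy to keep an eye on with smartmontools, if you want to check your own box:

```bash
# Report the SMART temperature attribute for each drive
for disk in /dev/sd?; do
    echo -n "$disk: "
    sudo smartctl -A "$disk" | grep -i temperature
done
```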
