USB 3.0 DAS Recommendation

For years now I have been adding external Seagate HDDs to my Vero 4K+.
My local Costco would put the 8TB drives on sale from time to time for $120 and if I needed more room I’d pick one up.
These drives have built-in USB 3.0 hubs, so I have been adding them as I go in a daisy chain.
Sounds super shady but it has worked great for me for years.
Recently I acquired a bunch of internal drives for free so it was time to consider a new option.
After much research I decided to get this enclosure:

https://www.amazon.com/Syba-SY-ENC50104-SATA-Non-RAID-Enclosure/dp/B07MD2LNYX

I shucked all my drives and just slid them all into this case and everything just worked.
No need to format, rescan the library, etc.
It’s like the Vero didn’t even know that anything changed.
Everything is now connected to my Vero V on the USB 3.0 port.
Now there is just one USB cable and one power cord where before I had 8 wall warts for power and 8 USB cables daisy-chaining the drives together.

Look at all those wall wart plugs:

All my external shells with the HDDs shucked out of them:

And the new setup:

I don’t usually have my Vero there, and that will definitely not be the home for the HDD enclosure; I just have it like that for now while I’m setting things up and doing various tests.

So far I am very happy with the setup.
It’s hard to find good reviews of this product online.
I had many questions I wanted answers for before I made the purchase but most reviews didn’t address my concerns.
So if you are looking for something like this for your setup and you have any questions you want answered before you make a purchase, just shoot me a question here in this thread and I will do my best to give you an answer.

Worth noting, the Vero 4K+ and Vero V have no problem getting SMART info from the attached drives using smartctl.

A simple SSH command of sudo smartctl -d sat,12 -a /dev/sda will give you the info, and you can run tests like sudo smartctl -d sat,12 -t short /dev/sda.
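If you want to kick off a test on every bay at once, a rough sketch of a loop over the drives (assuming they enumerate as /dev/sda through /dev/sdh; adjust the range to match your setup):

    # Start a short SMART self-test on each drive behind the bridge.
    for dev in /dev/sd[a-h]; do
        sudo smartctl -d sat,12 -t short "$dev"
    done

    # A few minutes later, check the self-test logs:
    for dev in /dev/sd[a-h]; do
        echo "=== $dev ==="
        sudo smartctl -d sat,12 -l selftest "$dev"
    done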

That’s really interesting since you can get rid of multiple enclosures with their own plugs, etc.
Unfortunately here in Italy this big enclosure is available at 555 Euros from Amazon.it which is definitely too expensive.

Looking at https://www.amazon.it/Vassoio-Swappabe-Esterno-Enclosure-SY-ENC50119/dp/B07MD2LNYX but I don’t see it in stock with a price.

Keep an eye on it, it may come back in stock and be sold by Amazon at a reasonable price.

I know here in the US it was very hard to come by in the 2020~2022 era and when it did pop up it was a third party seller trying to get inflated prices.

Once Amazon got it back in stock it started showing up again from $200~$230.

Syba sells products under different brands (like IO Crest).
Would be cool if Sam could get them at cost and sell them through the OSMC store, branded as OSMC, for a little profit, or maybe try a group buy.
I know NAS is popular but people underestimate the DAS crowd.


What do simultaneous transfers look like on that thing since it only has a single USB connection? I’m assuming it wouldn’t cope well with someone setting up a drive pool on it.


All my drives are set up individually.
The source is coming from my laptop SSD and going to the Vero V over my gigabit network.
The target drives are identical 8TB drives formatted as ext4, connected over USB 3.0.
So it looks like the 100~110 MB/s transfer over the network is the real bottleneck here.
If I take the box and plug it directly into my laptop I bet it would be much faster, but I have yet to test that.
It does do UASP.
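If you want to take the network out of the picture and see what the USB path alone can do, a quick test over SSH on the Vero would look something like this (the mount point is a placeholder for wherever your drive is mounted; oflag=direct bypasses the cache so the number reflects the disk and USB path):

    # Write a 4GB test file straight to one of the DAS drives, then clean up.
    dd if=/dev/zero of=/media/DriveA/ddtest.bin bs=1M count=4096 oflag=direct status=progress
    rm /media/DriveA/ddtest.bin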

I did consider drive pooling at one point by exploring SnapRAID + MergerFS but ultimately decided that was a lot of work.
To me, it just makes sense to keep things simple with the Vero.
Nothing simpler than doing a JBOD setup with a DAS!
The way I see it, the Vero’s general use case is a “write once read many” environment so JBOD makes sense.
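For anyone curious what the pooling route would have looked like, a minimal MergerFS sketch is roughly this fstab entry (the mount points are placeholders, and the SnapRAID parity side is not shown):

    # Pool two data drives into one mount; new files go to the branch with the most free space.
    /mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0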

I will be getting another one of these boxes on Wednesday, and I have like forty HDDs laying around, so maybe I’ll throw a few 1TB or 2TB drives in it and play around with pooling just for fun, but I don’t see myself using that as a final platform; it just seems like too much maintenance for a video player.

I have read reviews of people using Windows Storage Spaces with these without issues, so pooling is definitely an option and extremely easy to set up on a Windows machine.
Doing pooling on the Vero is a bit trickier and requires a lot of configuration for no real-world benefit except maybe redundancy, but I’m more of a fan of 1:1 backups, and besides, if a drive fails unexpectedly, I just re-rip from the original discs.
Time consuming and a little annoying but so is rebuilding arrays.
Everyone has their own backup preferences.
Since this is just movies and TV episodes it’s low value data in my book.
If I lost all eight drives I’d be more upset about the cost of replacing them than the effort of re-ripping the files.

While I agree that it doesn’t make sense to do, you actually can do it pretty easily on a Vero (Linux) with btrfs, LVM, or mdraid.
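For example, a quick btrfs sketch (device names are placeholders, and mkfs wipes the drives, so empty disks only):

    # Pool two empty drives into one btrfs filesystem: data spread across both
    # with no redundancy, metadata mirrored on both. This destroys existing data.
    sudo mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc
    sudo mount /dev/sdb /mnt/pool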

Although it is possible to jump through the Storage Spaces GUI and configure a pool quickly and easily, that only works well with a mirrored pool. If someone wanted a parity pool, the GUI method performs very poorly: it only sets up a two-column array, and the formatting is not aligned, so people end up with write speeds that start really fast and then drop to a few MB/s once the Windows cache buffer is exhausted. I actually like Storage Spaces and have been running a pool on both my main file server and my backup destination server for some time now, but the learning curve to understand how to properly set up a parity array wasn’t exactly trivial. I didn’t really want to halve my usable space with a mirror, so I went through the extra effort.

I would also note that Storage Spaces (I can’t speak to the Linux options as I’ve never played with any of them) is not the same thing as your old-school RAID setup. First of all, you are dealing with JBOD, and the pool sits logically higher up on the drive. The drives can be seen by drive monitoring and maintenance programs just as if there were no pool sitting on them. They also don’t much care how they are connected, so they can be moved between different computers, controllers, and even different interface types, and they just work without having to reconfigure anything. If you drop enough drives that the pool breaks (think you’re spanning connection types and one goes down), it just stops and comes back up when you bring the missing drives back online. It is quite flexible and robust.

Personally, I’m not keen on the idea of waiting for a drive to fail and then not having access to that data until I recover from backup, or even worse, recreate it. I’d much rather just get a message that I’m running degraded and throw in a new drive to get my redundancy back (I’m not quite worried enough to run a hot spare yet).

Also do note that any RAID or software storage array is NOT A BACKUP and should never be referred to, or treated as such. A single copy of anything can disappear regardless of the robustness of what it is sitting on.

IMO, if you’re sitting on that many drives, I would use the smaller ones as offline backup locations, perhaps even sitting in a drawer as bare drives with a sticky note on top. I don’t know that it really makes sense to keep anything on the smaller side spun up if you’re storing a large amount of data.

BINGO!
But I keep the BD folders on the small drives disconnected. So there are the original discs, then the BD folder rips on offline drives, then the remux MKVs on live drives in the DAS.

Funny story…
I got some used drives that were 8TB, but when I connected them to my Win10 laptop they showed up as 24TB drives.
I formatted them, changed from MBR to GPT, etc.
No matter what I did, they would report in Windows as having 24TB of free space.
I asked my friend who works at WD and he said he’s never seen that before.
Smaller sizes yes, larger sizes never.
Started thinking maybe the previous owner was trying to run 24TB drive firmware on these 8TB drives, but 24TB drives didn’t even exist yet, only 22TB drives.
Did tons of research online and still no clue.
So I plugged one back into my Win10 machine and started checking it with every HDD tool Windows had.
I had never used Storage Spaces before but I launched it and it detected the drive as 8TB and being part of a 24TB pool.
Woah, now I figured it out.
The previous owner had these three 8TB drives as part of a Storage Spaces pool so each drive would report the entire pool size to Windows despite the other drives belonging to the pool not being plugged in.
So I figured out that I had to connect the drive, load Storage Spaces, use that software to remove the drive from the pool, then it would report as 8TB again in Windows.
Did that with all three drives and got them back to defaults.
Very strange.
So Storage Spaces marks the drives somehow, in a header or something, as being part of a pool and records what size that pool is.
Reformatting the drives does not remove that, and neither does switching between MBR and GPT.
If you didn’t know this, or didn’t have a Win10/11 machine, you’d be stuck with these drives reporting weird sizes.

That’s funny. It makes a special protected partition table, IIRC, that you can remove with diskpart. I think it isn’t all that dissimilar to when OEMs put a hidden recovery partition on a drive. If you wanted to have a play around with Storage Spaces you don’t need to use real drives. Partition Manager can make and mount VHDs, and you can use those to build a storage pool for testing using a minimal amount of space. The performance will be terrible this way even if you put the VHDs on an NVMe, but it is handy as a learning tool and to discover how much usable space you get out of different configurations.
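If anyone hits the same thing, the diskpart route is roughly this from an admin command prompt (double-check the disk number first; clean wipes the whole disk):

    diskpart
    list disk
    select disk 3      <- placeholder number, pick the right one!
    clean              <- removes the partition/pool metadata from that disk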

Here are some tests I performed on various writes to the DAS over USB 3.0.
The DAS is connected to my Win10 laptop and the source files are coming from the laptop’s 256GB NVMe.

Writing to two Seagate IronWolf 8TB drives at the same time:

Writing from one Seagate IronWolf 8TB to the other Seagate IronWolf 8TB:

Writing to three Seagate IronWolf 8TBs at the same time:

Writing to a cheap 512GB SSD I had lying around:

Writing to two Seagate IronWolf 8TB drives and one 512GB SSD at the same time:

Writing to one Seagate IronWolf 8TB drive:

The gigabit ethernet becomes the bottleneck writing to the DAS when it’s connected to the Vero V.
A gigabit network will usually run around 960 Mbps after overhead.
That translates to 120 MB/s, and that’s about what I see (100~120 MB/s) when writing to the DAS while it’s connected to the Vero V on the USB 3.0 port.
So the gigabit ethernet will saturate before this DAS does.
Plugged directly into my PC I was able to get over 400 MB/s in aggregate.
I did one test writing to 5 drives (3x IronWolf 8TB and 2x IronWolf 6TB) and they all ended up transferring around 90 MB/s, which is 450 MB/s in aggregate.
And there I think I was hitting the bottleneck of my NVMe’s read speed, because if I transfer from my 256GB NVMe to my 1TB SATA SSD I get 450 MB/s after the cache clears.
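If anyone wants to confirm where their own bottleneck is, iperf3 will measure the raw network path separately from the disks (the IP is a placeholder; iperf3 is available in the Debian repos the Vero uses):

    # On the Vero (server side):
    sudo apt-get install iperf3
    iperf3 -s

    # On the laptop/PC (client side), pointing at the Vero's IP:
    iperf3 -c 192.168.1.50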


USB 3.0 has a theoretical max of 5 Gbit/s (roughly 500 MB/s usable), so this DAS chugs along pretty much at full tilt, no problem.

The fastest a NAS could feed a Vero V is gigabit (120 MB/s), due to the NIC being 10/100/1000.

A DAS can feed the Vero V over USB 3.0 about 4 to 5 times faster than a NAS.

So if your main goal is to feed your Vero V, you’re not going to have any real world limitations with a USB 3.0 DAS.
If you share the DAS from the Vero V, then other players can access the library as well.
That’s what my children do with their Roku TVs.

I have to say I’m pretty pleased with this DAS, so I bought a second one since I have so many drives laying around.
My first DAS has 8x Seagate Barracuda 8TB drives in it that I shucked from Backup Plus HUBs.
I have already filled the second DAS up with 3x IronWolf 8TBs, 2x IronWolf 6TBs, 2x Constellation ES.3 4TBs, and 1x WD Red 8TB (EasyStore shuck).


I would be too. Honestly, I would have expected less optimal behavior with simultaneous transfers but that looks as good as one could hope for.

I’ve built a number of NAS/DAS setups over the decades and like to think I’ve learned a few lessons. Here they are:

  • Never connect primary storage with USB connectors. They come loose and disappear from time to time. For backup storage, this isn’t THAT bad, but for primary storage, it will cause outages and corruption. Just. Don’t.
  • Prefer eSATA, SATA, or any connection type with screws, like InfiniBand. These are built specifically for external connections and vibration.
  • Drive cages are available that hold 4x 3.5" disks but fit into 3x 5.25" slots in a standard mid/full tower case. Any cheap case can be used that has a little adapter bracket for eSATA-PM to SATA for the external connection.
  • I don’t know how anyone would connect SATA/eSATA to a non-Intel/AMD motherboard, but I suppose it can be done.
  • I suppose USB 3.2 would provide 10 Gbps connections in theory, but do any SBCs have USB 3.2 yet? That is typically visually implied by a “red” tab inside the USB slot. The male connector is still prone to vibration disconnects.

Some personal hardware:
I’ve used a 4-drive Addonics external array - it is very old now, but new it was $100.
I’ve used some cheap plastic Roswell hot-swap cages, ~US$50. I have one in each of two of my systems. These are NAS + virtual machines, upgraded from a 65W dual Pentium to a 65W Ryzen 5 a few years ago. It is amazing what a $300 upgrade can achieve in a desktop. I also run Jellyfin on one of those systems, which provides the DLNA/NAS for Kodi on my playback devices around the house.
I’ve used a US$30 steel cage for 4 drives in one of my x86/64 systems. Added a quality LSI SAS 2-port HBA for the extra internal SATA connections. Performance is pretty great. 2 SAS connectors support 8 SATA HDDs.

I’ll get some Amazon links (no referrer)

As for NAS/DAS file systems: if Linux will be controlling access, then use a native Linux file system - that would be either ext4 or xfs, in general, for noobs to storage architecture. Consider using LVM or ZFS if you are at an intermediate level. Regardless, backup design needs to be part of any storage effort. Important files need 2 copies, minimum. Critical files need 3 copies, with 1 of them stored in a different region, away from natural disasters that would impact one location but not the other.
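For the "2 copies minimum" case, a plain rsync mirror to a second (ideally offline) drive goes a long way; a rough sketch with placeholder mount points:

    # Mirror the media drive to a backup drive.
    # --delete makes it an exact mirror, so deletions propagate too.
    rsync -a --delete --progress /mnt/media/ /mnt/backup/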

For access over the network, use NFS for all Unix-based OSes. There’s just something about NFS that makes it better for not just streaming, but general use.
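A minimal NFS sketch (paths, hostname, and subnet are placeholders):

    # On the server, add an export to /etc/exports:
    /mnt/media  192.168.1.0/24(ro,all_squash)

    # then reload the export table:
    sudo exportfs -ra

    # On the client (e.g. the Vero):
    sudo mount -t nfs server.local:/mnt/media /mnt/media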

Use Samba only for MS-Windows client systems. Beware that MSFT has been drastically changing their implementation of CIFS in incompatible ways, so the Samba guys are always playing catch-up. Depending on the different MS-Windows client versions, you can have Samba set up to provide the best performance and security with the newer protocol versions. Win7 used CIFS v2.1, so if you have Win10 or later, you should default to CIFS v3+. Alas, the way that MSFT systems “find” Samba and other servers on the LAN has changed with Win10 and later. They use mDNS and ZeroConf/Bonjour now, not the old nmbd broadcast method. There are special services that need to be installed, configured, and run on Linux for MS-Windows to find them without needing an IP address.
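The protocol floor is a one-liner in smb.conf; something like this in the [global] section (the exact value depends on your oldest client):

    server min protocol = SMB3_00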

If you use LVM, be cautious about spanning across physical storage devices to create a single file system. Data loss when doing this is common.

RAID really doesn’t have any place in a home media environment. RAID solves two issues - high availability and, maybe, slightly better performance. It never replaces backups, which can solve hundreds of issues, including a failed RAID setup. Backups are far more important than any RAID.

That’s probably more than enough to help and confuse.

Lots of great points.

I agree that a screwed-in connection is better than a snap-in connection, but the fear of a snap-in connection coming loose is probably just fearmongering IMHO.
VGA and DVI had screws and they are dead standards now that have been replaced with HDMI and DP which snap in.
Granted, an AV cable getting unplugged is no major threat, whereas a USB connection getting unplugged in the middle of an operation could spell disaster.
But the Vero is not likely to be writing anything important, if anything at all, to a USB attached drive 99% of the time.
If the USB cable gets unplugged while watching something then no data is likely lost or corrupted.
So the only concern would be if you are writing to the disk from a remote location, but even then, these are not critical system files.
The transfer will fail, you plug it back in, delete the incomplete file from the DAS, then start the transfer anew.
I used to have 10 drives plugged into my Vero 4K+, so the odds of a USB failure would have been tenfold for me.
I have had zero issues with USB over all these years and I have six children, and my wife, in my home who could mess it up at any given time while I’m out at work.
So your statements are in fact true but the likelihood is so low and the consequences so trivial that I wouldn’t think it warrants any real fear.
I know there are many users on here using USB, so maybe someone else has a horror story they can share and I’ve just been lucky all this time.

I have ten drives now and still room for another five without getting creative or having to go external. I’m not running off a SBC though. Whisper quiet and rock solid.

I get by with 1 network drive, 1 offsite mirrored backup, and 1 offline drive. I’d have to be pretty unlucky for all of that to fail. WD drives, I’ve found, have the worst connections, but I’ve never had a USB work loose.

As for the OP, it gives you very fast transfers to the Vero; you say your kids can then stream, but that’s surely limited by gigabit ethernet, although still plenty for a few remuxes at once.

Every 2 months or so, one of my USB3 external storage “backup” drives disconnects from the system. The cable is still plugged in, but something happened electrically to cause it to disconnect, leading to failures.

I can assure you, this has happened on multiple systems over the last 10 yrs. Take it as a warning. Be happy that you’ve not experienced it.

I also had some SATA connectors in an external array come loose. The solution for that was to use 90° SATA cables, which solved the problem until the array died after 15 yrs of service.

BTW, all my equipment sits on a heavy rack with thick carpeting for vibration control. It isn’t being tapped or rocked. Actually, it is very uncommon to touch the rack or anything on it at all. About the only thing that might impact it would be an HVAC vent in the ceiling above it. The rack is steel bars, so airflow goes through all but the bottom shelf, about 6ft away from the top shelf.

My main NAS has … 9 drives active, 2 are USB and for backups only. The secondary has 7 drives, none are USB.
My media player devices only use the microSD storage. Media is streamed over the wired ethernet network.
Wifi is for guests and untrusted devices. All wifi connects outside the protected LAN and is treated like raw internet traffic. I don’t trust any RF connectivity to be secure, and there’s plenty of proof that that is a reasonable decision.

Both locking SATA connectors and hot melt glue are a thing (as is alcohol when you need to remove the hot melt glue). To be honest though the only time I have ever seen problems with an internal SATA connector was with crappy interface cards and poor quality hot swap cages.

I’ve been doing tech for over 30 years and have used so many cables in so many different environments and use cases, and I have never had a cable come loose on its own.
Older cables like IDE and SCSI were definitely more secure than modern cables like SATA, but I have never experienced or even heard of cables disconnecting on their own before.
Hot gluing SATA cables?
Maybe if the computer is being shipped on the back of a truck I guess.
How do these cables just pop off by themselves in an environment with little to no vibration?
You guys have poltergeists in your homes!