[Howto] Newbies guide to filesystems

#1

Last updated 2019-08-25T22:00:00Z
Feel free to correct or add more to this “how to” post

Hey Peeps

So something I noticed as of late is that a certain number of users complain about speeds etc. on USB disks, so I thought I'd make a little how-to post.

First off, remember to use a good recommended charger and USB cable, or the cable that comes with your device like the Vero. It is also often recommended to have a self-powered USB drive or a powered USB hub.

Choosing your filesystem

Many people not familiar with Linux will format the drive in Windows and then use it for OSMC. Just remember NTFS isn't native to Linux, so it will always perform slower than intended. Also, if you have I/O-intensive programs (like torrents) running, it will put additional stress on the CPU. NTFS also does not understand Linux permissions; due to that lack of permissions, anyone can change those files without having to worry about passwords.

NTFS performance can be improved using the big_writes mount option.

Here is a chart; the program used is FS-Mark, a filesystem benchmark tool. For more comprehensive charts, see the link.

[FS-Mark benchmark chart]

Here is a live test on a Vero device

| Filesystem | Time (secs) | Sys time (secs) | CPU system (%) | CPU user (%) | Memory buffer (M) |
|---|---|---|---|---|---|
| NTFS | 1844 | 284 | 66 | 40 | 720 |
| BTRFS | 1609 | 137 | 16 | 0.1 | 6 |
| EXT4 | 1596 | 299 | 9.6 | 0.2 | 8.8 |
| EXFAT | 1615 | 156 | 5.41 | 0.2 | 677 |

So let's run down the benefits of each filesystem available for OSMC.

exfat

Well, again not native to Linux, but Microsoft made this to bridge the gap to Linux, so it has more support than NTFS (IMHO), but it still has technical issues. The thing is, you might need some tools to do additional stuff on Linux, like exfat-fuse and exfat-utils.
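To make that concrete, here is a hedged sketch of getting those tools onto a Debian-based OSMC install (package names are the 2019-era Debian ones; the device name /dev/sdX1 is a placeholder, so double-check with lsblk before formatting anything):

```shell
# Install the exFAT userspace tools (FUSE driver plus mkfs/fsck helpers)
sudo apt-get install exfat-fuse exfat-utils

# Format a partition as exFAT - WARNING: /dev/sdX1 is a placeholder,
# this erases whatever is on the partition you point it at.
sudo mkfs.exfat -n MEDIA /dev/sdX1

# Check an existing exFAT filesystem for errors
sudo fsck.exfat /dev/sdX1
```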

So how does it handle? Somewhat OK. It's basically FAT64, and you can use the drive on both Windows and Linux without issues in terms of speed. However, there are plenty of reasons not to use this as a filesystem (see link), and it does suffer from issues regarding transfer speeds when it comes to Samba.
exFAT is for SD media, not spinning disks or SSDs. It was created for two purposes:

  • Get a new patent, so MSFT could demand payment from every SD user and device that uses SD cards of any sort: video cameras, PnS cameras, DSLRs, MP3 players, phones… that's billions of devices, all providing a small payment per device.
  • Better support for 32GB and larger storage devices. It is possible to use FAT32 with larger storage devices; it just becomes extremely inefficient with allocations. MSFT changed the FAT32 format code to prevent formatting partitions over 32GB, but it wasn't always that way, and Linux will format larger partitions if you ask it.

ext4

This is the native filesystem for OSMC and the most recommended one, since it works “out of the box”. This filesystem can work under Windows with additional drivers such as the Paragon Software suite (see link).
It's less prone to corruption, unless running with a bad charger or cable :wink: Defragmentation is a non-issue since the kernel has good management skills.

btrfs

The hidden contender and my personal favorite. For this filesystem you need to install some additional tools (btrfs-tools / btrfs-progs). This filesystem has many unique features such as auto-defrag, scrubbing, balancing of metadata, etc. It's not recommended for newbies, but for those that love to experiment, hell, give it a go, I promise that you will have fun :stuck_out_tongue:
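A taste of those maintenance features as hedged one-liners (the mount point /mnt/btrfs and device /dev/sdX1 are placeholders; these need btrfs-progs and root):

```shell
# Scrub: read every block and verify checksums (runs in the background)
sudo btrfs scrub start /mnt/btrfs
sudo btrfs scrub status /mnt/btrfs

# Balance: rewrite chunks to reclaim half-empty allocations
# (-dusage=50 only touches data chunks that are less than 50% full)
sudo btrfs balance start -dusage=50 /mnt/btrfs

# Auto-defrag is a mount option rather than a command:
sudo mount -o autodefrag /dev/sdX1 /mnt/btrfs
```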

And again, there are drivers for this under Windows, such as Win-BTRFS (see link) and the Paragon Software Suite (see link).

The con of btrfs is that it can be a little bit heavier (2-5%) on low-end machines, but it pays off with the features you get in return. Note that btrfs lies about storage: the du and df commands on a btrfs filesystem cannot be trusted. The copy-on-write (CoW) nature of btrfs has positives and negatives.
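Since du and df can't be trusted on btrfs, the filesystem ships its own reporting commands. A sketch (the mount point is a placeholder):

```shell
# btrfs' own view of allocation, split by block-group type (data/metadata/system)
sudo btrfs filesystem df /mnt/btrfs

# More detailed breakdown, including an estimate of free space
sudo btrfs filesystem usage /mnt/btrfs
```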

But how do I add more stuff to my HDD if I can't plug it into my computer?

Well, I'm glad you read this far. So first off, go into the OSMC Store and download Samba; this will share your drive on the network. We are, after all, living in 2019, so why move a USB drive back and forth?

Just share that drive, and from Windows Explorer type \\osmc (the name may vary) to get to the shared drive and dump your stuff.
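For reference, a Samba share definition looks roughly like the fragment below. This is a hand-written example, not OSMC's actual shipped config; the share name and path are made up:

```
# /etc/samba/smb.conf fragment - example only
[media]
    path = /media/usbdrive
    browseable = yes
    read only = no
    guest ok = yes
```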

3 Likes
#2

One point to maybe highlight is not only the resulting speed, which might be lower, but also the extra strain it puts on the CPU. This might not be a major issue when playing videos off the disk, but it can be if people have other stuff running, like torrents.

#3

Feel free to add it :slight_smile:

#4

I was fearful to interrupt the literature flow of this novel :slight_smile:

1 Like
#5

Haha, don't be. I want more to be added to this so it may serve as a guide to newcomers that have no clue.

#6

Added a bit more about Linux permissions, thought that was valid :slight_smile:

#7

This webpage strangely indicates exFAT to be slower than NTFS.

#8

Yeah, exFAT isn't really a “good” filesystem, but it was meant to bridge Linux and Windows. However, I have it included since it is an option, so if you find more stuff, feel free to add it to the exFAT part.

I haven't used exFAT at all since it has no use for me.

#9

OK, I think I will spend an hour on the weekend to make a full comparison of ext4, btrfs, NTFS, and exFAT with a Pi3 and a Vero 4K that we know well.

1 Like
#10

Tnx :slight_smile: would be nice with real tests :smiley:

Again, btrfs is a totally different beast when tweaking fstab, but for those tests the defaults are preferred.

#11

Just a comment: at the moment, exFAT in combination with Samba is not a good idea, since you cannot transfer large files with that, meaning you get a hard error during such transfers.

1 Like
#12

Added it. Feel free to add additional info to the OP next time :slight_smile:

#13

exFAT over Samba also does not allow for transfer status in Windows.

As for the speed, I have read that file size is a major factor in the Linux NTFS speed hit. That chart speed-tests 5k 1MB files, and I think that might not be entirely valid for the use case here. I tested an SSD split in half into NTFS and ext4 partitions on a Pi 3B+ over Samba, read/write, and they performed the same, as the bottleneck was the USB.

#14

Ah great idea, will use that method when testing the 4 file systems.
And yes, the plan is to run two tests: one with a single 80GB file and one with 5k 1MB files.

#15

I don’t have a Vero.

FUSE will always be slower than a kernel-based file system driver like ext3/ext4. Always.

For SSD and spinning disks, choose ext4 if you don’t have a good reason to choose some other file system.

NTFS can be set up to support permissions, with some Windows-to-Unix userid mappings via a few mount options. Someone new to Linux probably shouldn't attempt this. In my case, NTFS POSIX permissions only worked for a month, then stopped working; recent troubleshooting hasn't found a solution. About once a month I have to reformat the partition and reload the mapping files due to file corruption caused by the video recording device. chmod initially worked. chown initially worked. NTFS is still slower than ext4. The only reason I have any NTFS is because of a standalone video recording device: it supports only NTFS, nothing else. My use doesn't connect it to the OSMC machines.

NTFS write performance can be improved using the big_writes option to the mount. https://www.tuxera.com/community/ntfs-3g-faq/#slow There are other tuning and troubleshooting tips for NTFS performance at that link, from the company that provides the ntfs-3g driver.

# example mount command using a partition LABEL
LABEL1=250G
sudo mkdir -p /mnt/$LABEL1
sudo mount -o big_writes,async,uid=1000,gid=1000  LABEL=$LABEL1 /mnt/$LABEL1

or

# /etc/fstab example
/dev/sdZ1   /mnt/250G   ntfs   noatime,big_writes,async,uid=1000,gid=1000     0     0

On SBCs like a Raspberry Pi, the I/O bus is often shared and limited. This prevents networking and disk I/O from being used at the same time at full bandwidth. While less important on newer SBCs, it is always a consideration on all computers, even $20K servers with multiple buses.

BTRFS lies about storage available and used. The du and df commands on a BTRFS cannot be trusted. The Copy-on-Write, CoW, nature of BTRFS has positives and negatives.

BTRFS can use compression, but most media files are already highly compressed, so using it doesn’t help there. It is helpful for almost all other types of files. There have been warnings against using BTRFS on virtual machine host storage.
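If you do want compression for the non-media files, it is just a mount option. An fstab-style sketch in the same spirit as the NTFS example above (device and mount point are placeholders; compress=zstd needs kernel 4.14+, older kernels can use compress=lzo or compress=zlib):

```
# /etc/fstab example - btrfs data partition with transparent compression
/dev/sdZ1   /mnt/data   btrfs   noatime,compress=zstd   0   0
```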

Other methods exist to get files onto any Linux system using ssh-based tools. All networked computers have good ssh and sftp clients. On Windows, WinSCP or Filezilla can be used. On almost any Linux, the built-in file manager will support either the sftp:// or ssh:// URL to connect to remote storage. Nautilus, Caja, Nemo, Thunar, etc. all do. There are the CLI sftp and scp tools as well.
If ssh-keys have been set up and the public key transferred to the Linux ssh server, then password authentication isn't required. Using keys for authentication makes the connection more convenient AND more secure. sftp with keys is considered secure enough to access and transfer files over the internet. sftp was designed to work just like plain FTP, just with secured authentication based on ssh. All data and login credentials are transferred inside a secure channel, using a single TCP port, which is firewall friendly. After the connection is made, normal FTP commands work - get, mget, put, mput, ls, dir, etc.
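The key setup itself is only a couple of commands. A minimal sketch (the key filename and the osmc@osmc.local target are placeholders):

```shell
# Generate an ed25519 keypair; -N "" means no passphrase (use one in practice)
ssh-keygen -t ed25519 -f ./osmc_key -N "" -q

# Then copy the public key to the server (asks for the password one last time):
#   ssh-copy-id -i ./osmc_key.pub osmc@osmc.local
# After that, "sftp -i ./osmc_key osmc@osmc.local" logs in without a password.
ls osmc_key osmc_key.pub
```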

$ sftp username@remote-server

scp was designed to replace rcp. It is a drop-in replacement for rcp.

$ scp files-to-send username@remote-server:/target/directory/

Using a Linux file manager turns sftp into a drag-n-drop transfer. Open two file manager windows - 1 local and 1 on the remote system using the sftp://username@remoteserver/ URL. Drag-n-drop files between either side. Android, Windows, Linux, BSD, any OS with ssh will support this.

Rsync over a network will use ssh by default if it is there. Rsync has had this as default since around 2005-ish. rsync is preferred when there are many files or entire directory trees to be copied or mirrored. The real power for rsync comes in that only modified parts of files or new files get transferred.

$ rsync -avz /media/all-TV-shows   userid@remote-server:/path/to/target/

IMHO, samba is a hack to be avoided. Samba does not honor Unix permissions. Every samba share has to be configured for the file and directory permissions for all files and directories on that share. It is useful only when MS-Windows is the client. Unix-based OSes should use NFS instead. Both NFS and Samba/CIFS can be enabled concurrently, for the same directories.

NTFS is a better file system than FAT-whatever, including exFAT, primarily because it is a journalled file system like all the modern Unix file systems (ext3/4/btrfs/zfs/xfs/jfs …).

exFAT is for SD media, not spinning disks or SSDs. It was created for two purposes, as listed in the OP above.

On Linux, disk performance tests can be run using bonnie++. Definitely use externally powered USB devices. USB3 and USB3.1 storage is faster than USB2 storage even when connected to USB2 ports. USB2 bandwidth will usually be less than 22MBps even when the disks support over 65MBps writes.
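A hedged bonnie++ invocation for such a test (the mount point is a placeholder; -s is the test file size in MB and should be at least twice the machine's RAM so caching doesn't skew results, and -u sets the user to run as when started as root):

```shell
# Benchmark the disk mounted at /mnt/usb-test with a 2GB test file
bonnie++ -d /mnt/usb-test -s 2048 -u osmc
```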

Linux has lots of file systems, each designed for specific purposes and at a specific time in history. Perhaps 30 file systems exist. Again, if SSDs or spinning disks are used, stick with ext4.
ext4 bandwidth for a spinning HDD on a computer via any SATA connection can be 125MBps. For SSDs, 175MBps - 3+GBps are possible, depending on the interface used - SATA, USB3, or NVMe.

For larger media collections, using either ZFS or LVM+ext4 is often the best choice. Both of these solutions bring enterprise storage management capabilities. ZFS isn’t considered ready for use on the boot or OS partitions, but LVM+ext4 is and has been for almost a decade. For very large storage volumes, xfs is faster than ext4, but ext4 when partnered with LVM allows expansion and reduction of logical volumes, unlike xfs.
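The expansion that makes LVM+ext4 attractive is a one-liner once the volume group has free space. A sketch with made-up VG/LV names:

```shell
# Grow the logical volume "media" in volume group "vg0" by 100GB and
# resize the ext4 filesystem inside it in the same step (-r), while mounted.
sudo lvextend -r -L +100G /dev/vg0/media
```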

No Unix file system likes to be full. Performance degrades as the last few GB of storage are used up. Most Unix file systems automatically reserve 5% of the total storage for use only by the root user. When disks were 20GB in size, that made much more sense than today. Disks that are 2TB and larger will still reserve 5% of the total, so that could be 100GB that can't be touched! The reserved blocks can be freed for use using the tune2fs program, at least for ext2/3/4 file systems. For data-only partitions/LVs, setting the reserved amount to 0% probably won't be too harmful. DO NOT do this on any OS partitions or LVs. It will end badly. Of course, this assumes that / is allocated a reasonable size, perhaps 25GB, not more.
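Here is what freeing the reserved blocks looks like, demonstrated safely on a throwaway image file rather than a real data partition (needs e2fsprogs; no root required when working on an image):

```shell
# Build a small ext4 filesystem inside a sparse image file
truncate -s 64M ext4.img
mkfs.ext4 -F -q ext4.img

# By default about 5% of blocks are reserved for root; show the count
tune2fs -l ext4.img | grep 'Reserved block count'

# Drop the reservation to 0% - only ever do this on data partitions!
tune2fs -m 0 ext4.img
tune2fs -l ext4.img | grep 'Reserved block count'
```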

IMHO.

3 Likes
#16

Been getting good results with btrfs. Even though there is the df/du issue, with some tweaking it's a really nice filesystem with little overhead and major improvements.

As for Samba, well, I see it as a necessary evil, since it's needed as most of the community are Windows users and are used to Windows solutions.

#17

@Spammer456 Added some of your notes into OP so tnx for the contribution

@fzinken did you get any test data?

#18

Well, still working on different scenarios; in between I looked at bonnie++ but didn't consider it real-life.
So far I only did a write test with dd with a larger block size. The message is basically that NTFS is the slowest, but besides being slow, the main topic is CPU impact.
Below is a first view, open for comments. Next steps: a test with a small block size, a read test, and also a Samba test.

Tests run on the Vero:

| Command | Time (secs) | Sys time (secs) | CPU system (%) | CPU user (%) | Memory buffer (M) |
|---|---|---|---|---|---|
| `time sh -c "dd if=/dev/zero of=/mnt/ntfs_test/test.tmp bs=32k count=2000000 && sync"` | 1844 | 284 | 66 | 40 | 720 |
| `time sh -c "dd if=/dev/zero of=/mnt/btrfs_test/test.tmp bs=32k count=2000000 && sync"` | 1609 | 137 | 16 | 0.1 | 6 |
| `time sh -c "dd if=/dev/zero of=/mnt/ext4_test/test.tmp bs=32k count=2000000 && sync"` | 1596 | 299 | 9.6 | 0.2 | 8.8 |
| `time sh -c "dd if=/dev/zero of=/mnt/exfat_test/test.tmp bs=32k count=2000000 && sync"` | 1615 | 156 | 5.41 | 0.2 | 677 |
#19

Neat, I'll add that to a nice table :smiley:

#20

Yeah let’s see how we can do it nicely considering that I want to do it for Vero and Pi3

1 Like