Perhaps you’ve just found a reason to use the -m option, but I’m no expert.
If I understand correctly, the -m 0 option would create more free space, as it removes the percentage of disk space that is set aside as a reserve. As I don’t need any space for anything other than storage, that option would definitely be of use in my case.
But what I mean has to do with consciously leaving a percentage of storage space free in order to keep a drive’s read and write performance intact.
I read that ext4 behaves quite differently from exFAT or NTFS, so maybe an ext4 drive does not suffer from deteriorating performance when it is nearly full.
In my experience, transferring a 10GB file to a 4TB drive with only 80GB of free space is much slower than when 200GB of free space is left.
As I say, I’m no expert, but I suspect the slow performance has something to do with HFS’s on-the-fly defragmentation. Since ext4 has something similar, it may suffer the same way.
Then again, I don’t know whether the space ‘reserved for root’ is actually used for everyday file operations. The only way to find out is to run some tests with different values for -m and different amounts of free space.
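If anyone wants to run those tests: assuming the drive shows up as /dev/sda1 (adjust to your setup), I believe the reserve can be changed on an existing ext4 file system without reformatting, along these lines:

# show the current reserve
sudo tune2fs -l /dev/sda1 | grep 'Reserved block count'
# drop the reserve to 0%
sudo tune2fs -m 0 /dev/sda1
# or set it back to the 5% default for comparison
sudo tune2fs -m 5 /dev/sda1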
From what I understand, file systems such as HFS+ and ext4 are less prone to fragmentation than some others, but no file system can completely eliminate it under normal real-world use once the drive starts to fill up. When you delete a file, it leaves a hole of a specific size. Whatever is written there next is unlikely to be exactly the same size. At some point, to use the last remaining bits of space, particularly with a big file, the file system has to scatter the data across a bunch of these holes, and things start to slow down.
Assuming that the reserved space on ext4 is just a logical reserve of a set size, and not a specific location on disk, reserving it would serve the same purpose as simply no longer adding files once the drive reaches that fill level. I can’t see a practical use for this with media storage, as fragmentation should not hurt read speed enough to matter for playback, and you would be giving up storage space.
The best way to deal with this kind of situation is to put a bit of forethought into how you set up and use your drive in the first place. For example, a partition that has only ever held large media files is unlikely to ever have an issue. If you mix that with something like a torrent download location, which keeps adding and removing lots of small files and leaves a bunch of small holes along the way, the impact is likely to be much larger. In this case the person would have been well served by making a separate partition for their torrents (or, better yet, using a cheap SSD for torrent duty).
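As an aside, ext4 lets you actually look at those holes: e2freefrag (part of e2fsprogs) prints a report of how chopped up the free space is. A quick sketch, assuming the partition is /dev/sda1:

# read-only report on free-space fragmentation (safe to run)
sudo e2freefrag /dev/sda1

If most of the free space sits in large extents, big files can still be written contiguously; lots of tiny free chunks is the slow-down scenario described above.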
Yeah, sounds like a performance hit from fragmentation.
I’ve maxed my drive out, deleted stuff to make room, transferred new stuff on, and wow it really kills performance.
When you get that low on space, let’s say less than 10%, you really should do a defrag before filling up the remainder.
I did that and went from 35MB/s to well over 100MB/s.
Set it to defrag one night before you go to bed.
@Kontrarian Are you talking about ext4 and e4defrag which is part of the e2fsprogs package or just about another file system like HFS+ or NTFS?
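For ext4 I mean something like this (the mount point is just a placeholder):

# score how fragmented the files are; low scores mean no defrag needed
sudo e4defrag -c /media/yourdrive
# actually defragment
sudo e4defrag /media/yourdrive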
Sorry, I was speaking to bezulsqy.
I believe he’s using HFS, right?
HFS’s on-the-fly defragmentation gets disabled when space runs low.
Worth the read, check it out.
This paper is almost a decade old, so things have likely progressed.
I successfully formatted a new 4TB HD to ext4 following @Kontrarian’s instructions. I used -T largefile4 and -m 0:
Last login: Tue Sep 10 15:48:19 2019 from 192.168.178.13
osmc@osmc:~$ df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 774688 0 774688 0% /dev
tmpfs 899236 8688 890548 1% /run
/dev/vero-nand/root 14499760 2253176 11486984 17% /
tmpfs 899236 0 899236 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 899236 0 899236 0% /sys/fs/cgroup
/dev/sda1 3906982908 371488 3906611420 1% /media/Elements SE
tmpfs 179844 0 179844 0% /run/user/1000
osmc@osmc:~$ sudo umount /dev/sda1
osmc@osmc:~$ sudo dd if=/dev/zero of=dev/sda1 bs=1M count=64
dd: failed to open 'dev/sda1': No such file or directory
osmc@osmc:~$ sudo dd if=/dev/zero of=/dev/sda1 bs=1M count=64
64+0 records in
64+0 records out
67108864 bytes (67 MB, 64 MiB) copied, 2.54129 s, 26.4 MB/s
osmc@osmc:~$ sudo fdisk /dev/sda
Welcome to fdisk (util-linux 2.29.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): d
Selected partition 1
Partition 1 has been deleted.
Command (m for help): F
Unpartitioned space /dev/sda: 3.7 TiB, 4000751533568 bytes, 7813967839 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
Start End Sectors Size
2048 7813969886 7813967839 3.7T
Command (m for help): n
Partition number (1-128, default 1): 1
First sector (34-7813969886, default 2048): 2048
Last sector, +sectors or +size{K,M,G,T,P} (2048-7813969886, default 7813969886): 7813969886
Created a new partition 1 of type 'Linux filesystem' and of size 3.7 TiB.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
osmc@osmc:~$ mkfs.ext4 /dev/sda1 -T largefile4 -m 0 -L First
mke2fs 1.43.4 (31-Jan-2017)
Suggestion: Use Linux kernel >= 3.18 for improved stability of the metadata and journal checksum features.
Creating filesystem with 976745979 4k blocks and 953856 inodes
Filesystem UUID: d678d447-0877-49ce-b707-7eb32d3d6e12
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
osmc@osmc:~$ reboot
osmc@osmc:~$ Connection to 192.168.178.16 closed by remote host.
Connection to 192.168.178.16 closed.
Ben-2:~ bezulsqy$ ssh osmc@192.168.178.16
osmc@192.168.178.16's password:
Linux osmc 3.14.29-152-osmc #1 SMP osmc-ccachefix aarch64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri Sep 13 13:14:31 2019 from 192.168.178.13
osmc@osmc:~$ df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 774688 0 774688 0% /dev
tmpfs 899236 8688 890548 1% /run
/dev/vero-nand/root 14499760 2253192 11486968 17% /
tmpfs 899236 0 899236 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 899236 0 899236 0% /sys/fs/cgroup
/dev/sda1 3905417316 90140 3905310792 1% /media/First
tmpfs 179844 0 179844 0% /run/user/1000
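To double-check that -m 0 actually took effect, I believe tune2fs can read the reserve back (not part of my session above, device name as before):

# should report: Reserved block count: 0
sudo tune2fs -l /dev/sda1 | grep 'Reserved block count'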
I am transferring a file from an exFAT HD attached to my MacBook via SMB to the ext4 HD attached to my Vero.
It took 22 minutes to transfer one file with a size of 18.3GB, which works out to roughly 14MB/s.
Transferred a 13GB file from the SSD in my MacBook to the Vero in a bit over 15 minutes, which is about 14MB/s again, so the SMB link is probably the bottleneck. For me there is no difference in speed between transferring files over SMB to an exFAT or an ext4 drive attached to the Vero. Later today I will see if reading from the ext4 drive (watching a movie) still works while I am writing to it (downloading with Transmission).
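If anyone wants to take SMB out of the equation, a local write test on the Vero itself should show what the drive can really do. A sketch, assuming the ext4 drive is mounted at /media/First:

# write 1GB straight to the drive, flushing to disk so the number is honest
sudo dd if=/dev/zero of=/media/First/ddtest bs=1M count=1024 conv=fdatasync
# remove the test file afterwards
sudo rm /media/First/ddtest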
Vero V update:
From my test with the new Vero V, using a USB 3.0 external HDD on the Vero V’s USB 3.0 port, I get 26MB/s writing to NTFS and 112MB/s writing to ext4 (presumably because NTFS writes go through the userspace ntfs-3g driver, while ext4 is handled in-kernel).
Huge win for ext4 on the Vero V!