Cannot create directory: No space left on device

Hey everyone.

I’m puzzled. I’m running OSMC on my RPi 2, with all the media on an external USB drive plugged into the Pi. The drive is formatted as ext4.

There’s still like 15+ GB of free space on it, yet when I try to run mkdir it yells at me:

cannot create directory “name_of_directory”: No space left on device

How come?

Also, when I run df it gives me this:

Filesystem     1K-blocks      Used Available Use% Mounted on
devtmpfs          370520         0    370520   0% /dev
tmpfs             375512      5100    370412   2% /run
/dev/mmcblk0p2   7068376   2195756   4490516  33% /
tmpfs             375512         0    375512   0% /dev/shm
tmpfs               5120         4      5116   1% /run/lock
tmpfs             375512         0    375512   0% /sys/fs/cgroup
/dev/mmcblk0p1    244988     24948    220040  11% /boot
/dev/sda1      307534284 289950488   1938904 100% /media/osmcmedia
tmpfs              75104         0     75104   0% /run/user/1000

How can /media/osmcmedia be 100% under Use%?

This is the output from df -i:

Filesystem       Inodes IUsed    IFree IUse% Mounted on
devtmpfs          92630   357    92273    1% /dev
tmpfs             93878   415    93463    1% /run
/dev/mmcblk0p2   457856 68237   389619   15% /
tmpfs             93878     1    93877    1% /dev/shm
tmpfs             93878     4    93874    1% /run/lock
tmpfs             93878    10    93868    1% /sys/fs/cgroup
/dev/mmcblk0p1        0     0        0     - /boot
/dev/sda1      19537920  1993 19535927    1% /media/osmcmedia
tmpfs             93878     4    93874    1% /run/user/1000

I read somewhere that it’s because I’m running out of inodes, but even if that’s true, I have no idea how to fix it. Plus, df -i strangely reports 1% for /media/osmcmedia - shouldn’t it report 100% if there were no inodes left on /media/osmcmedia? And df reports 100% use when there are 15+ GB left on the drive, which is strange too.

Any help would be appreciated.

Hi,

If you attach the USB drive to another device, are you able to write to it then?

Thanks Tom.

By default the system holds 5% back for various reasons.

Actually, it is not “various” reasons.
On system disks the OS usually holds back 5% to prevent normal users from filling the disk completely; root processes/users can still write into that reserve.
The main reason I would keep 10% free, however, is that with less than that the defragmentation algorithm no longer works efficiently.

You have less than 2G free, which is less than 1% of ~300G, so I assume df rounds it up and shows 100%.
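That rounding can be checked with the figures from the first df output. GNU df computes Use% as used / (used + available) and rounds the result up, so anything over 99% displays as 100%. A quick sketch (the used/available numbers are copied from the /dev/sda1 line above):

```shell
# Use% the way GNU df calculates it: used / (used + available),
# rounded up. Figures taken from the /dev/sda1 df line above.
used=289950488
avail=1938904
pct=$(awk -v u="$used" -v a="$avail" 'BEGIN {
    p = u * 100 / (u + a)                        # ~99.34
    printf "%d", (p == int(p) ? p : int(p) + 1)  # df rounds up
}')
echo "${pct}%"
```

So 289950488 / 291889392 is about 99.3%, which df rounds up to the 100% you’re seeing.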

Ok, I messed around some more.

If I plug the drive directly into my computer via USB, I can create folders and copy files to it just fine.

Also, like I said, the system reports ~18 GB of available space on that disk.

Here’s the df -h output when run on my computer:

Filesystem      Size   Used  Avail Capacity iused      ifree %iused  Mounted on
/dev/disk2     532Gi  177Gi  355Gi    34% 1468513 4293498766    0%   /
devfs          189Ki  189Ki    0Bi   100%     652          0  100%   /dev
map -hosts       0Bi    0Bi    0Bi   100%       0          0  100%   /net
map auto_home    0Bi    0Bi    0Bi   100%       0          0  100%   /home
/dev/disk1s4    47Gi   27Gi   20Gi    59%   72053   20490727    0%   /Volumes/Windows
/dev/disk1s3   620Mi  540Mi   80Mi    88%      66 4294967213    0%   /Volumes/Recovery HD
/dev/disk3s1   298Gi  281Gi   17Gi    95%    1991   19535929    0%   /Volumes/osmcmedia

This whole thing is very strange to me. It’s a 320 GB drive and that’s what my system is reporting. If I plug it into my Pi, it reports it as a 294 GB drive (294G Size, 227G Used, 1.9G Avail), both in the GUI and in df -h.

My head is spinning at this point :).

From your first post:

The used figure is 289950488, not 227G. Of course, it confuses matters when you use df on one system and df -h on the other, but the figures are consistent. What’s the OS of your computer?

I’m sorry, that was my mistake (typo): Pi of course reports it as 294G Size, 277G Used, 1.9G Avail.

I’m on macOS Sierra.

I don’t use a Mac but I’d suspect it doesn’t impose the 5% reserved space. And on ext4 it’s way too much anyway. This is from the ext4 author: Reserved block count for Large Filesystem

No, it does not. That’s the problem. I assume ext4 does not impose this much, either.

Whatever the OS or filesystem used to format the disk, if you format a 320GB drive you shouldn’t end up with under 300GB of space available. I’m totally puzzled.

I tried formatting it:

  1. on my Mac
  2. on Pi itself, plugged in directly
  3. on Ubuntu

In all three scenarios the Pi does not see it as a 320GB drive, or even something close to 320GB.

I deleted some old movies just a moment ago, and here’s what the Pi reports from df -h now:

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        362M     0  362M   0% /dev
tmpfs           367M  5.0M  362M   2% /run
/dev/mmcblk0p2  6.8G  1.9G  4.6G  30% /
tmpfs           367M     0  367M   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           367M     0  367M   0% /sys/fs/cgroup
/dev/mmcblk0p1  240M   25M  215M  11% /boot
/dev/sda1       294G  269G  9.5G  97% /media/osmcmedia
tmpfs            74M     0   74M   0% /run/user/1000

Ok, so I supposedly have 9.5G of free space on the disk now, so mkdir has no problem creating a folder.

However, the Pi reports Use% as 97%, which is bonkers. The math is all over the place.

Firstly, it’s a 320GB drive, and Pi reports it as a 294G drive.
Secondly, if it’s a 294G drive and there’s 9.5G left, that would mean there’s 284.5G in use - not 269G as Pi reports it.
Thirdly, 269G out of 294G is ~91.5% - not 97%, as Pi reports it.

Now if I plug it into my Mac, here’s the df -h output:

Filesystem      Size   Used  Avail Capacity iused      ifree %iused  Mounted on
/dev/disk2     532Gi  177Gi  355Gi    34% 1468805 4293498474    0%   /
devfs          189Ki  189Ki    0Bi   100%     652          0  100%   /dev
map -hosts       0Bi    0Bi    0Bi   100%       0          0  100%   /net
map auto_home    0Bi    0Bi    0Bi   100%       0          0  100%   /home
/dev/disk1s4    47Gi   27Gi   20Gi    59%   72053   20490727    0%   /Volumes/Windows
/dev/disk1s3   620Mi  540Mi   80Mi    88%      66 4294967213    0%   /Volumes/Recovery HD
/dev/disk3s1   298Gi  274Gi   24Gi    92%    1952   19535968    0%   /Volumes/osmcmedia

Now at least the Capacity figure is somewhat accurate, at 92% ;D.

Am I crazy or missing something obvious here :slight_smile: ?

UPDATE: I should mention that the disk itself is in good condition. If I format it as HFS+ (native to the Mac), or even NTFS or FAT, I get almost all of the 320GB as expected, not under 300GB.

Run:

~# sudo tune2fs -l /dev/sda1

and it will show you how much is reserved.

If it is a data disk, you may want to run:

~# tune2fs -m 0 /dev/sda1

to disable the reserved blocks completely

You might find this educational. Gigabyte - Wikipedia There’s even a section marked “Consumer confusion”. :wink:

Edit:

/dev/sda1 294G 269G 9.5G 97% /media/osmcmedia

294 * 0.95 - 9.5 = 269.8
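On the gigabyte point: drive makers count in decimal gigabytes (10^9 bytes), while Linux df -h counts in binary gibibytes (2^30 bytes). The conversion alone explains most of the “missing” capacity:

```shell
# A "320 GB" drive (decimal gigabytes, as marketed) expressed in the
# binary gibibytes that df -h reports:
awk 'BEGIN { printf "%.0f GiB\n", 320e9 / (1024 ^ 3) }'
```

So ~298 GiB is the most any tool counting in binary units will ever show for this drive; the remaining few GiB down to the 294G the Pi prints are filesystem metadata.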

Here’s the output from sudo tune2fs -l /dev/sda1:

tune2fs 1.42.12 (29-Aug-2014)
Filesystem volume name:   osmcmedia
Last mounted on:          /media/osmcmedia
Filesystem UUID:          de6835a7-c0b0-4e2d-b2f6-f3ccdf30ebf8
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         unsigned_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              19537920
Block count:              78142549
Reserved block count:     3907127
Free blocks:              6377869
Free inodes:              19535970
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1005
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Thu Jun  1 19:02:54 2017
Last mount time:          Fri Jun  2 15:41:54 2017
Last write time:          Fri Jun  2 15:41:54 2017
Mount count:              14
Maximum mount count:      -1
Last checked:             Thu Jun  1 19:02:54 2017
Check interval:           0 (<none>)
Lifetime writes:          4914 MB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:	          256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      ddf22852-c061-41a7-9fb2-e417e789b272
Journal backup:           inode blocks
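Side note: the tune2fs figures above account for the “missing” space almost exactly. The sketch below is plain arithmetic on the numbers shown (4096-byte blocks), nothing filesystem-specific:

```shell
# All figures copied from the tune2fs -l output above.
block_size=4096
total_blocks=78142549      # "Block count"
reserved_blocks=3907127    # "Reserved block count" (the 5%)
inode_count=19537920       # "Inode count"
inode_size=256             # "Inode size"

awk -v bs="$block_size" -v tb="$total_blocks" -v rb="$reserved_blocks" \
    -v ic="$inode_count" -v is="$inode_size" 'BEGIN {
    gib = 1024 ^ 3
    printf "raw capacity:  %.1f GiB\n", tb * bs / gib   # ~298.1
    printf "reserved 5%%:   %.1f GiB\n", rb * bs / gib   # ~14.9
    printf "inode tables:  %.1f GiB\n", ic * is / gib    # ~4.7
}'
```

298.1 GiB minus the ~4.7 GiB of inode tables is roughly the 294G Size the Pi prints, and the ~14.9 GiB of reserved blocks matches the gap between the Mac’s “available” figure and the Pi’s.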

@dillthedog Ok, but regardless of how various systems report it, I do have plenty of free space, so I shouldn’t be running into Cannot create directory: No space left on device.

UPDATE: Or run into problems copying a 4GB movie while having ~15GB of free space, and being told there’s not enough space on the disk to copy the file :slight_smile: ?

Can you create a directory now that you’ve freed up some space?

Yes. This doesn’t explain all the other things I mentioned, though :).

Plus, even before I freed up some space, I had ~2G free, which shouldn’t have prevented me from creating an empty folder, right?

To be clear, I’m not trying to argue with anyone, just trying to understand what’s going on here, because it doesn’t seem right to me ;x.

UPDATE: Regarding your earlier link, 5% seems like a lot ;|. It seems crazy that I would lose ~16GB of my 320GB drive (or ~50GB of a 1TB drive).

Running tune2fs -m 0 /dev/sda1 seems to have fixed it. The Pi now reports ~25G of free space on the disk, where my Mac reports 26.12GB, so that seems about right.

I’d like to understand why ext4 needs to reserve so much space, though. If I format the disk as HFS+ on my Mac, I don’t lose 5% of the space.

There was some discussion about this a while ago.

I’m still undecided. 5% makes sense on say, an 8GB card. Maybe we don’t need 5% on a much larger drive.

I agree.

Maybe implement some kind of tiers? Like: 5% if the disk/card is smaller than X, 2% if it’s smaller than Y, and 1% if it’s smaller than Z.

Or reserve 5%, but with a cap of, say, 5GB maximum. That way you’d still have enough reserved space for whatever is needed, even on an 8GB card, but the reserve would only grow up to 5GB, so the user would lose at most 5GB regardless of whether they’re using a 100GB, 500GB or 1TB drive.
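For what it’s worth, that cap can be done today with tune2fs: the -r flag sets the reserved block count directly instead of as a percentage. Below is a sketch of the min(5%, 5 GiB) idea; the block count is hard-coded from the tune2fs output earlier in the thread (a real script would parse it from tune2fs -l), and the final tune2fs command is only echoed, not run:

```shell
# Sketch: cap ext4 reserved space at min(5% of the disk, 5 GiB).
# Block count hard-coded from the tune2fs -l output in this thread;
# in a real script you would parse it from `tune2fs -l /dev/sda1`.
total_blocks=78142549
block_size=4096

five_pct=$(( total_blocks / 20 ))                # 5% in blocks
cap=$(( 5 * 1024 * 1024 * 1024 / block_size ))   # 5 GiB in blocks

reserved=$five_pct
[ "$reserved" -gt "$cap" ] && reserved=$cap

echo "would run: tune2fs -r $reserved /dev/sda1"
```

On this 320GB drive the 5 GiB cap wins, so only ~5 GiB would be held back instead of ~15 GiB.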

I think the 5% reserved space is important to keep on the root partition and the benefits of reducing it if someone uses, say, a 32 GB SD card are relatively small. I see the real issue as being more about what to do on large external drives. Short of modifying programs like mke2fs, I’d have thought we’re stuck with whatever it defaults to. There’s nothing to stop us creating a Wiki entry about changing disk parameters but people would need to read it.

I agree, reserve the space. I’m just not sure if going with a percentage regardless of the disk size is the right choice here. It’s fine to reserve 5% on an 8/16/32GB SD card, but on a 1TB drive - not so much.

Here’s to hoping a better solution will come along in the near future :).