Problem with ext4 external USB drive

Hi,

I have a problem with an external USB drive formatted with ext4.
When I format it and then reboot OSMC, it is always mounted read-only because of file system errors.
The messages in dmesg are the same as those described here: issue raspberry kernel

Is there something I can do so that the filesystem is not corrupted on every reboot and the drive is mounted read-write? Is there some workaround for this?

Thanks in advance.

I assume, from the way you’ve worded it, that you already tried correcting it with fsck.ext4?
Derek

If it’s all the same to you, can you provide a copy of your dmesg errors so we can see for ourselves whether it is the same issue?

So what happens if you unmount the drive, scan it with fsck, and then reboot? Does it go read-only again?

umount /media/mydrive
sudo fsck /dev/sda1

(Assuming your drive is called mydrive and that the partition is sda1; check with mount first.)
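For example, a quick way to confirm the device name and mount point before unmounting (“mydrive” and “sda1” above are only placeholders for whatever your system actually shows):

```shell
# List block devices with sizes and mount points; fall back to plain
# "mount" output if lsblk isn't available. tee keeps a copy you can
# paste into the thread if needed.
{ lsblk -o NAME,SIZE,TYPE,MOUNTPOINT 2>/dev/null || mount; } | tee /tmp/devices.txt
```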

If there is still a problem please post your dmesg:

dmesg | paste-log

Hi,

Sorry for such a late response.
After unmounting, fixing, and rebooting, here are the logs: logs

Does anybody have any ideas?

It looks like it may be a hardware problem. Check this thread: ext4 - "Journal has aborted"? [SOLVED] / Kernel & Hardware / Arch Linux Forums

A clip from your logs:

[   14.078960] sd 0:0:0:0: [sda] UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[   14.078988] sd 0:0:0:0: [sda] Sense Key : 0x5 [current] 
[   14.079002] sd 0:0:0:0: [sda] ASC=0x24 ASCQ=0x0 
[   14.079021] sd 0:0:0:0: [sda] CDB: opcode=0x2a 2a 08 1d 04 08 20 00 00 08 00
[   14.079037] blk_update_request: critical target error, dev sda, sector 486803488
[   14.079054] blk_update_request: critical target error, dev sda, sector 486803488
[   14.079103] Aborting journal on device sda1-8.
[   14.779255] EXT4-fs error (device sda1): ext4_journal_check_start:56: Detected aborted journal
[   14.779295] EXT4-fs (sda1): Remounting filesystem read-only

So it’s possibly the external USB enclosure that you’re using.

ok,

I used a different enclosure. After that, there is no fs error after reboot.

But when I try to copy a large file to the drive, the process fails under heavy load with the following: error logs.

I am not sure that both enclosures are damaged, and they work pretty well with the drive formatted as NTFS. Is it really an enclosure problem? I’m losing hope of getting this drive working with the RPi2…

What makes you sure the drive is not faulty?

There are loads of write errors before the kernel panic in your log:

[34212.490846] blk_update_request: I/O error, dev sda, sector 22011040
[34212.490862] EXT4-fs warning (device sda1): ext4_end_bio:332: I/O error -5 writing to inode 29097988 (offset 5242880000 size 3973120 starting block 2751410)
[34212.491030] EXT4-fs warning (device sda1): __ext4_read_dirblock:970: error -5 reading directory block (ino 29097985, block 0)
[34212.520296] EXT4-fs warning (device sda1): ext4_end_bio:332: I/O error -5 writing to inode 29097988 (offset 5242880000 size 3973120 starting block 2751440)
[34212.520482] EXT4-fs warning (device sda1): ext4_end_bio:332: I/O error -5 writing to inode 29097988 (offset 5242880000 size 3973120 starting block 2751470)
[34212.520702] EXT4-fs warning (device sda1): ext4_end_bio:332: I/O error -5 writing to inode 29097988 (offset 5242880000 size 3973120 starting block 2751500)
[34212.520902] EXT4-fs warning (device sda1): ext4_end_bio:332: I/O error -5 writing to inode 29097988 (offset 5242880000 size 3973120 starting block 2751530)
[34212.521088] EXT4-fs warning (device sda1): ext4_end_bio:332: I/O error -5 writing to inode 29097988 (offset 5242880000 size 3973120 starting block 2751560)
[34212.521267] EXT4-fs warning (device sda1): ext4_end_bio:332: I/O error -5 writing to inode 29097988 (offset 5242880000 size 3973120 starting block 2751590)
[34212.521444] EXT4-fs warning (device sda1): ext4_end_bio:332: I/O error -5 writing to inode 29097988 (offset 5242880000 size 3973120 starting block 2751620)

Seems like the drive may have some bad sectors or otherwise be faulty in some way.

Ok, can I check it somehow? I also read that remapping bad blocks is handled internally by the drive hardware nowadays, and the drive seemed to work fine when formatted with exFAT.

You can use fsck.ext4 to check the drive.
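If you want a check first that can’t make anything worse, e2fsck’s -n mode only reads. A sketch, demonstrated on a scratch image file so no real data is involved; on the actual drive you would point the same fsck.ext4 command at /dev/sdb1 after unmounting it:

```shell
PATH="$PATH:/sbin:/usr/sbin"    # fsck/mkfs often live in sbin

# Build a small throwaway ext4 image just to demonstrate the command safely
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=16 status=none
mkfs.ext4 -q -F /tmp/scratch.img

# -f: force a check even if the fs is marked clean
# -n: read-only, answer "no" to every repair prompt (changes nothing)
# -v: verbose summary
fsck.ext4 -fnv /tmp/scratch.img
```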

My advice: Back up everything you can from the drive before you do any extensive testing on it.
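Something as simple as this does for the copy (the paths here are just examples; rsync -a would do the same job and can resume if it stops partway):

```shell
# Stand-ins for the real mount point and backup destination
mkdir -p /tmp/mydrive /tmp/backup
echo "keep me" > /tmp/mydrive/file.txt

# -a preserves permissions, ownership and timestamps while copying
cp -a /tmp/mydrive/. /tmp/backup/
```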

Ok,

I executed

badblocks -wsv -o badblocks.txt /dev/sdb1
and there were no bad blocks.

Is there something else I can check to verify the drive?

I was thinking something like this (-y answers yes to all repair prompts, -c scans for bad blocks using badblocks, -v is verbose):

fsck.ext4 -ycv /dev/sdb1

Make sure the partition is unmounted first.
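Beyond fsck and badblocks, the drive’s own SMART data is worth a look. A sketch, assuming smartmontools is installed and that your USB bridge passes SMART through (many enclosures need the -d sat passthrough option; /dev/sdb is just the example device name from above, and you’ll likely need sudo on the real device):

```shell
DEV=/dev/sdb    # example name, adjust to your actual device

{
  if command -v smartctl >/dev/null && [ -b "$DEV" ]; then
    smartctl -H -d sat "$DEV"   # overall health verdict
    smartctl -A -d sat "$DEV"   # attribute table
  else
    echo "smartctl not installed or $DEV not present; run this on the Pi"
  fi
} | tee /tmp/smart_out.txt
```

If Reallocated_Sector_Ct or Current_Pending_Sector in the attribute table is nonzero and climbing, the drive itself is failing regardless of which enclosure it sits in.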