After a 3 TB disk failure (and the loss of a precious library), I've decided on a RAID1 array.
RAID1 works if I assemble and mount it manually (mdadm --assemble --scan). However, when I reboot the RPi3, the array won't assemble itself, and therefore it isn't mounted via fstab.
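For reference, this is roughly what I do by hand after each boot, plus the sort of fstab line involved (the filesystem type and mount options here are just an example, not necessarily exactly what I use):

mdadm --assemble --scan
mount /dev/md0 /mnt/zunanji3T

# fstab entry, sketch only - nofail keeps the Pi booting even if the array is missing:
/dev/md0  /mnt/zunanji3T  ext4  defaults,nofail  0  2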
I have a feeling it has something to do with the startup service.
At the end of the mdadm installation it gives me this warning:
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
If I run service mdadm-raid restart after SSH-ing into the RPi3, it works. If I put the same command in cron as an @reboot job, it doesn't work.
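The cron entry itself was nothing fancy, roughly this in root's crontab (sketch; the path to service may differ on your system):

@reboot /usr/sbin/service mdadm-raid restart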
Here's mdadm.conf:
# mdadm.conf
# Please refer to mdadm.conf(5) for information about this file.

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=2a3a1d20:047e02dd:0b6c6e95:115313ec name=localhost.localdomain:0

# This configuration was auto-generated on Sun, 27 Mar 2016 16:56:25 +0200 by mkconf
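In case it matters, the ARRAY line above can be cross-checked against what mdadm itself reports once the array is assembled (this only prints the line, it doesn't change anything):

mdadm --detail --scan
# ARRAY /dev/md/0 metadata=1.2 name=localhost.localdomain:0 UUID=2a3a1d20:047e02dd:0b6c6e95:115313ec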
Halp 
Does
modprobe raid1
mdadm --assemble --scan --verbose
work at all? What's the output?
Also make sure /dev/md gets created; sometimes it says it works, but it hasn't.
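e.g. a quick check along these lines:

ls -l /dev/md*     # the device nodes should exist
cat /proc/mdstat   # the array should show up as active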
Here’s the output:
root@osmc:/home/osmc# modprobe raid1
root@osmc:/home/osmc# mdadm --assemble --scan --verbose
mdadm: looking for devices for /dev/md/0
mdadm: no RAID superblock on /dev/sdb
mdadm: no RAID superblock on /dev/sda
mdadm: no RAID superblock on /dev/mmcblk0p2
mdadm: no RAID superblock on /dev/mmcblk0p1
mdadm: no RAID superblock on /dev/mmcblk0
mdadm: no RAID superblock on /dev/ram15
mdadm: no RAID superblock on /dev/ram14
mdadm: no RAID superblock on /dev/ram13
mdadm: no RAID superblock on /dev/ram12
mdadm: no RAID superblock on /dev/ram11
mdadm: no RAID superblock on /dev/ram10
mdadm: no RAID superblock on /dev/ram9
mdadm: no RAID superblock on /dev/ram8
mdadm: no RAID superblock on /dev/ram7
mdadm: no RAID superblock on /dev/ram6
mdadm: no RAID superblock on /dev/ram5
mdadm: no RAID superblock on /dev/ram4
mdadm: no RAID superblock on /dev/ram3
mdadm: no RAID superblock on /dev/ram2
mdadm: no RAID superblock on /dev/ram1
mdadm: no RAID superblock on /dev/ram0
mdadm: /dev/sdb1 is identified as a member of /dev/md/0, slot 0.
mdadm: /dev/sda1 is identified as a member of /dev/md/0, slot 1.
mdadm: added /dev/sda1 to /dev/md/0 as 1
mdadm: added /dev/sdb1 to /dev/md/0 as 0
mdadm: /dev/md/0 has been started with 2 drives.
/dev/md gets created every time I manually run mdadm --assemble --scan, and I can also mount it:
root@osmc:/home/osmc# mount /dev/md
md/ md0
root@osmc:/home/osmc# mount /dev/md0 /mnt/zunanji3T/
root@osmc:/home/osmc# ls -lha /mnt/zunanji3T/
total 108K
drwxrwxrwx 7 999 999 4.0K Feb 7 09:46 .
drwxr-xr-x 3 root root 4.0K Mar 27 16:01 ..
drwxrwxrwx 2 osmc osmc 16K Feb 6 10:46 lost+found
drwxrwxrwx. 117 osmc osmc 12K Mar 22 17:28 moviz
drwxrwxrwx. 239 osmc osmc 44K Mar 19 06:32 p
drwxrwxrwx. 10 osmc osmc 4.0K Mar 6 06:04 series
drwxrwxrwx. 2 osmc osmc 4.0K Feb 6 21:57 test
I think something happens during bootup that prevents mdadm from assembling the array, but I lack the knowledge to fix it…
Check that the service runs.
Otherwise, it could be spin-up time related.
- mdadm-raid.service - LSB: MD array assembly
Loaded: loaded (/etc/init.d/mdadm-raid)
Active: active (exited) since Sun 2016-03-27 22:39:50 CEST; 6min ago
Process: 200 ExecStart=/etc/init.d/mdadm-raid start (code=exited, status=0/SUCCESS)
However, the RAID isn't assembled. If I start the service manually, it works.
I've also tried putting Required-Start: $all into the init script's LSB header, so it would start as the last service, but no luck…
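That was just an edit to the LSB header at the top of /etc/init.d/mdadm-raid, roughly like this (sketch; only the line I changed is shown, the rest of the header stays as shipped):

### BEGIN INIT INFO
# Provides:          mdadm-raid
# Required-Start:    $all
# ...
### END INIT INFO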
If the service starts correctly, its status should be:
- mdadm-raid.service - LSB: MD array assembly
Loaded: loaded (/etc/init.d/mdadm-raid)
Active: active (exited) since Sun 2016-03-27 22:51:44 CEST; 17s ago
Process: 742 ExecStart=/etc/init.d/mdadm-raid start (code=exited, status=0/SUCCESS)
Mar 27 22:51:44 osmc mdadm-raid[742]: Assembling MD array md0…done (started [2/2]).
Mar 27 22:51:44 osmc mdadm-raid[742]: Generating udev events for MD arrays…done.
What am I missing?
Nope, not sure. I would check udevadm as well. What happens if you plug the disks in after booting?
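e.g. watch the block events while plugging the disks in, something like:

udevadm monitor --udev --subsystem-match=block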
According to syslog, the mdadm service runs before udisks-glue; that's probably why it doesn't work. Now I have to figure out how to sort this…
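One idea I haven't tried yet (so treat it as an untested sketch, and it assumes udisks-glue actually runs as a unit with that name) would be a systemd drop-in to force the ordering:

# /etc/systemd/system/mdadm-raid.service.d/order.conf  (untested sketch)
[Unit]
After=udisks-glue.service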
Suggestions would be greatly appreciated…
After bashing my head against the keyboard (I even went to the second page of Google results!), I finally got it to work.
Your suggestion about spin-up time, @sam_nazarko, gave me something to think about. After thorough research and scripting, when I had almost given up, I tried one last thing: putting rootdelay=10 into cmdline.txt
… It works…
Thank you… 
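For anyone who lands here later: rootdelay just makes the kernel wait before mounting the root filesystem, which in practice also gives the USB disks time to spin up and be detected before mdadm-raid runs. cmdline.txt is a single line; I only appended rootdelay=10 at the end and left the existing options alone, roughly:

<existing options> rootdelay=10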