Re: “root account locked” after removing one RAID1 hard disc

On 30.11.20 at 12:10, Rudy Zijlstra wrote:

On 30-11-2020 11:31, Reindl Harald wrote:


On 30.11.20 at 10:27, antlists wrote:
I read that a single RAID1 device (the second is missing) can be accessed without any problems. How can I do that?

When a component of a raid disappears without warning, the raid will refuse to assemble properly on the next boot. You need to get to a command line and force-assemble it.
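(For reference, that force-assembly from a rescue or initramfs shell is roughly the following; /dev/md0 and /dev/sda1 are only placeholder names:)

mdadm --stop /dev/md0                              # stop a half-assembled, inactive array
mdadm --assemble --run --force /dev/md0 /dev/sda1  # start it even though a member is missing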

since when is it broken that way?

where should that command line come from when the operating system itself is on the RAID that is, for no valid reason, not assembling?

luckily no disks have died in the past few years, but on the office server 300 kilometers from here, with /boot, the OS and /data on RAID1, that was not true for at least 10 years:

* disk died
* boss replaced it and made sure the remaining disk is on the first SATA port
* power on
* machine booted
* I partitioned and added the new drive

hell, it's an ordinary situation for a RAID that a disk disappears without warning, because they tend to die from one moment to the next

hell, it's expected behavior to boot from the remaining disks, no matter whether it's RAID1, RAID10 or RAID5, as long as enough devices are present for the whole dataset

the only thing I expect in that case is that the boot takes a little longer while something waits for a timeout on the missing device / component


The behavior here in the post is rather Debian specific. The initrd from Debian refuses to continue if it cannot get all partitions mentioned in the fstab.

that is normal behavior, but it shouldn't apply to a RAID with a missing device, that's what the R in RAID is about :-)

On top of that I suspect an error in the initrd the OP is using, which leads to the raid not coming up with a single disk.

The problems from the OP have imho not much to do with raid, and a lot to do with Debian-specific issues, or perhaps a mistake by the OP
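For anyone who ends up at that Debian (initramfs) prompt, a partially assembled degraded array can usually be started by hand before continuing the boot; a rough sketch (/dev/md0 is just a placeholder):

cat /proc/mdstat        # the degraded array typically shows up as "inactive"
mdadm --run /dev/md0    # start it with whatever members are present
exit                    # leave the emergency shell and let the boot continue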

good to know, on Fedora I am used to not caring about missing RAID devices as long as there are enough remaining

there is some timeout which makes the boot take longer than usual, but in the end the machines come up as usual; mdmonitor fires a mail whining about the degraded RAID and that's it
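that mail comes from mdadm's monitor mode; on Fedora it is roughly just a MAILADDR line plus the mdmonitor service (the address below is obviously only an example):

# /etc/mdadm.conf -- where degraded/failure events get mailed to (example address)
MAILADDR raid-alerts@example.com

# Fedora/RHEL ship a unit that runs mdadm in monitor mode in the background
systemctl enable --now mdmonitor.service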

that behavior makes the difference between a trained monkey replacing the dead disk while the rest is done by me via ssh, and having real trouble that needs physical presence

typically I fire up my "raid-repair.sh", telling the script the source and target disk; it clones the partition table and MBR and finally adds the new partitions to start the rebuild
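under the hood such a script is nothing magic; for MBR disks it boils down to something like this (sda = surviving disk, sdb = new disk, md/partition numbers as in the layout below):

# clone the partition table from the surviving disk to the new one
sfdisk -d /dev/sda | sfdisk /dev/sdb
# copy only the MBR boot code (first 446 bytes), not the partition table
dd if=/dev/sda of=/dev/sdb bs=446 count=1
# add the new partitions so the arrays start rebuilding
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2
mdadm /dev/md2 --add /dev/sdb3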

[root@srv-rhsoft:~]$ df
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md1       ext4   29G  7.8G   21G   28% /
/dev/md2       ext4  3.6T  1.2T  2.4T   34% /mnt/data
/dev/md0       ext4  485M   48M  433M   10% /boot

[root@srv-rhsoft:~]$ cat /proc/mdstat
Personalities : [raid10] [raid1]
md1 : active raid10 sdc2[6] sdd2[5] sdb2[7] sda2[4]
      30716928 blocks super 1.1 256K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md2 : active raid10 sdd3[5] sdb3[7] sdc3[6] sda3[4]
      3875222528 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 2/29 pages [8KB], 65536KB chunk

md0 : active raid1 sdc1[6] sdd1[5] sdb1[7] sda1[4]
      511988 blocks super 1.0 [4/4] [UUUU]

unused devices: <none>


