Re: “root account locked” after removing one RAID1 hard disc

On 30/11/2020 12:13, Reindl Harald wrote:


On 30.11.20 at 13:00, Wols Lists wrote:
On 30/11/20 10:31, Reindl Harald wrote:
since when is it broken that way?

where should that command line come from, when the operating system
itself is on the RAID that is, for no valid reason, not assembling?
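
For anyone following along: in that situation the command line normally comes from the initramfs emergency shell, which is where you land when the root array refuses to assemble. A minimal sketch of the manual step, assuming the half-assembled array shows up as /dev/md127 (device names will vary):

    # inside the initramfs emergency shell
    cat /proc/mdstat            # the array is listed as inactive
    mdadm --run /dev/md127      # start it degraded with whatever members are present
    exit                        # on a dracut-style initramfs this usually resumes the boot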

luckily no disks have died in the past few years, but on the office server
300 kilometers from here, with /boot, the OS and /data on RAID1, that was
not true for at least 10 years

* disk died
* boss replaced it and made sure the remaining disk is on the first SATA port
* power on
* machine booted
* I partitioned and added the new drive (rough sketch below)
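
To make that last step concrete, a rough sketch of "partitioned and added", assuming the surviving disk is /dev/sda, the new blank disk is /dev/sdb and the array is /dev/md0 (all placeholders), with an MBR partition table (GPT would need sgdisk instead):

    sfdisk -d /dev/sda | sfdisk /dev/sdb       # copy the partition table from the survivor
    mdadm --manage /dev/md0 --add /dev/sdb1    # add the new partition; the array starts rebuilding
    cat /proc/mdstat                           # watch the recovery progress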

hell, it's an ordinary situation for a RAID that a disk disappears
without warning, because disks tend to die from one moment to the next

hell, it's expected behavior to boot from the remaining disks, no matter
whether RAID1, RAID10 or RAID5, as long as enough of them are present for
the whole dataset

the only thing I expect in that case is that it takes a little longer to
boot, while something waits for a timeout on the missing device /
component

So what happened? The disk failed, you shut down the server, the boss
replaced it, and you rebooted?

in most cases smartd shouts a warning, and the machine is powered down *without* removing the partitions from the RAID devices
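
(As an aside, the smartd setup that sends those warnings is only a line or two; a minimal sketch, with a placeholder mail address:

    # /etc/smartd.conf - monitor all disks, mail when a disk starts failing
    DEVICESCAN -a -m admin@example.com -M daily
)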

And? The partitions have nothing to do with it.

The disk failed, the system was shut down, THE SUPERBLOCK WAS UPDATED!
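
That update is visible in the metadata on every remaining member. A sketch of how to look at it, assuming 1.x metadata and a member partition /dev/sda1 (placeholder):

    mdadm --examine /dev/sda1
    # among other fields it prints a line along the lines of:
    #    Array State : A. ('A' == active, '.' == missing, 'R' == replacing)
    # i.e. the superblock itself already records that the second slot was gone
    # the last time the array was written.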

the disk with SMART alerts is replaced by a blank, unpartitioned one

the remaining disk is put on the first SATA port to make sure the first disk found by the BIOS is not the new blank one

In that case I would EXPECT the system to come back - the superblock
matches the disks, the system says "everything is as it was", and your
degraded array boots fine.
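
(What "boots fine, degraded" looks like, roughly - an illustrative /proc/mdstat excerpt with placeholder names:

    md0 : active raid1 sda2[0]
          ... blocks super 1.2 [2/1] [U_]

[2/1] means two slots with one member present, and [U_] shows which one is missing.)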

correct, RAID comes up degraded

EXCEPT THAT'S NOT WHAT IS HAPPENING HERE.

The - fully functional - array is shut down.

A disk is removed.

On boot, reality and the superblock DISAGREE. In which case the system
takes the only sensible route, screams "help!", and waits for MANUAL
INTERVENTION.

but I fail to see the difference, and to understand why reality and the superblock disagree

In YOUR case the array was degraded BEFORE shutdown. In the OP's case, the array was degraded AFTER shutdown.

it shouldn't matter how and when a disk is removed; it's not there, so what, as long as there are enough disks to bring the array up

FFS - how on earth is the system supposed to update the superblock if it's SWITCHED OFF!?

in my case the fully functional array is shut down too, by shutting down the machine; after that one disk is replaced, and when the RAID comes up a disk is logically missing, because in its place is a blank one without any partitions

That's why you only have to force a degraded array to boot once - once
the disks and superblock are back in sync, the system assumes the ops
know about it.

I still don't get how that happens and why.

Just ask yourself this simple question. "Did the array change state BETWEEN SHUTDOWN AND BOOT?". In *your* case the answer is "no", in the OP's case it is "yes". And THAT is what matters - if the array is degraded at boot, but was fully functional at shutdown, the raid system screams for help.
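
And once the degraded array has been forced up that one time, it's easy to confirm that disks and superblock are back in sync - a sketch, assuming the array is /dev/md0:

    mdadm --detail /dev/md0
    # "State : clean, degraded" plus a slot shown as "removed" means the metadata
    # now records the missing disk, so the next boot comes up degraded on its own,
    # with no manual intervention needed.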

Cheers,
Wol


