On 30-11-2020 14:53, Reindl Harald wrote:
On 30.11.20 at 14:47, antlists wrote:
On 30/11/2020 13:16, Reindl Harald wrote:
On 30.11.20 at 14:11, antlists wrote:
On 30/11/2020 12:13, Reindl Harald wrote:
but I fail to see the difference, and to understand why reality and
the superblock disagree,
In YOUR case the array was degraded BEFORE shutdown. In the OP's
case, the array was degraded AFTER shutdown
no, no and no again!
* the array is fully operational
* smartd fires a warning
Ahhh ... but you said in your previous post(s) "the disk died". Not
that it was just a warning.
* the machine is shut down
* after that the drive is replaced
* so the array gets degraded AFTER shutdown
* at power-on RAID partitions are missing
But we've had a post in the last week or so from someone whose array
behaved exactly as I described. So I wonder what's going on ...
I need to get my test system up so I can play with this sort of
thing...
and that's why i asked since when it's that broken
I expect a RAID to simply come up as if nothing happened, as long as there
are enough disks remaining to hold the complete dataset
it's also not uncommon that a disk dies between power-cycles, i.e. it
simply doesn't come up again, which is the same as replacing it while the
machine is powered off
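A minimal sketch of what that expectation looks like from the command line, assuming an md array `/dev/md0` whose surviving member is `/dev/sda1` (the device names are placeholders, not taken from this thread): even with one member missing at power-on, mdadm can start the array degraded.

```shell
# Assemble and start the array even though a member is missing;
# --run starts it degraded instead of waiting for the absent disk.
mdadm --assemble --run /dev/md0 /dev/sda1

# Or let mdadm pick up everything listed in mdadm.conf:
mdadm --assemble --scan --run

# Verify it came up, just degraded (State should read "clean, degraded"):
mdadm --detail /dev/md0
cat /proc/mdstat
```

This is what a working initramfs is expected to do automatically after a timeout, with no operator interaction.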
I replaced a ton of disks in Linux RAID1/RAID10 setups over the years
that way, and in some cases I cloned machines by putting 2 out of 4 RAID10
disks in a new machine and inserting 2 blank disks in both:
* spread the disks between both machines
* power on
* login via SSH
* start rebuilding the array on both
* change hostname and network config of one
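The steps above can be sketched roughly like this, run on each machine after power-on (the array name, partitions, and hostname below are illustrative assumptions, not details from the thread):

```shell
# On each machine: the RAID10 comes up degraded with its 2 surviving
# members; add partitions from the blank disks to kick off the rebuild.
mdadm --assemble --run /dev/md0   # start degraded from the 2 old disks
mdadm /dev/md0 --add /dev/sdc1    # first blank disk, resync begins
mdadm /dev/md0 --add /dev/sdd1    # second blank disk

# Watch the resync progress:
cat /proc/mdstat

# On ONE of the two machines only: give it a new identity.
hostnamectl set-hostname new-box  # or edit /etc/hostname directly
# ...then adjust the network config (distro-specific).
```

The point of the procedure is that no step before `--add` requires manual intervention: the degraded arrays are expected to assemble on their own at boot.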
for me this is an ordinary event that a RAID has to cope with, without any
interaction, *before* booting to a normal OS state, and if it doesn't,
that's a serious bug
Same thing here...
which is also why I am saying that, in addition to the normal behaviour of
the debian initrd, I think the OP has made another mistake. This should
just work.