On 30/11/20 10:31, Reindl Harald wrote:
> since when is it broken that way?
>
> from where should that command line come from when the operating system
> itself is, for no valid reason, not assembling the RAID?
>
> luckily the past few years no disks died, but on the office server 300
> kilometers from here with /boot, OS and /data on RAID1 this was not true
> for at least 10 years
>
> * disk died
> * boss replaced it and made sure the remaining disk is on the first
>   SATA port
> * power on
> * machine booted
> * me partitioned and added the new drive
>
> hell, it's an ordinary situation for a RAID that a disk disappears
> without warning, because they tend to die from one moment to the next
>
> hell, it's expected behavior to boot from the remaining disks, no matter
> RAID1, RAID10 or RAID5, as long as enough are present for the whole
> dataset
>
> the only thing I expect in that case is that booting takes a little
> longer while something waits for a timeout on the missing device /
> component

So what happened? The disk failed, you shut down the server, the boss
replaced it, and you rebooted? In that case I would EXPECT the system to
come back - the superblock matches the disks, the system says "everything
is as it was", and your degraded array boots fine.

EXCEPT THAT'S NOT WHAT IS HAPPENING HERE. The - fully functional - array
is shut down. A disk is removed. On boot, reality and the superblock
DISAGREE. In that case the system takes the only sensible route: it
screams "help!" and waits for MANUAL INTERVENTION.

That's why you only have to force a degraded array to boot once - once the
disks and superblock are back in sync, the system assumes the ops know
about it.

Cheers,
Wol
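
P.S. For anyone landing here who actually needs to do that manual
intervention: a rough sketch of what it looks like from the initramfs
emergency shell. The array name md0, the surviving disk sda and the
replacement sdb are assumptions - substitute your own, and note your
distro's initramfs may want a slightly different incantation:

  # start the degraded array despite the missing member
  mdadm --run /dev/md0

  # or, if the array was never assembled at all, assemble and
  # force it to run with only the surviving member present
  mdadm --assemble --run /dev/md0 /dev/sda1

  # later, after partitioning the replacement disk to match
  # (e.g. sfdisk -d /dev/sda | sfdisk /dev/sdb), add it back
  # and let the array resync
  mdadm --manage /dev/md0 --add /dev/sdb1

Once the resync finishes, the superblock and the disks agree again, which
is exactly why the forcing step is a one-off.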