Re: Patch to fix boot from RAID-1 partitioned arrays

Heh, sounds analogous to a quirk with Btrfs, where an additional 'hook' is required, or "btrfs" must be included in MODULES= in the initrd configuration file (apparently to trigger some (e)udev rule). Most Linux distros take care of this, but not Arch (or its derivatives). Hate to even think of incorporating this into the kernel....

On 2021-05-12 6:56 a.m., Geoff Back wrote:

The problem is not with in-kernel assembly and starting of the array -
that works perfectly well.  However, when the 'md' driver assembles and
runs the partitionable array device (typically /dev/md_d0) it only
causes the array device itself to get registered with the block layer.
The assembled array is not then scanned for partitions until something
accesses the device, at which point the pending GD_NEED_PART_SCAN flag
in the array's block-device structure causes the probe for a partition
table to take place and the partitions to become accessible.

Without my patch, there does not appear to be anything between array
assembly and root mount that will access the array and cause the
partition probe operation.

To be clear, the situation is that the array has been fully assembled
and activated but the partitions it contains remain inaccessible.
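For anyone hitting this without the patch, a userspace workaround is possible from an initramfs: simply opening the device node (which clears the pending GD_NEED_PART_SCAN flag and triggers the probe) or explicitly asking the kernel to re-read the partition table. A minimal sketch of such a hook script, assuming the array appears as /dev/md_d0 (the device name is an assumption; adjust for your setup):

```shell
#!/bin/sh
# Hypothetical initramfs hook: force the partition scan on a
# partitionable md array before the root filesystem is mounted.
MD_DEV=/dev/md_d0   # assumed device name; adjust as needed

if [ -b "$MD_DEV" ]; then
    # Any open of the block device triggers the pending partition
    # scan; blockdev --rereadpt makes the intent explicit by issuing
    # the BLKRRPART ioctl.
    blockdev --rereadpt "$MD_DEV"
fi
```

This is only a stopgap in early userspace, not a substitute for the in-kernel fix the patch provides, since it depends on the initramfs running before the root mount is attempted.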

Thanks,

Geoff.
