There is a bug in lvm pvs that causes us to not recognize VGs using PVs on
mdraid, see bug 620745. This patch works around it. I've verified that this
patch fixes detection of a default Intel Firmware RAID install in rescue
mode. Whether we actually need to apply this patch depends on whether the
lvm team can fix the underlying cause quickly.
---
 data/70-anaconda.rules |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/data/70-anaconda.rules b/data/70-anaconda.rules
index 9375081..3cc85e2 100644
--- a/data/70-anaconda.rules
+++ b/data/70-anaconda.rules
@@ -21,6 +21,8 @@ ENV{ID_FS_TYPE}=="isw_raid_member", IMPORT{program}="$env{ANACBIN}/mdadm --exami
 
 # probe metadata of LVM2 physical volumes
 ENV{ID_FS_TYPE}=="LVM2_member", IMPORT{program}="$env{ANACBIN}/lvm pvs --ignorelockingfailure --units k --nosuffix --nameprefixes --rows --unquoted --noheadings -opv_name,pv_uuid,pv_size,pv_pe_count,pv_pe_alloc_count,pe_start,vg_name,vg_uuid,vg_size,vg_free,vg_extent_size,vg_extent_count,vg_free_count,pv_count $tempnode"
+# Work around lvm bug 620745
+ENV{ID_FS_TYPE}=="LVM2_member", KERNEL=="md*", IMPORT{program}="$env{ANACBIN}/lvm pvs --ignorelockingfailure --units k --nosuffix --nameprefixes --rows --unquoted --noheadings -opv_name,pv_uuid,pv_size,pv_pe_count,pv_pe_alloc_count,pe_start,vg_name,vg_uuid,vg_size,vg_free,vg_extent_size,vg_extent_count,vg_free_count,pv_count $tempnode"
 ENV{ID_FS_TYPE}=="LVM2_member", IMPORT{program}="$env{ANACBIN}/lvm pvs --ignorelockingfailure --units k --nosuffix --nameprefixes --rows --unquoted --noheadings -olv_name,lv_uuid,lv_size,lv_attr $tempnode"
 
 LABEL="anaconda_end"
-- 
1.7.0.1
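
P.S. For anyone who wants to double-check this on their own box, the rule can
be exercised without re-running anaconda by replaying the udev event for the
md node. A minimal sketch, assuming the BIOS RAID set shows up as /dev/md127
(adjust to the actual node on your system):

  # Simulate udev rule processing for the md device and inspect the output
  udevadm test $(udevadm info --query=path --name=/dev/md127)

With the patch applied, the output should show the IMPORT{program} from
70-anaconda.rules running lvm pvs for the md device, since the new
KERNEL=="md*" match now applies to it.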