LVM1 and software RAID1+0

Hi everyone. I recently set up a box running RedHat 8.0, which ships LVM
1.0.3. The box has four SCSI disks, /dev/sda through /dev/sdd, which I
set up in a RAID1+0 configuration:

/dev/sda ---\
             raid1: /dev/md0 ---\
/dev/sdb ---/                    \
                                  raid0: /dev/md2
/dev/sdc ---\                    /
             raid1: /dev/md1 ---/
/dev/sdd ---/
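
For reference, the arrays were assembled roughly like this (invocations
reconstructed from memory; chunk size and other options omitted):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1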

Ok so this was working great... I ran

pvcreate /dev/md2           # initialize the stripe as a physical volume
vgcreate system /dev/md2    # create volume group "system" on it
lvcreate .......            # carve out the logical volumes

This all worked great: I was able to mkfs on the LVs, mount them, use
them, etc.
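
Purely for concreteness, the sort of thing that worked (LV name and
size here are hypothetical):

lvcreate -L 4G -n home system    # hypothetical size and LV name
mkfs -t ext3 /dev/system/home
mount /dev/system/home /home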

So I rebooted.

Upon rebooting, vgscan refused to find any VGs. I ran 'pvscan' and it
showed both /dev/md0 and /dev/md2 as members of 'system'. That makes
sense in hindsight: the first stripe chunk of /dev/md2 lives at the
start of /dev/md0, so the PV label written through md2 is also visible
on md0, and LVM sees a duplicate. [ding] The light went on: I remembered
that with LVM2 I had to create a device filter for situations like this.
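
In LVM2 that's a couple of lines in /etc/lvm/lvm.conf, something along
these lines (the regexes are just an illustration for this layout):

devices {
    # accept only the top-level stripe; reject its RAID1 components
    filter = [ "a|^/dev/md2$|", "r|^/dev/md[01]$|" ]
}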

So I searched for the equivalent solution for LVM1, to no avail. The
only way I've found around this is to wrap 'vgscan' in a shell script
that does this:

# hide the RAID1 component so vgscan only sees the PV label on /dev/md2
mv /dev/md0 /dev/md0.brb
vgscan "$@"
mv /dev/md0.brb /dev/md0

Which really sucks, and is totally hacky. It works, but I'm not happy
about it.
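
For completeness, the whole wrapper is essentially this (script name
hypothetical):

#!/bin/sh
# vgscan-wrap -- hypothetical name for the workaround script.
# Hide the RAID1 component device so LVM1's vgscan only finds the PV
# label on the top-level stripe, then put the device back and pass
# vgscan's exit status through.
mv /dev/md0 /dev/md0.brb
vgscan "$@"
status=$?
mv /dev/md0.brb /dev/md0
exit $status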

Is there anything I can do about this? I know LVM1, and especially
1.0.3, isn't exactly "current", but I'd still like to see this
documented somewhere.

Thanks
-cb
