Having duplicate PV problems, think there's a bug in LVM2 md component detection

I'm sorry if this is a FAQ or if I'm being stupid. I saw some mentions of this problem on the old mailing list, but they didn't seem to quite cover what I'm seeing, and I don't see an archive for this list yet. (And what on earth happened to the old list, anyway?)

My problem is this: I'm setting up a software RAID5 across five IDE drives. I'm running Debian Unstable with kernel 2.6.8-2-k7. I HAVE set md_component_detection to 1 in lvm.conf, and I wiped the drives after changing that setting.
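
For reference, the relevant part of my lvm.conf looks roughly like this (trimmed down from memory; the only line I actually changed from the Debian default is md_component_detection):

devices {
    dir = "/dev"
    scan = [ "/dev" ]

    # the setting that is supposed to make LVM skip raw md component devices
    md_component_detection = 1
}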

I originally set it up as a four-drive RAID via a 3Ware controller, so my original devices were sdb, sdc, sdd, and sde. (The machine also has a hardware RAID on an ICP Vortex SCSI controller; that's sda.) In this mode, it set up and built perfectly. LVM worked exactly as I expected it to: I had a test volume running, and all the queries and volume-management operations worked correctly. All was well.

So then I tried to add one more drive via the motherboard IDE controller, on /dev/hda. (Note that I stopped the array, wiped the first and last 100 MB of each drive, and rebuilt.) That's when the problems started. The RAID itself seems to build and work just fine, although I haven't waited the six or so hours it will take to finish resyncing; build speed is good and everything looks normal. But LVM blows up badly in this configuration.
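
In case it matters, this is roughly what I did for the wipe and rebuild (reconstructed from memory, so treat it as a sketch; DISK stands for each of hda, sdb, sdc, sdd and sde in turn):

# stop the old array
mdadm --stop /dev/md0

# zero the first 100 MB of each drive
dd if=/dev/zero of=/dev/DISK bs=1M count=100

# zero the last ~100 MB; size in 1K blocks comes from /proc/partitions
dd if=/dev/zero of=/dev/DISK bs=1M \
   seek=$(( $(awk '$4 == "DISK" {print $3}' /proc/partitions) / 1024 - 100 ))

# recreate the five-drive RAID5 with a 128k chunk
mdadm --create /dev/md0 --level=5 --chunk=128 --raid-devices=5 \
      /dev/hda /dev/sdb /dev/sdc /dev/sdd /dev/sde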

When I run pvcreate on /dev/md0, it succeeds... but if I then run pvdisplay I get a bunch of complaints:

jeeves:/etc/lvm# pvdisplay
Found duplicate PV y8pYTtAg0W703Sc8Wiy79mcWU3gHmCFc: using /dev/sde not /dev/hda
Found duplicate PV y8pYTtAg0W703Sc8Wiy79mcWU3gHmCFc: using /dev/sde not /dev/hda
--- NEW Physical volume ---
PV Name /dev/hda
VG Name
PV Size 931.54 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID y8pYTt-Ag0W-703S-c8Wi-y79m-cWU3-gHmCFc


It seems to think that /dev/hda is where the PV is, rather than /dev/md0.

(Note, again, I *HAVE* set md_component_detection to 1 in lvm.conf!)
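
(I suppose I could brute-force my way around it with an explicit filter in the devices section of lvm.conf, something like the untested sketch below that accepts only the md device and sda and rejects everything else, but I shouldn't have to do that if component detection were working.)

devices {
    # untested workaround sketch: never scan the raw component drives
    filter = [ "a|^/dev/md|", "a|^/dev/sda|", "r|.*|" ]
}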

I have erased, using dd, the first and last 100 MB or so of every drive, and I get exactly the same results every time. Even with all RAID and LVM metadata erased, if I use this list of drives:

/dev/hda
/dev/sdb
/dev/sdc
/dev/sdd
/dev/sde

with the Linux MD driver, LVM does not seem to work properly. I think the component detection is at least a little buggy. This is what my /proc/mdstat looks like:

jeeves:/etc/lvm# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sde[5] sdd[3] sdc[2] sdb[1] hda[0]
976793600 blocks level 5, 128k chunk, algorithm 2 [5/4] [UUUU_]
[=>...................] recovery = 6.2% (15208576/244198400) finish=283.7min speed=13448K/sec
unused devices: <none>
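
For what it's worth, the md superblock lives at the end of each component (at least with the default 0.90 format), so the start of the components is not hidden from LVM's scan. To confirm that the LVM2 label really is visible directly on /dev/hda and not just on /dev/md0, something like this ought to work; as I understand it the label normally sits in the second 512-byte sector, so both commands should print a LABELONE string if the label is being seen twice:

dd if=/dev/md0 bs=512 skip=1 count=1 2>/dev/null | strings | head
dd if=/dev/hda bs=512 skip=1 count=1 2>/dev/null | strings | head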


I realize that mixing IDE and SCSI drives in the same array is unusual, but I'm not really using SCSI drives; they only look like SCSI because they sit behind the 3Ware controller.

Again, this works FINE as long as I use only the (fake) SCSI devices; it doesn't wonk out until I add /dev/hda.

Any suggestions?  Is this a bug?

