On Sun, 2005-11-06 at 22:27 +0100, Marco Masotti wrote:
> > ==========================
> > Date: Sun, 06 Nov 2005 15:33:09 +1000
> > From: Sean Boyd <sboyd@xxxxxxxxxx>
> > To: linux clustering <linux-cluster@xxxxxxxxxx>
> > Cc: masotti@xxxxxxxxx
> > Subject: Re: Error locking on node, Internal lvm error, when
> > creating logical volume
> > ==========================
> > [...]
>
> Thank you for your reply.
>
> It looks like I missed asking myself the question "where is your
> underlying network block device system?". Banal as it may seem, it
> simply was not there yet, and the two nodes were not sharing any data
> blocks at all! As a result, the volume group was not found on the
> other cluster member. If deemed useful, that could be added as a hint
> to some future troubleshooting guide.
>
> In my revised setup, the data blocks were then supplied by an
> iSCSI-based SAN, with initiator and target (open-iscsi and iSCSI
> Enterprise Target respectively) still running on the cluster's two
> virtual machines.
>
> It is worth mentioning that performance was pretty good on this
> 366MHz dual Celeron physical host, with
>   dd if=/dev/zero of=./gfs/somedata bs=1M count=4096
> running at roughly 8 MByte/s.
>
> > This may have to do with the fact that kpartx is not integrated
> > into the rc.sysinit script. Therefore there are no mapped
> > partitions in the /etc/lvm/.cache.
> >
> > Make sure your lvm.conf has filters for the disks associated with
> > the dm device.
> >
> > Make sure you have run kpartx -a (to map dm partitions) then
> > restart the clvm daemon (to populate the cache with the
> > partitions).
> >
> > This is an issue with RHEL4 U2. I haven't checked FC4.
> >
> > --Sean
>
> I cannot find any kpartx executable in my software loads; can you
> please tell me which package it is part of?

I may have assumed too much. You're not multipathing to the storage,
are you?

In any event, I have seen this issue when the underlying devices
weren't in the /etc/lvm/.cache file. Once the block devices and their
partitions were configured correctly, a restart of the clvm daemon
populated the lvm cache and fixed the issue.

HTH
--Sean

--
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
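For anyone reconstructing Marco's revised setup, here is a minimal
sketch of the two iSCSI pieces he mentions, assuming iSCSI Enterprise
Target on the machine exporting the storage and open-iscsi on each
cluster node; the IQN, portal address and backing device below are
made-up placeholders, and the exact iscsiadm syntax varies between
open-iscsi releases.

    # Target side (iSCSI Enterprise Target), /etc/ietd.conf:
    # export one LUN backed by a local block device (example device only)
    Target iqn.2005-11.example.com:cluster.disk0
        Lun 0 Path=/dev/hdb1,Type=fileio

    # Initiator side (open-iscsi), on each cluster node:
    # discover the target and log in; the LUN then shows up as a /dev/sdX disk
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2005-11.example.com:cluster.disk0 -p 192.168.1.10 --login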
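On the lvm.conf advice quoted above, this is the kind of filter stanza
Sean is referring to; the patterns are purely illustrative and must be
adjusted to whichever devices actually carry your physical volumes.

    # /etc/lvm/lvm.conf (illustrative only)
    devices {
        # accept device-mapper partitions and SCSI/iSCSI disks, reject everything else
        filter = [ "a|^/dev/mapper/.*|", "a|^/dev/sd.*|", "r|.*|" ]
    }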
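And a condensed version of the kpartx-then-restart recipe from Sean's
earlier message, assuming a dm device named /dev/mapper/mpath0 purely
for illustration:

    kpartx -a /dev/mapper/mpath0    # map the partitions on the dm device
    service clvmd restart           # clvmd rescans and repopulates /etc/lvm/.cache
    grep mpath0 /etc/lvm/.cache     # the mapped partitions should now be listed
    vgscan                          # the volume group should now be found on this node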
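As for Marco's question about which package ships kpartx: if memory
serves it comes from the multipath tools (device-mapper-multipath on
RHEL4, multipath-tools on other distributions), which is easy to verify
on a box where it is installed:

    rpm -qf $(which kpartx)                          # which installed package owns the binary
    rpm -ql device-mapper-multipath | grep kpartx    # assumes that package name; adjust per distro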