On Wed, Nov 25, 2009 at 09:30:17AM -0800, Ray Van Dolson wrote:
> Noticed that on bootup of a cluster node using cLVM and GFS2 I see the
> following:
>
>   Scanning logical volumes
>   connect() failed on local socket: Connection refused
>   WARNING: Falling back to local file-based locking.
>   Volume Groups with the clustered attribute will be inaccessible.
>   Reading all physical volumes. This may take a while...
>   Found volume group "VolGroup00" using metadata type lvm2
>   Activating logical volumes
>   connect() failed on local socket: Connection refused
>   WARNING: Falling back to local file-based locking.
>   Volume Groups with the clustered attribute will be inaccessible.
>   2 logical volume(s) in volume group "VolGroup00" now active
>
> VolGroup00 is a VG on the local disk and is not shared.
>
> Later, cluster services start (including clvmd) and the clustered
> volumes and associated filesystems come up fine.
>
> I'm assuming the errors above appear because clvmd isn't running yet.
> Can they be safely ignored? Is there any way to configure things so my
> clustered volumes aren't scanned until after clvmd starts? I know I
> can edit the filter settings in lvm.conf, but I don't see any way to
> specify that some block devices should be skipped until clvmd is
> running.

Hmm, thinking out loud here. I am using multipath, and I seriously
doubt multipath is running when the errors above are printed. My
filter in lvm.conf is as follows:

  filter = [ "a|/dev/mapper/.*|", "a|/dev/hd[a-z].*|", "r|/dev/sd[a-z].*|" ]

That should filter out my FC-attached devices in favor of the
corresponding entries under /dev/mapper. So I guess that because I
have locking_type 3 set, my local (non-clustered) hard drive is still
scanned, clustered locking is attempted on it, and that attempt fails.

So in the end it's probably fine that LVM falls back to local
file-based locking, and I should just ignore these "errors".

Ray
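
For reference, here is a minimal sketch of the lvm.conf pieces in play.
The filter line is the one posted above; the locking settings and the
comments are illustrative assumptions about LVM2 of that era, not
necessarily the exact config in use here:

  # /etc/lvm/lvm.conf (sketch, not a drop-in configuration)

  devices {
      # Accept multipath devices and local IDE disks; reject the raw
      # FC-attached /dev/sd* paths that multipath aggregates.
      filter = [ "a|/dev/mapper/.*|", "a|/dev/hd[a-z].*|", "r|/dev/sd[a-z].*|" ]
  }

  global {
      # 3 = clustered locking through clvmd.
      locking_type = 3

      # When clvmd is unreachable (e.g. early in boot, before the
      # cluster services start), fall back to file-based locking so
      # non-clustered VGs such as VolGroup00 can still activate.
      # This fallback is what prints the WARNING above; clustered
      # VGs remain inaccessible until clvmd is up.
      fallback_to_local_locking = 1
  }

You can confirm which VGs carry the clustered attribute with
"vgs -o vg_name,vg_attr" -- a "c" in the sixth attribute character
marks a clustered VG, and only those depend on clvmd. Restricting
boot-time activation (for example via the activation/volume_list
setting in lvm.conf) is another general way to keep particular VGs
from being touched before clvmd starts.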