Re: nolock and dlm nodes in the same cluster

Hi,

On Thu, 2011-01-13 at 22:39 +0100, Sven Karlsson wrote:
> G'day,
> 
> We have a GFS2 cluster on a fibre-SAN with three machines, of which
> one machine is used for remote backups.
> 
> The cluster contains a lot of small files, and the backup operation
> takes about a day to complete. When investigating, we found that the
> major performance bottleneck was the file locking operations. We
> stopped the cluster and mounted the backup-node with the lock_nolock
> option, and now backups were blazing.
> 
> After careful consideration of the nolock-warning in the documentation
> (i.e. corruption and kernel panics may happen), I wonder if that is
> still the case if spectator mode is used?
> 
Spectator mode is basically the same as a mount of a read-only block
device. It is intended to allow nodes which will never write to the
block device to mount the filesystem read-only without requiring a
journal.
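
As an illustration (the device path and mount point below are only
placeholders for whatever is used on the SAN), a spectator mount would
look something like:

  mount -t gfs2 -o spectator /dev/mapper/san-gfs2vol /mnt/gfs2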

Although there is nothing to stop you from using it in conjunction with
nolock, it was not intended to be used that way and it doesn't make
sense to do so.

> Or are there other options available? The files will not
> be modified by the other nodes during this time, so there is no actual
> need for file-level locking... but perhaps the DLM is also handling
> metadata and other locking that is necessary and it is therefore not
> possible to use nolock?
> 
> /Sven
> 
The dlm does not handle metadata; it only deals with the locking. Is the
issue really the dlm, or the fact that there were other nodes mounted
during the backup process?

There is no harm in unmounting the cluster filesystem on all nodes and
then mounting it on exactly one node with lock_nolock to back it up. The
only issue is that you have to be very careful with the commands you
issue, in order to be certain that the filesystem has not accidentally
been left mounted on one of the other cluster nodes.
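
For example, something along these lines (the device path and mount
point are placeholders):

  # on every cluster node: unmount, then confirm no gfs2 mount is left behind
  umount /mnt/gfs2
  grep gfs2 /proc/mounts   # should print nothing

  # then, on the backup node only, override the on-disk lock protocol
  mount -t gfs2 -o lockproto=lock_nolock /dev/mapper/san-gfs2vol /mnt/gfs2

Once the backup has finished, unmount the lock_nolock mount before the
other nodes remount with the normal lock_dlm protocol.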

I suspect, though, that it is not the dlm itself, but the overhead of
passing locks back and forth between the nodes that is the issue here,

Steve.


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

