Steven Whitehouse wrote:
> As soon as you mix creation/deletion on one node with accesses (of
> whatever kind) from other nodes, you run this risk.
_ALL_ the GFS2 filesystems (bar one 5GB one for common config files,
etc.) are mounted one-node-only.
_ALL_ the GFS2 filesystems (with the same exception) are NFS exported.
NONE of the NFS-exported filesystems have local processes accessing them
except for backups(*), because there's a distinct and non-theoretical
risk of file corruption if anything other than NFSd touches an
NFS-exported filesystem (we've experienced it, and I've reproduced the
corruption on non-cluster systems).
Even Samba is a re-export from an NFS client, and I've been toying with
the idea of moving backups to an NFS client despite the network penalties.
(*) Backups run on the node where the filesystem is NFS exported.
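In case it helps anyone setting up the same arrangement, the exporting
node looks roughly like this - device names, paths and the fsid below are
placeholders, not our real config:

    # /etc/fstab on the one node that mounts the filesystem; localflocks
    # keeps flock/fcntl locking local, which is the usual advice when
    # knfsd is the only thing touching the filesystem
    /dev/clustervg/data  /export/data  gfs2  noatime,localflocks  0 0

    # /etc/exports - a fixed fsid so client file handles stay valid
    # if the export fails over to another node
    /export/data  *.example.com(rw,sync,fsid=25)

The other nodes leave the filesystem unmounted until failover.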
> Obviously you wouldn't be using a cluster filesystem if you didn't
> intend to have this kind of access from time to time, but anything that
> can be done at the application level to help improve locality will pay
> big dividends compared with any tuning that can be done at the fs/dlm
> level.
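For what it's worth, the way I read that advice (illustrative layout
only, not something from Steve's mail) is to give each node its own
subtree, so creates/deletes from different nodes never land in the same
directory:

    /gfs2/spool/node-a/   <- only node-a creates/deletes here
    /gfs2/spool/node-b/   <- only node-b creates/deletes here
    /gfs2/spool/shared/   <- read-mostly data, fine from either node

That way the DLM isn't bouncing the same directory locks between nodes
on every create/unlink.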
We originally installed this to run as pNFS/Samba/iSCSI file servers,
but after encountering the NFS corruption issues and finding out just
how much slower it gets if other nodes mount/access the filesystems, we
now just use GFS2 to ensure corruption-free failover.
Let's just say that what was promised was not what was delivered...