> Hi,
>
> As http://docs.ceph.com/docs/master/cephfs/nfs/ says, it's OK to
> configure active/passive NFS-Ganesha to use CephFS. My question is
> whether we can use active/active nfs-ganesha for CephFS.

(Apologies if you get two copies of this. I sent an earlier one from
the wrong account and it got stuck in moderation.)

You can, with the new rados_cluster recovery backend that went into
ganesha v2.7. See here for a bit more detail:

https://jtlayton.wordpress.com/2018/12/10/deploying-an-active-active-nfs-cluster-over-cephfs/

...also have a look at the ceph.conf file in the ganesha sources.

> In my view, state consistency is the only thing we need to think
> about:
>
> 1. Lock support for active/active. Even though each nfs-ganesha
> server maintains its own lock state, the real lock/unlock calls go
> through ceph_ll_getlk/ceph_ll_setlk, so the Ceph cluster will
> handle locking safely.
>
> 2. Delegation support for active/active. Similar to question 1:
> ceph_ll_delegation will handle it safely.
>
> 3. nfs-ganesha cache support for active/active. As
> https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/ceph.conf
> describes, we can configure the cache size as 1.
>
> 4. Ceph-FSAL cache support for active/active. Like any other CephFS
> client, there are no cache-consistency issues.

The basic idea with the new recovery backend is to have the different
NFS ganesha heads coordinate their recovery grace periods to prevent
stateful conflicts. The one thing missing at this point is delegations
in an active/active configuration, but that's mainly because of the
synchronous nature of libcephfs. We have a potential fix for that
problem, but it requires work in libcephfs that is not yet done.

I've appended a few rough sketches below my sig to make the above
concrete.

Cheers,
-- 
Jeff Layton <jlayton@xxxxxxxxxxxxxxx>
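For anyone who wants to try this, the relevant ganesha.conf pieces
look roughly like the following. This is a minimal, untested sketch:
the pool, namespace, and nodeid values are placeholders you'd choose
for your own cluster, and every ganesha head must use a unique nodeid.

NFSv4 {
	# Use the clustered RADOS recovery backend (new in 2.7)
	RecoveryBackend = rados_cluster;
}

RADOS_KV {
	# RADOS pool and namespace that hold the shared recovery
	# and grace-period database
	pool = "nfs-ganesha";
	namespace = "grace";

	# Unique identifier for this ganesha head
	nodeid = "ganesha-a";
}

The shared grace database itself can be manipulated with the
ganesha-rados-grace tool that ships with ganesha; you'll want each
node's id added there before bringing the heads up.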
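On point 1: right. Ganesha's own lock table is just a frontend, and
the MDS is the real arbiter. Here's a sketch of what that funnel
looks like at the libcephfs level (untested, error handling mostly
elided, and the prototypes are from memory, so check
cephfs/libcephfs.h on your version):

/* Sketch: how a POSIX lock request funnels through libcephfs to
 * the MDS. Build roughly with: cc lock_sketch.c -lcephfs */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <cephfs/libcephfs.h>

int main(void)
{
	struct ceph_mount_info *cmount;
	struct Inode *root, *in;
	struct Fh *fh;
	struct ceph_statx stx;
	struct flock fl;
	struct UserPerm *perms;
	int ret;

	/* Standard libcephfs client setup; NULL means the default
	 * client id and the default /etc/ceph/ceph.conf search */
	ceph_create(&cmount, NULL);
	ceph_conf_read_file(cmount, NULL);
	if (ceph_mount(cmount, "/") < 0)
		return 1;
	perms = ceph_mount_perms(cmount);

	ceph_ll_lookup_root(cmount, &root);
	ret = ceph_ll_create(cmount, root, "lockfile", 0644,
			     O_CREAT | O_RDWR, &in, &fh, &stx, 0, 0,
			     perms);
	if (ret < 0)
		goto out;

	/* Whole-file write lock. The owner cookie is what lets a
	 * server like ganesha multiplex many NFS lock owners over a
	 * single cephfs session; conflicting requests from different
	 * heads are resolved by the MDS, not by ganesha. */
	memset(&fl, 0, sizeof(fl));
	fl.l_type = F_WRLCK;
	fl.l_whence = SEEK_SET;
	ret = ceph_ll_setlk(cmount, fh, &fl, 0xcafe, 0 /* don't block */);
	printf("setlk returned %d\n", ret);

	ceph_ll_close(cmount, fh);
out:
	ceph_unmount(cmount);
	ceph_release(cmount);
	return ret < 0;
}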
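And on point 3: the settings that sample config refers to shrink
ganesha's own caching down to almost nothing, so that libcephfs holds
the one authoritative cache. From memory of the 2.7-era sample (newer
releases carry these in an MDCACHE block), it's along these lines:

CACHEINODE {
	# Size the dirent cache down as small as possible
	Dir_Chunk = 0;

	# Make the inode cache effectively a pass-through;
	# FSAL_CEPH/libcephfs does the real caching
	NParts = 1;
	Cache_Size = 1;
}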