On 3/23/20 4:31 PM, Maged Mokhtar wrote:
On 23/03/2020 20:50, Jeff Layton wrote:
On Mon, 2020-03-23 at 15:49 +0200, Maged Mokhtar wrote:
Hello all,
For multi-node NFS Ganesha over CephFS, is it OK to leave libcephfs
write caching on, or should it be turned off for failover?
You can do libcephfs write caching, as the caps would need to be
recalled for any competing access. What you really want to avoid is any
sort of caching at the ganesha daemon layer.
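For concreteness, "libcephfs write caching" here refers to the client-side
ObjectCacher inside libcephfs. A minimal sketch of the relevant ceph.conf
client options (the values shown are the usual defaults, for illustration
only):

    [client]
        # Enable the libcephfs ObjectCacher (data caching); on by default.
        client_oc = true
        # Maximum bytes of cached data (200 MiB default).
        client_oc_size = 209715200
        # Maximum dirty bytes held before writeback kicks in (100 MiB default).
        client_oc_max_dirty = 104857600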
Hi Jeff,
Thanks for your reply. I meant caching by libcephfs used within the
Ganesha Ceph FSAL plugin; I am not sure from your reply if this is
what you refer to as the ganesha daemon layer (or does the latter mean the
internal mdcache in ganesha?). I would really appreciate it if you could
clarify this point.
Caching in libcephfs is fine; it's caching above the FSAL layer that you
should avoid.
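To make "above the FSAL layer" concrete: the nfs-ganesha sample config for
CephFS sizes Ganesha's own caches down and leaves caching to libcephfs. A
minimal sketch of that ganesha.conf approach (block names per recent Ganesha
versions; older releases used a CACHEINODE block instead of MDCACHE, and the
export values are illustrative):

    # Rely on libcephfs caching rather than Ganesha's own metadata cache.
    MDCACHE {
        # Size the dirent cache down as small as possible.
        Dir_Chunk = 0;
    }

    EXPORT {
        Export_ID = 100;        # illustrative
        Path = "/";
        Pseudo = "/cephfs";     # illustrative
        FSAL {
            Name = CEPH;
        }
    }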
I still doubt that it is safe to leave write caching on in the
plugin and still fail over safely, yet I see comments in the conf file such as:
# The libcephfs client will aggressively cache information while it
# can, so there is little benefit to ganesha actively caching the same
# objects.
Or is it up to the NFS client to issue cache syncs and re-submit writes
if it detects failover?
Correct. During failover, NFS will go into its grace period, which
blocks new state and allows the NFS clients to re-acquire their existing
state (opens, locks, delegations, etc.). This includes re-sending any
non-committed writes (a commit causes the data to be saved to the
cluster, not just the libcephfs cache). Once this is all done, normal
operation proceeds. It should be safe, even with caching in libcephfs.
Daniel
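For the multi-node setup in the original question, the grace period Daniel
describes is coordinated across the ganesha heads by a clustered recovery
backend. A sketch of the rados_cluster backend described in the Ganesha/Ceph
documentation (the pool, namespace, and nodeid values are illustrative):

    NFSv4 {
        # Store and coordinate recovery state in RADOS across all nodes.
        RecoveryBackend = rados_cluster;
        # NFSv4.1+ is recommended so clients can reclaim state reliably.
        Minor_Versions = 1, 2;
        Grace_Period = 90;      # seconds; illustrative
    }

    RADOS_KV {
        # Ceph pool/namespace holding the recovery records (illustrative).
        pool = "nfs-ganesha";
        namespace = "grace";
        # Must be unique per ganesha node.
        nodeid = "ganesha-1";
    }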