Re: multi-node NFS Ganesha + libcephfs caching

On 23/03/2020 20:50, Jeff Layton wrote:
On Mon, 2020-03-23 at 15:49 +0200, Maged Mokhtar wrote:
Hello all,

For multi-node NFS Ganesha over CephFS, is it OK to leave libcephfs write caching on, or should it be configured off for failover?

You can do libcephfs write caching, as the caps would need to be
recalled for any competing access. What you really want to avoid is any
sort of caching at the ganesha daemon layer.

Hi Jeff,

Thanks for your reply. I meant caching by libcephfs as used within the ganesha Ceph FSAL plugin; I am not sure from your reply whether that is what you refer to as the ganesha daemon layer, or whether the latter means ganesha's internal mdcache. I would really appreciate it if you could clarify this point.

I have doubts that write caching in the plugin can be left on while still having safe failover, yet I see comments in the conf file such as:
# The libcephfs client will aggressively cache information while it
# can, so there is little benefit to ganesha actively caching the same
# objects.
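(For context, in my copy of the sample ceph.conf shipped with ganesha, that comment sits alongside settings roughly like the ones below. This is only a sketch from memory, the export values are illustrative, and block/option names may differ between ganesha versions; the authoritative example is config_samples/ceph.conf in your ganesha release.)

# Sketch, not a tested failover configuration.
MDCACHE {
        # Keep ganesha's dirent cache as small as possible.
        Dir_Chunk = 0;
}

EXPORT {
        # Illustrative export values.
        Export_ID = 100;
        Path = "/";
        Pseudo = "/cephfs";
        Access_Type = RW;
        # Let libcephfs handle attribute caching instead of ganesha.
        Attr_Expiration_Time = 0;
        FSAL {
                Name = CEPH;
        }
}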

Or is it up to the NFS client to issue cache syncs and re-submit writes if it detects failover?

Appreciate your help.  /Maged
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



