Hello Ceph Users,

I am trialing CephFS / Ganesha NFS for VMware usage. We are on Mimic / CentOS 7.7 / 130 x 12TB 7200rpm OSDs / 13 hosts / 3x replication. So far the read performance has been great, but the write performance (NFS sync writes) has not been. We do a lot of 64KB NFS reads/writes, and the write latency is around 50-60ms as reported by esxtop.

I have been benchmarking different CephFS object/stripe sizes, but I would like to hear what others have settled on. The default layout (4MB objects, stripe count 1) doesn't seem to give great 64KB performance.

I would also like to know whether I am hitting PG lock contention, but I haven't found a way to measure that.

Glen
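In case it helps to be concrete, the layout changes have been along these lines, set on a test directory via the ceph.dir.layout xattrs (mount point and values below are examples only, not a recommendation; note a new layout only applies to files created after it is set):

    # example: 64KB stripe unit, striped across 4 objects, 4MB objects
    setfattr -n ceph.dir.layout.stripe_unit  -v 65536   /mnt/cephfs/testdir
    setfattr -n ceph.dir.layout.stripe_count -v 4       /mnt/cephfs/testdir
    setfattr -n ceph.dir.layout.object_size  -v 4194304 /mnt/cephfs/testdir
    # verify the resulting layout
    getfattr -n ceph.dir.layout /mnt/cephfs/testdir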
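And I have been approximating the 64KB sync-write pattern with fio directly on the CephFS mount, to take Ganesha and ESXi out of the picture (again just a sketch; job name, size, and path are arbitrary):

    # O_SYNC 64KB sequential writes, similar to the NFS sync write pattern
    fio --name=sync64k --directory=/mnt/cephfs/testdir \
        --rw=write --bs=64k --sync=1 --size=1g --numjobs=1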