Hi list,

Sorry, but Janne is wrong: it is the primary OSD's responsibility to write to the secondary and tertiary OSDs. See http://docs.ceph.com/docs/jewel/_images/ditaa-54719cc959473e68a317f6578f9a2f0f3a8345ee.png

So the theoretical bandwidth on a 10Gb network is roughly 1GB/s, not a third of that. And 1GB/s is indeed what his NFS server gets when writing to its local RBD:

- NFS server write bandwidth on its RBD: 1196MB/s

The problem is that a remote NFS client of this RBD share only gets roughly 1/5th of that bandwidth:

- NFS client write bandwidth on the RBD export: only 233MB/s

But when the share relies on a non-RBD, local disk, the client gets 839MB/s:

- NFS client write bandwidth on a "local-server-disk" export: 839MB/s

So the question is: why can't an NFS server relying on RBD storage offer all the bandwidth it has access to itself?

Any experience is appreciated: what performance do you get with your RBD-NFS exports?

Frederic

Janne Johansson <icepic.dz@xxxxxxxxx> wrote on 18/06/18 15:07:
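The replication argument above can be sketched with some back-of-the-envelope arithmetic. This is only a rough illustration under assumed figures (a 10Gb/s link with ~1GB/s of usable payload after protocol overhead, and the default size=3 replicated pool), not a measurement:

```python
# Rough check of the replication argument, with assumed figures.
usable_gbs = 1.0   # approx. usable GB/s on a 10Gb/s link after overhead
replicas = 3       # assumed pool size: 1 primary + 2 replica copies

# Primary-copy replication (what Ceph does): the client sends ONE copy
# to the primary OSD, which fans the write out to the secondary and
# tertiary OSDs itself, so the client sees the full link bandwidth.
client_bw_primary_copy = usable_gbs

# Client-side replication (what Ceph does NOT do): the client would
# push all copies itself, dividing its link by the replica count.
client_bw_client_side = usable_gbs / replicas

print(f"primary-copy replication: ~{client_bw_primary_copy:.2f} GB/s")
print(f"client-side replication:  ~{client_bw_client_side:.2f} GB/s")
```

Which is why ~1GB/s, and not ~1/3 of it, is the right ballpark for what the NFS server itself can write to its RBD.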
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com