Hi all

We enabled CephFS on our Ceph cluster, which consists of:

- 3 monitor servers
- 2 metadata servers
- 24 OSDs (3 OSDs per server)
- Spinning disks, OSD journals on SSD
- Public and cluster network separated, all 1 Gbit/s
- Release: Jewel 10.2.3

With CephFS we reach roughly 1/3 of the write performance of RBD. There are some other discussions on the mailing list about RBD outperforming CephFS, but it would be interesting to have more figures on this topic.

Writes on CephFS:

# dd if=/dev/zero of=/data_cephfs/testfile.dd bs=50M count=1 oflag=direct
# dd if=/dev/zero of=/data_cephfs/testfile.dd bs=500M count=1 oflag=direct
# dd if=/dev/zero of=/data_cephfs/testfile.dd bs=1000M count=1 oflag=direct

Writes on RBD:

# dd if=/dev/zero of=/data_rbd/testfile.dd bs=50M count=1 oflag=direct
# dd if=/dev/zero of=/data_rbd/testfile.dd bs=500M count=1 oflag=direct
# dd if=/dev/zero of=/data_rbd/testfile.dd bs=1000M count=1 oflag=direct
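If you want to reproduce this on your own cluster, a minimal setup would look something like the following (kernel clients on one test machine; the monitor, pool and image names are just placeholders, and the RBD image is created with only the layering feature so the kernel client can map it):

# mount -t ceph mon1:6789:/ /data_cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# rbd create rbd/bench-img --size 10240 --image-feature layering
# rbd map rbd/bench-img
# mkfs.xfs /dev/rbd0
# mount /dev/rbd0 /data_rbd

Note that oflag=direct bypasses the client page cache on both mounts, so the dd numbers reflect a single, unbuffered write of the given block size rather than cached throughput.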
Are these measurements reproducible by others? Thanks for sharing your experience!

martin