On 02/24/2015 04:21 PM, Kevin Walker wrote:
Hi All,

I just recently joined the list and have been reading and learning about Ceph for the past few months. Overall it looks to be well suited to our cloud platform, but I have stumbled across a few worrying items that hopefully you can clarify. Reading through various mailing list archives, it would seem an OSD caps out at about 3k IOPS. Dieter Kasper from Fujitsu made an interesting observation about the size of the OSD code (20k+ lines at that time). Is this being optimized further, and has this IOPS limit been improved in Giant?
In recent tests under fairly optimal conditions, I'm seeing performance topping out at about 4K object writes/s and 22K object reads/s against an OSD with a very fast PCIe SSD. There are several reasons writes are slower than reads, but this is something we are working on improving in a variety of ways.
I believe others may have achieved even higher results.
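If anyone wants to reproduce this kind of measurement themselves, a rough sketch with rados bench would be something like the below ("testpool" is just a placeholder pool name, and results will obviously vary a lot with hardware and tuning; run the read phase from the same host so it can find the objects the write phase left behind):

    # write phase; --no-cleanup keeps the objects so we can read them back
    rados bench -p testpool 60 write -t 16 --no-cleanup

    # read phase against the objects written above
    rados bench -p testpool 60 seq -t 16

    # remove the benchmark objects afterwards
    rados -p testpool cleanup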
Is there a way to overcome the XFS fragmentation problems other users have experienced?
Setting the newish filestore_xfs_extsize parameter to true appeared to help in testing we did a couple of months ago. We filled a cluster to near capacity (~70%) and then ran 12 hours of random writes. After the test completed, we saw roughly 13% fragmentation with filestore_xfs_extsize disabled, versus around 0.02% fragmentation with it enabled.
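For reference, that option goes in the [osd] section of ceph.conf; if I remember right it only affects newly written files (it sets an XFS extent size hint on them), so it's worth enabling before loading data rather than after:

    [osd]
    filestore xfs extsize = true

You can then check fragmentation on an OSD's underlying XFS device with xfs_db (/dev/sdb1 below is just a placeholder for the OSD's data partition):

    xfs_db -r -c frag /dev/sdb1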
Kind regards,
Kevin