On 08/03/2015 06:31 AM, jupiter wrote:
Hi,
I'd like to deploy CephFS in a cluster, but I need a performance
report comparing it with Lustre and Gluster. Could anyone point me to
documents / links on the performance of CephFS, Gluster, and Lustre?
Thank you.
Kind regards,
- j
Hi,
I don't know that anything like this really exists yet, to be honest. We
wrote a paper with ORNL several years ago looking at Ceph performance on
a DDN SFA10K and basically saw that we could hit about 6GB/s with CephFS
while Lustre could do closer to 11GB/s. Primarily that was due to the
journal on the write side (using local SSDs for the journal would have
improved things dramatically, as the limitation was the IB connections
between the SFA10K and the OSD nodes rather than the disks). On the
read side we ran out of time to track down the bottleneck: we could do
about 8GB/s with RADOS, but CephFS was again limited to about 6GB/s.
This was several years ago now, so things may have changed.
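To make the journal effect concrete, here is a back-of-the-envelope sketch (my own illustration, not from the benchmark above): with the filestore journal colocated on the data disks, every client byte is written twice (once to the journal, once to the data partition), so usable write bandwidth is roughly halved. The bandwidth figure is hypothetical.

```python
# Hypothetical aggregate raw write bandwidth of the data disks, in GB/s.
raw_write_bw_gbs = 12.0

# Journal colocated on the data disks: each byte is written twice
# (journal write + data write), so client-visible bandwidth is halved.
colocated_journal_bw = raw_write_bw_gbs / 2

# Journal on dedicated SSDs: the data disks see each byte only once.
dedicated_journal_bw = raw_write_bw_gbs

print(f"colocated journal: ~{colocated_journal_bw:.1f} GB/s")
print(f"SSD journal:       ~{dedicated_journal_bw:.1f} GB/s")
```

This is why moving journals to separate SSDs can dramatically improve write throughput, as mentioned above.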
In general you should expect that Lustre will probably be faster for
large sequential writes (especially if you use Ceph replication vs RAID6
for Lustre) and may be faster for large sequential reads. For small IO
I suspect that Ceph may do better, and for metadata I would expect the
situation to be mixed, with Ceph faster at some things but possibly
slower at others, since afaik we haven't done a lot of MDS tuning yet.
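The replication-vs-RAID6 point comes down to write amplification: 3x replication writes every client byte three times, while a full-stripe RAID6 write only adds two parity chunks per stripe. A minimal sketch (the stripe width of 8 data disks is an assumption for illustration):

```python
def write_amp_replication(replicas: int) -> float:
    # Each client byte is stored `replicas` times across OSDs.
    return float(replicas)

def write_amp_raid6(data_disks: int) -> float:
    # Full-stripe write: data_disks data chunks plus 2 parity chunks.
    return (data_disks + 2) / data_disks

print(write_amp_replication(3))  # 3x replication -> 3.0
print(write_amp_raid6(8))        # 8+2 RAID6 stripe -> 1.25
```

So for large sequential writes, a replicated Ceph pool pushes roughly 2.4x more bytes to disk than an 8+2 RAID6 layout for the same client workload, which is a big part of the sequential-write gap.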
Mark
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com