Re: CephFS Performance


If I'm reading your cluster diagram correctly, you have a 1 Gbps interconnect, presumably Cat6. Given the additional latency of metadata operations, I could see CephFS performing at those speeds. Are you using jumbo frames? Also, are you routing?

If you're routing, the router will introduce additional latency that an L2 network wouldn't experience.
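A quick way to sanity-check both points from a client node is to read the NIC's MTU and time a few TCP connects to an OSD host. This is only a rough sketch; the interface name, peer host, and port below are placeholders, not taken from your setup:

#!/usr/bin/env python3
# Rough check: report the local NIC MTU and estimate round-trip latency
# to a peer OSD host via TCP connect times. Names below are placeholders.
import socket
import time
from pathlib import Path

IFACE = "eth0"             # assumed client-facing interface
PEER = ("osd-host-1", 22)  # assumed reachable OSD host and open port (e.g. sshd)

# Jumbo frames only help if every hop on the path reports MTU >= 9000.
mtu = int(Path(f"/sys/class/net/{IFACE}/mtu").read_text())
print(f"{IFACE} MTU: {mtu} ({'jumbo' if mtu >= 9000 else 'standard'})")

# Average TCP connect time as a crude RTT estimate; a routed (L3) path
# will usually show higher numbers than a flat L2 segment.
samples = []
for _ in range(10):
    t0 = time.perf_counter()
    with socket.create_connection(PEER, timeout=2):
        pass
    samples.append((time.perf_counter() - t0) * 1000)
print(f"avg TCP connect time to {PEER[0]}: {sum(samples) / len(samples):.2f} ms")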

On May 9, 2017 12:01 PM, "Webert de Souza Lima" <webert.boss@xxxxxxxxx> wrote:
Hello all,

I've been using CephFS for a while but never really evaluated its performance.
As I put up a new Ceph cluster, I thought I should run a benchmark to see if I'm going the right way.

From the results I got, I see that RBD performs a lot better than CephFS.

The cluster is like this:
 - 2 hosts with one SSD OSD each.
       these hosts have 2 pools: cephfs_metadata and cephfs_cache (for cache tiering).
 - 3 hosts with 5 HDD OSDs each.
       these hosts have 1 pool: cephfs_data.

All details, cluster setup and results can be seen here: https://justpaste.it/167fr

I created the RBD pools the same way as the CephFS pools except for the number of PGs in the data pool.

I wonder why there is such a difference, or whether I'm doing something wrong.
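(For illustration, a minimal like-for-like write check could look something like the sketch below. This is not the benchmark from the paste above; it assumes python3-rados is installed, a pool named "rbd" exists, and CephFS is mounted at /mnt/cephfs, all of which are placeholders.)

#!/usr/bin/env python3
# Sketch: write the same amount of data through librados and through a
# CephFS mount, and compare throughput. Pool name and mount point are assumed.
import os
import time
import rados

OBJ_SIZE = 4 * 1024 * 1024   # 4 MiB per object, similar to rados bench defaults
COUNT = 64                   # 256 MiB total
payload = os.urandom(OBJ_SIZE)

# Path 1: write objects directly to a RADOS pool (the RBD data pool here).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")            # assumed pool name
t0 = time.perf_counter()
for i in range(COUNT):
    ioctx.write_full(f"bench-obj-{i}", payload)
rados_mibs = COUNT * OBJ_SIZE / (time.perf_counter() - t0) / 2**20
ioctx.close()
cluster.shutdown()

# Path 2: write the same amount of data through the CephFS mount.
t0 = time.perf_counter()
with open("/mnt/cephfs/bench.dat", "wb") as f:   # assumed mount point
    for _ in range(COUNT):
        f.write(payload)
    f.flush()
    os.fsync(f.fileno())
cephfs_mibs = COUNT * OBJ_SIZE / (time.perf_counter() - t0) / 2**20

print(f"librados: {rados_mibs:.1f} MiB/s, cephfs: {cephfs_mibs:.1f} MiB/s")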

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
