CephFS Performance

Hello all,

I've been using CephFS for a while but never really evaluated its performance.
As I was setting up a new Ceph cluster, I thought I should run a benchmark to see whether I'm going the right way.

From the results I got, RBD performs a lot better than CephFS.

The cluster is set up like this:
 - 2 hosts with one SSD OSD each.
       These hosts hold 2 pools: cephfs_metadata and cephfs_cache (for cache tiering).
 - 3 hosts with 5 HDD OSDs each.
       These hosts hold 1 pool: cephfs_data.
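For reference, a setup like the one described above can be sketched with commands along these lines. The pool names are the ones from my cluster; the PG counts, CRUSH rule names, and device classes are illustrative assumptions, not my exact configuration:

```shell
# Assumption: OSDs already carry the correct ssd/hdd device classes.
# PG counts below are placeholders, not the real values.

# Rules pinning pools to SSD or HDD OSDs by device class
ceph osd crush rule create-replicated ssd_rule default host ssd
ceph osd crush rule create-replicated hdd_rule default host hdd

# Metadata and cache-tier pools on the SSD hosts
ceph osd pool create cephfs_metadata 64 64 replicated ssd_rule
ceph osd pool create cephfs_cache 64 64 replicated ssd_rule

# Data pool on the HDD hosts
ceph osd pool create cephfs_data 512 512 replicated hdd_rule

# Cache tiering: put cephfs_cache in front of cephfs_data
ceph osd tier add cephfs_data cephfs_cache
ceph osd tier cache-mode cephfs_cache writeback
ceph osd tier set-overlay cephfs_data cephfs_cache

# Create the filesystem on top of the metadata and data pools
ceph fs new cephfs cephfs_metadata cephfs_data
```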

All details, cluster setup, and results can be seen here: https://justpaste.it/167fr

I created the RBD pools the same way as the CephFS pools, except for the number of PGs in the data pool.
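For anyone wanting to reproduce a comparison like this, a minimal benchmark sketch might look like the following. The pool name, RBD image name, and mount point are assumptions, and the fio job parameters are just one reasonable choice, not the exact ones I used:

```shell
# Raw pool throughput at the RADOS layer (neither RBD nor CephFS involved)
rados bench -p cephfs_data 60 write --no-cleanup
rados bench -p cephfs_data 60 seq
rados -p cephfs_data cleanup

# RBD path: fio via librbd against an existing image (here assumed
# to be named "bench" in a pool named "rbd")
fio --name=rbd-write --ioengine=rbd --pool=rbd --rbdname=bench \
    --rw=write --bs=4M --iodepth=16 --size=10G

# CephFS path: the same fio job against a kernel or FUSE mount
# (mount point /mnt/cephfs is an assumption)
fio --name=fs-write --ioengine=libaio --directory=/mnt/cephfs \
    --rw=write --bs=4M --iodepth=16 --size=10G --direct=1
```

Keeping the block size, queue depth, and total size identical between the RBD and CephFS jobs makes the two numbers directly comparable, while the rados bench run gives a baseline for what the pool itself can do.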

I wonder why there is such a difference, or whether I'm doing something wrong.

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
