On Tue, May 9, 2017 at 6:01 PM, Webert de Souza Lima <webert.boss@xxxxxxxxx> wrote:
> Hello all,
>
> I've been using CephFS for a while but never really evaluated its
> performance. As I put up a new Ceph cluster, I thought I should run a
> benchmark to see whether I'm going the right way.
>
> From the results I got, I see that RBD performs a lot better than CephFS.
>
> The cluster is like this:
> - 2 hosts with one SSD OSD each. These hosts hold 2 pools:
>   cephfs_metadata and cephfs_cache (for cache tiering).
> - 3 hosts with 5 HDD OSDs each. These hosts hold 1 pool: cephfs_data.
>
> All details, the cluster setup and the results can be seen here:
> https://justpaste.it/167fr
>
> I created the RBD pools the same way as the CephFS pools, except for the
> number of PGs in the data pool.
>
> I wonder why there is such a difference, or whether I'm doing something wrong.

Hmm, to understand this better I would start by taking cache tiering out of
the mix; it adds significant complexity.

The "-direct=1" part could be significant here: with RBD, that flag is
handled by ext4, which is potentially still benefiting from some caching at
the Ceph layer underneath. With CephFS, on the other hand, it's handled by
CephFS itself, which will be laboriously doing direct access to the OSDs.

John

> Regards,
>
> Webert Lima
> DevOps Engineer at MAV Tecnologia
> Belo Horizonte - Brasil

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
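A rough way to test that hypothesis, as a sketch only: rerun the same
workload against the base data pool with the cache tier removed, on both the
ext4-on-RBD mount and the CephFS mount, once with direct I/O and once
without. The mount points, fio job names and the 4k random-write workload
below are assumptions for illustration, not taken from the linked results:

    # hypothetical fio runs; /mnt/rbd (ext4 on RBD) and /mnt/cephfs are
    # assumed mount points, and the workload parameters are placeholders
    fio --name=rbd-direct      --directory=/mnt/rbd    --ioengine=libaio \
        --rw=randwrite --bs=4k --size=1G --runtime=60 --time_based --direct=1
    fio --name=rbd-buffered    --directory=/mnt/rbd    --ioengine=libaio \
        --rw=randwrite --bs=4k --size=1G --runtime=60 --time_based --direct=0
    fio --name=cephfs-direct   --directory=/mnt/cephfs --ioengine=libaio \
        --rw=randwrite --bs=4k --size=1G --runtime=60 --time_based --direct=1
    fio --name=cephfs-buffered --directory=/mnt/cephfs --ioengine=libaio \
        --rw=randwrite --bs=4k --size=1G --runtime=60 --time_based --direct=0

If the gap narrows when direct=0, that would point at the caching difference
John describes rather than at the data path itself.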