Re: Recent ceph.io Performance Blog Posts


On 11/8/22 21:20, Mark Nelson wrote:
> Hi Folks,
>
> I thought I would mention that I've released a couple of performance articles on the Ceph blog recently that might be of interest to people:

For sure, thanks a lot, they're really informative!

Can we also put in special requests? One of the things that would help us (and CephFS users in general) is seeing how CephFS performance for small files (~512 bytes, 2 KiB, up to say 64 KiB) is impacted by the number of PGs the CephFS metadata pool has.

A question that might be answered:

- does it help to provision more PGs for workloads that rely heavily on OMAP usage by the MDS, or is RocksDB the bottleneck in all cases? (A rough way to check this is sketched below.)
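
For what it's worth, the kind of thing I'd run to test this is roughly the sketch below. The pool name, the target pg_num and the 'kb_used_omap' field in the 'ceph osd df' JSON are assumptions on my side, so adjust per release:

#!/usr/bin/env python3
"""Sketch: raise pg_num on the CephFS metadata pool, then report per-OSD
OMAP usage (where most MDS metadata ends up).

Assumptions: the ceph CLI is in PATH, the pool name and target pg_num below
are placeholders, and 'kb_used_omap' in the 'ceph osd df' JSON may be named
differently on older releases."""
import json
import subprocess

POOL = "cephfs.meta"      # placeholder: use your metadata pool name
TARGET_PG_NUM = 64        # placeholder: the pg_num value under test

# Raise pg_num; on recent releases pgp_num follows automatically.
subprocess.run(
    ["ceph", "osd", "pool", "set", POOL, "pg_num", str(TARGET_PG_NUM)],
    check=True,
)

# Show OMAP bytes per OSD so the effect of more (smaller) PGs is visible.
raw = subprocess.run(
    ["ceph", "osd", "df", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
for node in json.loads(raw).get("nodes", []):
    omap_gib = node.get("kb_used_omap", 0) / (1024 * 1024)
    print(f"{node['name']}: {omap_gib:.2f} GiB OMAP")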

Tests that might be useful:

- rsync (single threaded, worst case)
- fio random read/write tests with varying IO depths and thread counts (a sketch of such a sweep is included below)
- the CephFS devs might know of other relevant performance tests in this context
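
To make the fio item more concrete, the sweep I have in mind looks roughly like this. It's only a sketch: the mount path, file counts and runtime are placeholders, not a tuned benchmark:

#!/usr/bin/env python3
"""Sketch of a small-file fio sweep on a CephFS mount.

Assumptions: fio is installed, /mnt/cephfs/fio-test is a directory on the
CephFS mount under test, and the file counts / runtimes are placeholders."""
import itertools
import subprocess

TEST_DIR = "/mnt/cephfs/fio-test"       # placeholder mount point
FILE_SIZES = ["512", "2k", "64k"]       # small-file sizes from this thread
IO_DEPTHS = [1, 4, 16]
NUM_JOBS = [1, 4, 8]

for size, qd, jobs in itertools.product(FILE_SIZES, IO_DEPTHS, NUM_JOBS):
    cmd = [
        "fio",
        f"--name=smallfile-{size}-qd{qd}-j{jobs}",
        f"--directory={TEST_DIR}",
        "--rw=randrw", "--rwmixread=70",
        f"--bs={size}",                 # I/O size equals the file size
        f"--filesize={size}",
        "--nrfiles=1000",               # many small files per job
        f"--numjobs={jobs}",
        f"--iodepth={qd}",
        "--ioengine=libaio", "--direct=1",
        "--time_based", "--runtime=60",
        "--group_reporting",
    ]
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

One run of this per pg_num setting on the metadata pool (with caches dropped in between) would give the kind of comparison I'm after.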

One of the tricky things with these benchmarks is that PG placement across the OSDs might heavily impact performance all by itself, as primary PGs are not placed in the same way when the pool has a different number of PGs. Ideally, therefore, the primaries are balanced as evenly as possible. I'm eagerly awaiting the Ceph Virtual 2022 talk "New workload balancer in Ceph". Having the primaries balanced before these benchmarks run seems to be a prerequisite for an "apples-to-apples" comparison. (A sketch for checking the primary distribution follows below.)
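
To at least quantify how evenly the primaries are spread before such a run, something like this should be close. Again a sketch: the pool id is a placeholder and the exact 'ceph pg dump' JSON layout differs a bit between releases:

#!/usr/bin/env python3
"""Sketch: count acting primaries per OSD for one pool.

Assumptions: the ceph CLI is in PATH, POOL_ID is the numeric id of the
metadata pool (see 'ceph osd pool ls detail'), and the JSON layout of
'ceph pg dump' (the 'pg_map' wrapper, field names) varies between releases."""
import collections
import json
import subprocess

POOL_ID = "2"   # placeholder: numeric id of the CephFS metadata pool

raw = subprocess.run(
    ["ceph", "pg", "dump", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
dump = json.loads(raw)
pg_stats = dump.get("pg_map", dump).get("pg_stats", [])

primaries = collections.Counter()
for pg in pg_stats:
    if pg["pgid"].split(".")[0] == POOL_ID:
        primaries[pg["acting_primary"]] += 1

for osd, count in sorted(primaries.items()):
    print(f"osd.{osd}: {count} acting primaries")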

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


