On 11/9/22 6:03 AM, Eshcar Hillel wrote:
Hi Mark,
Thanks for posting these blogs. They are very interesting to read.
Maybe you have an answer to a question I asked on the dev list:
We run an fio benchmark against a 3-node Ceph cluster with 96 OSDs.
Objects are 4 KB. We use the gdbpmp profiler
(https://github.com/markhpc/gdbpmp) to analyze the threads' performance.
We discovered the bstore_kv_sync thread is always busy, while all 16
tp_osd_tp threads are not busy most of the time (they wait on a condition
variable or a lock).
Given that RocksDB is sharded across 3 column families, and sharding is
configurable, why not run multiple (3) bstore_kv_sync threads? They won't
have conflicts most of the time.
This has the potential to remove the RocksDB bottleneck and
increase IOPS.
Can you explain this design choice?
You are absolutely correct that the bstore_kv_sync thread can often be a
bottleneck during 4K random writes. Typically it's not so bad that the
tp_osd_tp threads are mostly blocked though (feel free to send me a copy
of the trace, I would be interested in seeing it). Years ago I
advocated for the same approach you are suggesting here. The fear at
the time was that the changes inside bluestore would be too disruptive.
The column family sharding approach could be (and was) mostly contained
to the KeyValueDB glue code. Column family sharding has been a win from
the standpoint that it helps us avoid really deep LSM hierarchies in
RocksDB. We tend to see faster compaction times and are more likely to
keep full levels on the fast device. Sadly it doesn't really help with
improving metadata throughput and may even introduce a small amount of
overhead during the WAL flush process. FWIW, slow bstore_kv_sync is one
of the reasons that people sometimes run multiple OSDs on one NVMe
drive (sometimes it's faster, sometimes it's not).
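
To make that concrete: all column families in a single RocksDB instance
share one WAL, so no matter how many CFs the keys are sharded across, a
synced commit still funnels through a single append-and-fsync path, which
is essentially what bstore_kv_sync serializes on. Here's a minimal sketch
against the plain RocksDB C++ API (the CF names and path are made up; this
is not BlueStore's actual KeyValueDB code):

#include <cassert>
#include <functional>
#include <string>
#include <vector>
#include <rocksdb/db.h>
#include <rocksdb/write_batch.h>

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.create_missing_column_families = true;

  // Hypothetical shard CFs; BlueStore's real column family layout differs.
  std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
      {rocksdb::kDefaultColumnFamilyName, rocksdb::ColumnFamilyOptions()},
      {"shard-0", rocksdb::ColumnFamilyOptions()},
      {"shard-1", rocksdb::ColumnFamilyOptions()},
      {"shard-2", rocksdb::ColumnFamilyOptions()}};

  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/tmp/cf-shard-demo", cfs, &handles, &db);
  assert(s.ok());

  // Keys fan out across the shard CFs...
  rocksdb::WriteBatch batch;
  for (int i = 0; i < 16; ++i) {
    std::string key = "object-" + std::to_string(i);
    size_t shard = std::hash<std::string>{}(key) % 3;
    batch.Put(handles[1 + shard], key, "metadata");
  }

  // ...but the synced commit is still one Write() against one shared WAL:
  // a single append + fsync, regardless of how many CFs were touched.
  rocksdb::WriteOptions wo;
  wo.sync = true;
  s = db->Write(wo, &batch);
  assert(s.ok());

  for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
  delete db;
  return 0;
}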
Maybe a year ago I tried to sort of map out the changes that I thought
would be necessary to shard across KeyValueDBs inside bluestore itself.
It didn't look impossible, but would require quite a bit of work (and a
bit of finesse to restructure the data path). There's a legitimate
question of whether it's worth making those kinds of changes to
bluestore at this point or investing in crimson and seastore instead.
We ended up deciding not to pursue the changes back then. I think if we
changed our minds it would most likely go into some kind of experimental
bluestore v2 project (along with other things like hierarchical storage)
so we don't screw up the existing code base.
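
Just to illustrate the general shape of that idea (this is plain RocksDB
with hypothetical paths and a hypothetical shard count, not the changes I
actually mapped out): with independent DB instances, each shard has its
own WAL, so a dedicated sync thread per shard can commit without waiting
on the others.

#include <cassert>
#include <string>
#include <thread>
#include <vector>
#include <rocksdb/db.h>
#include <rocksdb/write_batch.h>

int main() {
  constexpr int kShards = 3;  // hypothetical shard count
  rocksdb::Options options;
  options.create_if_missing = true;

  // Independent DB instances: separate WALs, separate fsync paths.
  std::vector<rocksdb::DB*> shards(kShards, nullptr);
  for (int i = 0; i < kShards; ++i) {
    rocksdb::Status s = rocksdb::DB::Open(
        options, "/tmp/kv-shard-" + std::to_string(i), &shards[i]);
    assert(s.ok());
  }

  // One "kv sync" thread per shard; each commits and fsyncs its own WAL
  // without waiting on the other shards.
  std::vector<std::thread> kv_sync_threads;
  for (int i = 0; i < kShards; ++i) {
    kv_sync_threads.emplace_back([i, &shards] {
      rocksdb::WriteBatch batch;
      batch.Put("object-" + std::to_string(i), "metadata");
      rocksdb::WriteOptions wo;
      wo.sync = true;
      rocksdb::Status s = shards[i]->Write(wo, &batch);
      assert(s.ok());
      (void)s;
    });
  }
  for (auto& t : kv_sync_threads) t.join();
  for (auto* db : shards) delete db;
  return 0;
}

The KV side is the easy part, though; the real work is restructuring
bluestore's data path so transactions land on the right shard.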
------------------------------------------------------------------------
*From:* Mark Nelson <mnelson@xxxxxxxxxx>
*Sent:* Tuesday, November 8, 2022 10:20 PM
*To:* ceph-users@xxxxxxx <ceph-users@xxxxxxx>
*Subject:* Recent ceph.io Performance Blog Posts
Hi Folks,
I thought I would mention that I've released a few performance
articles on the Ceph blog recently that might be of interest to people:
1. https://ceph.io/en/news/blog/2022/rocksdb-tuning-deep-dive/
2. https://ceph.io/en/news/blog/2022/qemu-kvm-tuning/
3. https://ceph.io/en/news/blog/2022/ceph-osd-cpu-scaling/
The first covers RocksDB tuning: how we arrived at our defaults, an
analysis of some common settings that have been floating around on the
mailing list, and potential new settings that we are considering making
default in the future.
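
For anyone who hasn't read it yet, these are ordinary rocksdb::Options
knobs. The sketch below uses placeholder values purely to show the kind
of settings involved; it is not the post's recommendations or Ceph's
shipped defaults.

#include <rocksdb/options.h>

// Placeholder values only; see the blog post for the real analysis.
rocksdb::Options example_tuning() {
  rocksdb::Options options;
  options.compression = rocksdb::kNoCompression;  // compression choice
  options.write_buffer_size = 64 << 20;           // memtable size
  options.max_write_buffer_number = 4;            // memtables per CF
  options.min_write_buffer_number_to_merge = 1;   // flush eagerness
  options.max_background_jobs = 4;                // flush/compaction threads
  options.compaction_readahead_size = 2 << 20;    // compaction readahead
  return options;
}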
The second covers how to tune QEMU/KVM with librbd to achieve high
single-client performance on a small (30 OSD) NVMe-backed cluster. This
article also covers the performance impact of enabling 128-bit AES
over-the-wire encryption.
The third covers per-OSD CPU/Core scaling and the kind of IOPS/core and
IOPS/NVMe numbers that are achievable both on a single OSD and on a
larger (60 OSD) NVMe cluster. In this case there are enough clients and
a high enough per-client iodepth to saturate the OSD(s).
I hope these are helpful or at least interesting!
Thanks,
Mark
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx