That is super interesting regarding scrubbing. I would have expected
that to be affected as well. Any chance you can check and see if there
is any correlation between rocksdb compaction events, snap trimming, and
increased disk reads? Also (sorry if you already answered this), do we
know for sure that it's hitting the block.db/block.wal device? I
suspect it is, just wanted to verify.
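If it helps, here's a rough sketch of one way to line those up. It samples
read IOPS on the db device from /proc/diskstats and prints any rocksdb
compaction lines appended to the OSD log in the same window. The OSD id,
log path, and device name are just placeholders for whatever your hosts
use, and it assumes the compaction messages still show up in your OSD log
at your debug_rocksdb level.

#!/usr/bin/env python3
# Rough sketch, not a polished tool: every 10s, sample read IOPS on the
# assumed block.db/wal device from /proc/diskstats and print any rocksdb
# compaction lines that appeared in the OSD log in the same window, so the
# two can be lined up by eye. OSD id, log path, and device are placeholders.

import os
import time

OSD_LOG = "/var/log/ceph/ceph-osd.12.log"  # placeholder OSD log path
DB_DEV = "nvme0n1"                         # placeholder block.db/block.wal device
INTERVAL = 10                              # seconds between samples

def reads_completed(dev):
    # Column 4 of /proc/diskstats is the number of reads completed per device.
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[3])
    raise RuntimeError("device %s not found in /proc/diskstats" % dev)

def new_compaction_lines(path, pos):
    # Return log lines mentioning compaction that were appended since 'pos'.
    with open(path, "rb") as f:
        f.seek(pos)
        data = f.read()
    lines = data.decode(errors="replace").splitlines()
    return [l for l in lines if "compaction" in l.lower()], pos + len(data)

log_pos = os.path.getsize(OSD_LOG)  # start tailing from the current end of the log
prev = reads_completed(DB_DEV)
while True:
    time.sleep(INTERVAL)
    cur = reads_completed(DB_DEV)
    hits, log_pos = new_compaction_lines(OSD_LOG, log_pos)
    print("%s  %s read IOPS ~%.1f" % (time.strftime("%H:%M:%S"), DB_DEV,
                                      (cur - prev) / float(INTERVAL)))
    for l in hits:
        print("    compaction: %s" % l[:120])
    prev = cur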
Mark
On 8/7/20 9:04 AM, Manuel Lausch wrote:
Hi Mark,
The read IOPs in "normal" operation was with bluefs_buffered_io=false
somewhat about 1. And now with true around 2. So this seems slightly
higher, but far away from any problem.
While snapshot trimming the difference is enormous.
with false: around 200
with true: around 10
scrubing read IOPs do not appear to be affected. They are around 100
IOPs
I'm using librados to access my objects, so I don't know if this would
be any different with rgw.
Manuel
On Fri, 7 Aug 2020 08:08:40 -0500
Mark Nelson <mnelson@xxxxxxxxxx> wrote:
It's quite possible that the issue is really about rocksdb living on
top of bluefs with bluefs_buffered_io and rgw causing a ton of OMAP
traffic. rgw is the only case so far where the issue has shown up,
but it was significant enough that we didn't feel like we could leave
bluefs_buffered_io enabled. In your case with a 14GB target per OSD,
do you still see significantly increased disk reads with
bluefs_buffered_io=false?
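If it helps, something like the following (run on the OSD host, with one
of your own OSD ids substituted for the placeholder) shows what the
running daemon actually has set for the two options in question, in case
ceph.conf and the in-memory values have drifted apart:

#!/usr/bin/env python3
# Quick check of the running values of the two options discussed in this
# thread, queried over the OSD admin socket. The OSD id is a placeholder.
import json
import subprocess

OSD = "osd.3"  # placeholder -- substitute one of your OSD ids

for opt in ("bluefs_buffered_io", "osd_memory_target"):
    out = subprocess.check_output(["ceph", "daemon", OSD, "config", "get", opt])
    print(OSD, json.loads(out))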
Mark
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx