performance of list omap

Hi developers:

    We frequently see slow requests for omap listing, and some of them even
cause the op thread to hit the suicide timeout. Recently we introduced the
RocksDB DeleteRange API, and since then the cluster complains about list-omap
timeouts even more easily. I checked the RocksDB instance of one OSD, and I
think I found the reason.

    When listing omap, the iterator uses '~' as the tail of the range. After
the iterator has reached the last key of the omap we want, we call an extra
next(), which usually lands on the next object's omap header (prefixed with
'-'). *IF* there are deleted keys or tombstones in between, RocksDB falls into
the loop in `FindNextUserEntryInternal` until it finds a valid key, so it walks
through all the dead keys in the middle and reads the SST files heavily.
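
    To make the cost concrete, here is a minimal standalone sketch against
plain RocksDB (not Ceph code; the key names and the /tmp/omap_demo path are
made up for illustration). After DeleteRange leaves tombstones behind the
object's omap keys, the final next() of an unbounded iteration has to skip
every dead key before it can surface the next object's header:

#include <cassert>
#include <iostream>
#include <string>

#include <rocksdb/db.h>
#include <rocksdb/options.h>

int main() {
  // Open a throwaway DB (path is made up for this demo).
  rocksdb::DB* db = nullptr;
  rocksdb::Options opts;
  opts.create_if_missing = true;
  rocksdb::Status s = rocksdb::DB::Open(opts, "/tmp/omap_demo", &db);
  assert(s.ok());

  // Object A's omap keys, a large block of keys we will range-delete,
  // and finally object B's omap header (the '-' key mentioned above).
  for (int i = 0; i < 3; ++i)
    db->Put(rocksdb::WriteOptions(), "A.omap." + std::to_string(i), "v");
  for (int i = 0; i < 100000; ++i)
    db->Put(rocksdb::WriteOptions(), "A.zzz." + std::to_string(i), "v");
  db->Put(rocksdb::WriteOptions(), "B.-header", "v");

  // DeleteRange leaves tombstones between A's omap keys and B's header.
  db->DeleteRange(rocksdb::WriteOptions(), db->DefaultColumnFamily(),
                  "A.zzz.", "B.");

  // Unbounded iteration over A's omap: the last Next() (the one stepping
  // past "A.omap.2") must skip every dead key before it reaches "B.-header"
  // and lets the prefix check terminate the loop.
  rocksdb::Iterator* it = db->NewIterator(rocksdb::ReadOptions());
  for (it->Seek("A.omap."); it->Valid() && it->key().starts_with("A.omap.");
       it->Next()) {
    std::cout << it->key().ToString() << std::endl;
  }
  delete it;
  delete db;
  return 0;
}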

    I think there are three possible approaches:
1) change the omap header key from '-' to '~', so it can act as the end marker
   during iteration.
2) force an explicit omap end key (using '~') to be written in the metadata pool.
3) when iterating, first look up the rbegin of the omap keys so that we know
   the end key and can avoid the extra next() (see the sketch after this list).
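
    As a rough illustration of how a known end key helps for (3): this is
again plain RocksDB rather than the real ObjectMap code, the "A.omap." prefix
and the '~'-terminated end key are hypothetical stand-ins, and it uses
RocksDB's ReadOptions::iterate_upper_bound, which is just one way to exploit a
known end key (the rbegin lookup proposed above is another). The iterator goes
invalid at the first key at or beyond the bound instead of skipping the whole
dead range:

#include <iostream>
#include <memory>
#include <string>

#include <rocksdb/db.h>
#include <rocksdb/options.h>

// List one object's omap with a known end key, so the iterator never walks
// through whatever lies behind the object's keys.
void list_omap_bounded(rocksdb::DB* db) {
  std::string end = "A.omap.~";          // known upper bound for this object
  rocksdb::Slice upper(end);             // must outlive the iterator

  rocksdb::ReadOptions ro;
  ro.iterate_upper_bound = &upper;       // iterator stops at/after this key

  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
  for (it->Seek("A.omap."); it->Valid(); it->Next()) {
    // Valid() turns false at the first key >= the bound, so no prefix check
    // and no extra next() into the tombstoned range is needed.
    std::cout << it->key().ToString() << " = "
              << it->value().ToString() << std::endl;
  }
}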

    Does this make sense?



