hi there,

i'm trying to get my head around rocksdb spillover and how to deal with it. in particular, i have one osd which does not have any pools associated with it (as per ceph pg ls-by-osd $osd), yet it does show up in ceph health detail as:

osd.$osd spilled over 2.9 MiB metadata from 'db' device (49 MiB used of 37 GiB) to slow device
compaction doesn't help. i am well aware of https://tracker.ceph.com/issues/38745, yet i find it really counter-intuitive that an empty osd with a more-or-less optimally sized db volume can't fit its rocksdb on that volume.
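for reference, this is roughly how i triggered the compaction and had a look at the bluefs usage (written down from memory, so treat it as a sketch rather than an exact transcript):

  # online compaction via the mon
  ceph tell osd.$osd compact

  # with the osd stopped: offline compaction and the bluefs device sizes
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-$osd compact
  ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-$osd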
is there any way to repair this, apart from re-creating the osd?
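the only non-destructive approach i have come across so far would be moving the spilled-over data back with ceph-bluestore-tool, but i haven't tried it on this osd and am not sure it is the right tool here, so this is purely a sketch (device paths are the usual symlinks in the osd dir, adjust as needed):

  # osd stopped: migrate bluefs data from the slow (block) device back to the db device
  ceph-bluestore-tool bluefs-bdev-migrate \
      --path /var/lib/ceph/osd/ceph-$osd \
      --devs-source /var/lib/ceph/osd/ceph-$osd/block \
      --dev-target /var/lib/ceph/osd/ceph-$osd/block.db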
fwiw, dumping the database with

  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-$osd dump > bluestore_kv.dump

yields a file of less than 100 MB in size.
and, while we're at it, a few more related questions:
- am i right to assume that the leveldb and rocksdb arguments to ceph-kvstore-tool are only relevant for osds with a filestore backend?
- does ceph-kvstore-tool bluestore-kv … also handle the rocksdb items of osds with a bluestore backend?
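to make that concrete, i mean the difference between invocations along these lines (paths are just placeholders for my setup):

  # filestore osd: point the tool directly at the omap leveldb/rocksdb directory
  ceph-kvstore-tool rocksdb /var/lib/ceph/osd/ceph-$osd/current/omap list

  # bluestore osd: point the tool at the osd data directory
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-$osd list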
thank you very much & with kind regards,
thoralf.