Re: Ceph 16.2.x mon compactions, disk writes



Thank you, Anthony. As I explained earlier, the article you sent is about
RocksDB tuning for BlueStore OSDs, while the issue at hand is not with OSDs
but with the monitors and their RocksDB store. Indeed, the drives are not
enterprise-grade, but their specs exceed the Ceph hardware recommendations
by a good margin. They are used as boot drives only and are not supposed to
be written to continuously at high rates, which is unfortunately what is
happening. I am trying to determine why this is happening and how the issue
can be alleviated or resolved; unfortunately, the monitors' RocksDB usage
and tunables appear to be undocumented.
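For what it's worth, here is a minimal sketch of how I have been quantifying the sustained write rate to the boot drive, by sampling /proc/diskstats twice and diffing the sectors-written counter (field 10; one sector is 512 bytes). The device name "sda" and the 10-second interval are just placeholders for my setup:

```python
def sectors_written(diskstats_text, device):
    """Return the cumulative sectors-written counter for a block device.

    diskstats_text is the full contents of /proc/diskstats; field 3 is the
    device name and field 10 (index 9) is sectors written since boot.
    """
    for line in diskstats_text.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[2] == device:
            return int(fields[9])
    raise ValueError(f"device {device!r} not found in diskstats")


def write_rate_mb_s(before, after, interval_s, device):
    """Average write rate in MB/s between two /proc/diskstats snapshots."""
    delta = sectors_written(after, device) - sectors_written(before, device)
    return delta * 512 / interval_s / 1e6
```

On a live mon node you would read /proc/diskstats, sleep for the interval, read it again, and pass both snapshots plus the interval to write_rate_mb_s; multiplying the steady-state rate out over a day gives a rough TBW figure to compare against the drive's endurance rating.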


On Fri, 13 Oct 2023 at 20:11, Anthony D'Atri <anthony.datri@xxxxxxxxx>
wrote:

> cf. Mark's article I sent you re RocksDB tuning.  I suspect that with Reef
> you would experience fewer writes.  Universal compaction might also help,
> but in the end this SSD is a client SKU and really not suited for
> enterprise use.  If you had the 1TB SKU you'd get much longer life, or you
> could change the overprovisioning on the ones you have.
> On Oct 13, 2023, at 12:30, Zakhar Kirpichenko <zakhar@xxxxxxxxx> wrote:
> I would very much appreciate it if someone with a better understanding of
> monitor internals and use of RocksDB could please chip in.
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
