Yeah, thank you xD You just answered another thread where I asked about the
kv_sync_thread, so consider this one done, I know what to do now. Thank you.

On 12.03.19 at 14:43, Mark Nelson wrote:
> Our default of 4 256MB WAL buffers is arguably already too big. On one
> hand we are making these buffers large to hopefully avoid short-lived
> data going into the DB (pglog writes), i.e. if a pglog write comes in
> and later a tombstone invalidating it comes in, we really want those to
> land in the same WAL log to avoid that write being propagated into the
> DB. On the flip side, large buffers mean that there's more work that
> rocksdb has to perform to compare keys to get everything ordered. This
> is done in the kv_sync_thread, where we often bottleneck on small
> random write workloads:
>
>     + 13.30% rocksdb::InlineSkipList<rocksdb::MemTableRep::KeyComparator const&>::Insert<false>
>
> So on one hand we want large buffers to avoid short-lived data going
> into the DB, and on the other hand we want small buffers to avoid large
> amounts of comparisons eating CPU, especially in CPU-limited
> environments.
>
> Mark
>
> On 3/12/19 8:25 AM, Benjamin Zapiec wrote:
>> May I configure the size of the WAL to increase block.db usage? For
>> example, if I configured 20GB I would get a usage of about 48GB on L3.
>>
>> Or should I stay with the Ceph defaults? Is there a maximal size for
>> the WAL that makes sense?
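For anyone wanting to experiment with this trade-off: the buffers Mark
describes are RocksDB memtables, sized by write_buffer_size and counted by
max_write_buffer_number, which Ceph passes to RocksDB through the single
bluestore_rocksdb_options string. With the defaults above that works out to
4 x 256MiB = 1GiB of WAL buffering per OSD. Below is a minimal sketch of
halving the buffer size, assuming the Luminous-era default option string
(the exact default may differ per release; check with
"ceph daemon osd.N config get bluestore_rocksdb_options" first). Setting
this option replaces the whole string, so the remaining defaults have to be
repeated, and the value here is illustrative, not a recommendation:

    [osd]
    # Halve the memtable size (256MiB -> 128MiB) to cut skiplist-insert
    # CPU in kv_sync_thread, at the cost of more short-lived pglog
    # writes being flushed into the DB before their tombstones arrive:
    bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=134217728,writable_file_max_buffer_size=0,compaction_readahead_size=2097152

OSDs need a restart to pick up the new string, and it is worth re-running
the same small-random-write benchmark afterwards to see whether the
InlineSkipList::Insert samples in kv_sync_thread actually drop.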
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com