On Wed, Oct 2, 2019 at 10:56 PM Robert LeBlanc <robert@xxxxxxxxxxxxx> wrote:
> Is there a way to have leveldb compact more frequently or cause it to
> come up for air more frequently and respond to heartbeats and process
> some IO?

You can manually trigger a compaction with the compact command via the admin socket (or was it via ceph tell?), but I don't think that helps with your workload.

> I thought splitting PGs would help, but we are still seeing
> the problem (previously ~20 PGs per OSD to now ~150). I still have
> some space on the SSDs that I can double, almost triple the journal,
> but not sure if that will help in this situation.

No, a larger journal will not help with leveldb workloads. There is a big difference between FileStore journals and BlueStore DB devices: BlueStore actually keeps all the metadata on the SSD permanently, whereas a FileStore journal is just a journal (and 5 GB is large enough; it won't use that much space for small operations like deletions).

(The answer you don't want to hear is probably the best way forward: upgrade to BlueStore.)

Paul

> The other issue I'm seeing is that some IO just gets stuck when the
> OSDs are getting marked down and coming back through the cluster.
>
> Thanks,
> Robert LeBlanc
>
> ----------------
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
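
For reference, the manual compaction mentioned above looks roughly like this. This is a sketch, not a recommendation for your workload: osd.0 is a placeholder ID, the admin-socket form must run on the host where that OSD daemon lives, and exact command availability varies by Ceph release.

```shell
# Via the local admin socket (run on the OSD's host):
ceph daemon osd.0 compact

# Or remotely, via the tell interface:
ceph tell osd.0 compact
```

Note that compaction blocks the OSD's key-value store while it runs, so on a busy OSD this can itself cause slow requests; it only reclaims space and read amplification, it does not change how often leveldb compacts on its own.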