On Mon, 17 Mar 2025 at 14:48, Joshua Baergen <jbaergen@xxxxxxxxxxxxxxxx> wrote:
> Hey Brian,
>
> The setting you're looking for is bluefs_buffered_io. This is very
> much a YMMV setting, so it's best to test with both modes, but I
> usually recommend turning it off for all but omap-intensive workloads
> (e.g. RGW index) due to it causing writes to tend to be split up into
> smaller pieces.

On the other hand, having to set bluestore cache sizes for each OSD
individually still feels odd in 2025. If I initially had 8 OSDs in a box
and two drives died, I would want the computer to let the remaining 6
OSDs use the extra available cache memory if it can, rather than having
to edit the configs for the remaining 6, and then possibly once more if
I ever replace the two lost OSDs.

At the same time, after losing x OSDs I would find it wasteful not to
use memory I have bought for good OSD caches just because the setting is
static. Even a single "use 110G of RAM as you see fit, split between the
current OSDs" for a 128G machine would be better than a per-OSD
bluestore_cache_size = xyz setting (a rough sketch of what I mean is at
the end of this message).

> On Sun, Mar 16, 2025 at 2:38 PM Brian Marcotte <marcotte@xxxxxxxxx> wrote:
> >
> > Some years ago when first switching to Bluestore, I could see that
> > ceph-osd wasn't using the host page cache anymore. Some time later after
> > a Ceph upgrade, I found that ceph-osd was now filling the page cache. I'm
> > sorry I don't remember which upgrade that was. Currently I'm running
> > pacific and reef clusters.
> >
> > Should ceph-osd (Bluestore) be going through the page cache? Can ceph-osd
> > be configured to go direct?

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
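
To make concrete what the "split the available RAM between whatever OSDs
are currently on the host" idea could look like today, here is a rough
sketch (not an official Ceph tool). It assumes osd_memory_target is the
knob to drive (with cache autotuning left at its default, the OSD sizes
its bluestore caches to fit inside osd_memory_target), and the RAM
figure, headroom and OSD IDs below are placeholder example values:

    #!/usr/bin/env python3
    # Rough sketch: divide a host's RAM budget evenly across the OSDs that
    # are currently present on it and emit matching "ceph config set"
    # commands. All numbers and OSD IDs are examples, not recommendations.
    import subprocess  # only needed if you uncomment the run() call below
    import sys

    TOTAL_RAM_GIB = 128    # physical RAM in the box (example)
    OS_HEADROOM_GIB = 18   # leave some memory for the OS, networking, etc.

    def main() -> None:
        # OSD IDs of the OSDs living on this host, passed on the command
        # line, e.g.:  ./osd_mem_split.py 0 1 2 3 4 5
        osd_ids = [int(a) for a in sys.argv[1:]]
        if not osd_ids:
            sys.exit("usage: osd_mem_split.py <osd-id> [<osd-id> ...]")

        budget_bytes = (TOTAL_RAM_GIB - OS_HEADROOM_GIB) * 1024 ** 3
        per_osd = budget_bytes // len(osd_ids)

        for osd_id in osd_ids:
            cmd = ["ceph", "config", "set", f"osd.{osd_id}",
                   "osd_memory_target", str(per_osd)]
            print(" ".join(cmd))
            # Uncomment to apply directly instead of just printing:
            # subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        main()

You would re-run it after losing or replacing drives. As far as I know,
cephadm-managed clusters can get similar behaviour automatically via
osd_memory_target_autotune and mgr/cephadm/autotune_memory_target_ratio,
so checking whether that already covers this use case may be worthwhile.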