Hi all,

I am trying to find a simple way to better distribute my data as I wrap up my Nautilus upgrades. I am currently rebuilding some OSDs with a bigger block.db to prevent BlueFS spillover, where it isn't difficult to do so, and I'm once again struggling with unbalanced distribution despite having used the upmap balancer.

I recently discovered that my earlier use of the balancer module in crush-compat mode, before switching to upmap mode, has left some lingering compat weight sets. I believe this may account for my less-than-stellar distribution, as I now have 2-3 weightings fighting against each other (upmap balancer, compat weight set, reweight). Below is a snippet showing the compat weights differing from the crush weights.
So my main question is: how do I [re]set the compat values to match the crush weights, so that the upmap balancer can more precisely balance the data? It looks like I may have two options, ceph osd crush weight-set reweight-compat {name} {weight} or ceph osd crush weight-set rm-compat. I assume the first is for managing a single device/host/chassis/etc., and the latter would nuke all compat values across the board? And while looking at this, I started poking at my tunables, and I have no clue how to interpret the values, nor what they should be.
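Going back to the weight-set part for a second, here is roughly what I'm contemplating, based on my reading of the docs and not yet run against this cluster (osd.12 and 7.27739 below are placeholder values, not from my actual tree):

  # see which weight sets exist and what values they currently hold
  ceph osd crush weight-set ls
  ceph osd crush weight-set dump

  # option 1: pin a single item's compat weight back to its crush weight
  ceph osd crush weight-set reweight-compat osd.12 7.27739

  # option 2: drop the compat weight set entirely and let upmap take over
  ceph osd crush weight-set rm-compat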
For context, this is a Jewel -> Luminous -> Mimic -> Nautilus cluster, and pretty much all the clients support Jewel/Luminous+ feature sets (the jewel-reporting clients are kernel CephFS clients, even though they are running recent (4.15-4.18) kernels).
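In case it clarifies where I'm poking around, these are the commands I've been using to check what the clients report and what the tunables currently are, plus what I believe would change them (again, just my understanding of the tooling; corrections welcome):

  # what releases/feature bits the connected clients actually report
  ceph features
  ceph osd get-require-min-compat-client

  # current crush tunables (the profile and optimal_tunables fields are
  # the ones I'm not sure how to interpret)
  ceph osd crush show-tunables

  # what I believe would switch me to optimal, presumably triggering
  # a chunk of data movement
  ceph osd crush tunables optimal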
I feel like I should be running optimal tunables, but I believe I am running default? I'm not sure how much of a difference exists there, or whether changing it will trigger a bunch of data movement either. Hopefully someone will be able to steer me in a positive direction here, so I can trigger a single, large data movement and return to a happy, balanced cluster once again.

Thanks,
Reed