Re: Rebalancing

We filter out the non-actionable HEALTH_WARNs, e.g.:

# warnings that actually need a human: stuck too-full backfills, incomplete PGs
ACTIONABLE_WARNINGS=$(ceph health detail | egrep 'backfill_toofull|incomplete')
# drop the expected noise from routine backfill/recovery and set flags
HEALTH_FILTERED=$(ceph health detail | egrep -v 'backfilling|wait_backfill|recover|noscrub|nodeep-scrub|failing to respond to cache pressure|noout')
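
A minimal sketch of how those two variables could feed an alerting check; the exit codes and wording are purely illustrative, not our actual probe:

# Assumes ACTIONABLE_WARNINGS and HEALTH_FILTERED are set as above.
if [ -n "$ACTIONABLE_WARNINGS" ]; then
    echo "CRITICAL: $ACTIONABLE_WARNINGS"
    exit 2
elif [ -n "$HEALTH_FILTERED" ] && ! echo "$HEALTH_FILTERED" | grep -q 'HEALTH_OK'; then
    # something remains after stripping the routine rebalancing noise
    echo "WARNING: $HEALTH_FILTERED"
    exit 1
fi
echo "OK: only expected rebalancing noise"
exit 0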

-- dan


On 25 Apr 2017, at 21:28, Aaron Bassett <Aaron.Bassett@xxxxxxxxxxxxx> wrote:

Wow that's a great slide deck! Wish I could have been at that talk. I think once my current rebalance is complete, I'll keep an eye on it and see if I need to adopt the cron'ed reweight-by-utilization approach.

I'm guessing, however, that you must not be using HEALTH_WARN as an alert situation, because you would expect your cluster to regularly be in warn state if it's continuously backfilling. What sorts of things would you alert on that would require human intervention? 

Aaron 

On Apr 25, 2017, at 2:42 PM, Dan Van Der Ster <daniel.vanderster@xxxxxxx> wrote:

Yes, that makes sense.

BTW, I'm not sure I ever shared it on the ML but here is a talk I presented at the OpenStack summit in Barcelona about these various reweighting scripts and our experience using them:  https://cernbox.cern.ch/index.php/s/0c3MJNsNo1YuFdy

Cheers, Dan

On 25 Apr 2017, at 20:35, David Turner <drakonstein@xxxxxxxxx> wrote:

That is definitely a much smaller shock to the cluster while adding/removing and balancing OSDs.  I've thought about adding functionality that would generate X number of crush maps between your current crush map and the goal crush map to cause smaller incremental changes over time.

The biggest factor is probably what your cluster use case is.  In mine, I can spend an entire weekend with max_backfills at 10 and nobody will notice as long as I put it back down before Monday morning.  That makes doing a massive CRUSH update on Friday evening very doable.  And when uploading a balanced CRUSH map generally brings one of my clusters to within 2% top to bottom, that's a pretty viable method.
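
For reference, the knob being raised and lowered there is osd_max_backfills (typically 1 by default); a hedged illustration of flipping it cluster-wide with injectargs for the weekend and back:

# Friday evening: allow up to 10 concurrent backfills per OSD
ceph tell osd.* injectargs '--osd-max-backfills 10'

# Sunday night: restore the conservative default before users return
ceph tell osd.* injectargs '--osd-max-backfills 1'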

On Tue, Apr 25, 2017 at 2:05 PM Dan Van Der Ster <daniel.vanderster@xxxxxxx> wrote:
We run this continuously -- in a cron every 2 hours -- on all of our clusters: https://github.com/cernceph/ceph-scripts/blob/master/tools/crush-reweight-by-utilization.py  
It's a misnomer, yes -- because my original plan was indeed to modify CRUSH weights, but for some reason which I do not recall, I switched it to modify the reweights. It should be super easy to change the crush weight instead.
We run it with params to change weights of only 4 OSDs by 0.01 at a time. This ever so gradually flattens the PG distribution, and is totally transparent latency-wise.
BTW, it supports reweighting only below certain CRUSH buckets, which is essential if you have a non-uniform OSD tree.
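
For comparison, the stock reweight-by-utilization command takes a similar trio of parameters (overload threshold, max reweight change, max OSDs touched per run); a /etc/cron.d-style sketch of the same small-and-frequent pattern using the built-in command rather than our script, with illustrative values:

# Dry-run first with 'ceph osd test-reweight-by-utilization 110 0.01 4' if unsure.
# 110 = overload threshold (%), 0.01 = max reweight change, 4 = max OSDs per run.
0 */2 * * * root ceph osd reweight-by-utilization 110 0.01 4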

New OSDs start with crush weight 0, then we gradually increase the weights 0.01 at a time, all the while watching the number of backfills and cluster latency.
The same script is used to gradually drain OSDs down to CRUSH weight 0.
We've used that second script to completely replace several petabytes of hardware.
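
A bare-bones sketch of that ramp-up using only the stock CLI; osd.42 and the 5.46 target weight are invented, and the real script also watches latency rather than just PG states.  Run the loop the other direction (decrementing toward 0) to drain:

#!/bin/bash
# Bring a freshly added OSD from crush weight 0 up to its target, 0.01 at a time,
# letting each increment's backfill finish before taking the next step.
OSD=osd.42
TARGET=5.46
W=0.00

while awk -v w="$W" -v t="$TARGET" 'BEGIN { exit !(w < t) }'; do
    W=$(awk -v w="$W" 'BEGIN { printf "%.2f", w + 0.01 }')
    ceph osd crush reweight "$OSD" "$W"
    # wait for the backfill triggered by this step to settle
    while ceph pg stat | egrep -q 'backfill|recover'; do
        sleep 60
    done
done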

Cheers, Dan


On 25 Apr 2017, at 08:22, Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:

I read this thread with interest because I’ve been squeezing the OSD distribution on several clusters myself while expansion gear is in the pipeline, ending up with an ugly mix of both types of reweight as well as temporarily raising the full and backfill-full ratios.

I’d been contemplating tweaking Dan@CERN’s reweighting script to use CRUSH reweighting instead, and to squeeze from both ends, though I fear it might not be as simple as it sounds prima facie.


Aaron wrote:

Should I be expecting it to decide to increase some underutilized osds?


The osd reweight mechanism only accommodates an override weight between 0 and 1, so it can decrease but not increase a given OSD’s fullness.  To directly fill up underfull OSDs it would seem to need an override weight > 1, which isn’t possible.
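
Concretely, the override reweight is the value set with "ceph osd reweight" (the REWEIGHT column in "ceph osd df"), and the CLI only accepts values in the 0.0-1.0 range; osd 12 below is just an example:

# Pushing data *off* an overfull OSD by lowering its override reweight -- allowed:
ceph osd reweight 12 0.85

# There is no ">1.0" to pull data *onto* an underfull OSD; the only options are to
# lower its peers' override reweights or to raise its CRUSH weight.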

I haven’t personally experienced it (yet), but from what I read, if override-reweighted OSDs get marked out and back in again, their override will revert to 1.  In a cluster running close to the full ratio, it would *seem* that a network glitch or similar could therefore result in some OSDs filling up and hitting the full threshold, which would be bad.

Using CRUSH reweight instead would seem to address both of these shortcomings, though it does perturb the arbitrary but useful way that initial CRUSH weights by default reflect the capacity of each OSD.  Various references also indicate that the override reweight does not change the weight of buckets above the OSD, but that CRUSH reweight does.  I haven’t found any discussion of the ramifications of this, but my initial stab at it would be that when one does the 0-1 override reweight, the “extra” data is redistributed to OSDs on the same node.  CRUSH reweighting would then seem to pull / push the wad of data being adjusted from / to *other* OSD nodes.  Or it could be that I’m out of my Vulcan mind.
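
To make the distinction concrete, the two knobs are separate commands, and "ceph osd df tree" shows them side by side; the weights below are invented:

# CRUSH weight: persistent, and the change propagates up the bucket hierarchy,
# shifting the host's and rack's weights relative to their siblings.
ceph osd crush reweight osd.12 1.75

# Override reweight: a 0-1 multiplier applied after CRUSH placement; bucket
# weights above the OSD are left untouched.
ceph osd reweight 12 0.90

# WEIGHT (CRUSH) and REWEIGHT (override) appear as separate columns here:
ceph osd df tree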

Thus adjusting the weight of a given OSD affects the fullness of other OSDs, in ways that would seem to differ depending on which method is used.  As I think you implied in one of your messages, this can sometimes result in the fullness of one or more OSDs climbing relatively sharply, even to a point distinctly above where the previous most-full OSDs were.

I lurked in the recent developers’ meeting where strategies for A Better Way in Luminous were discussed.  While the plans are exciting and hold promise for more uniform utilization, and thus more safely usable raw capacity, I suspect that between dev/test time and the attrition needed to update running clients, those of us running existing RBD clusters won’t be able to take advantage of them for some time.

— Anthony



_______________________________________________
Ceph-large mailing list
Ceph-large@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-large-ceph.com
