Re: Many misplaced PG's, full OSD's and a good amount of manual intervention to keep my Ceph cluster alive.


 



>> What reweights have been set for the top OSDs (ceph osd df tree)?
>> 
> Right now they are all at 1.0. I had to lower them to something close to
> 0.2 in order to free up space but I changed them back to 1.0. Should I
> lower them while the backfill is happening?

Legacy override reweights don’t mesh well with the balancer. Best to leave them at 1.00.

0.2 is pretty extreme; back in the day I rarely went below 0.8.
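If you want to double-check where things stand, something like this (standard Ceph CLI; osd.12 is just a placeholder) shows the current override reweights and puts any stragglers back to 1.0:

```
# Show per-OSD utilization and the REWEIGHT column (the override reweight)
ceph osd df tree

# Reset a single OSD's override reweight back to 1.0 (12 is an example OSD id)
ceph osd reweight 12 1.0
```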

>> ```
>> "optimize_result": "Too many objects (0.355160 > 0.050000) are misplaced; try again later"
>> ```

That should clear.  The balancer doesn’t want to stir up trouble if the cluster already has a bunch of backfill / recovery going on.  Patience!
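To keep an eye on it while you wait, the usual checks are nothing exotic:

```
# Overall recovery/backfill progress and the misplaced object ratio
ceph -s

# Whether the balancer is enabled and which mode it's running in
ceph balancer status
```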

>> default.rgw.buckets.data    10  1024  197 TiB  133.75M  592 TiB  93.69  13 TiB
>> default.rgw.buckets.non-ec  11    32   78 MiB    1.43M   17 GiB

It’s odd that the data pool is that full while the others aren’t.

Please send the output of `ceph osd crush rule dump` and `ceph osd dump | grep pool`.
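When you have that output, the things worth eyeballing are roughly these (illustrative, not a full checklist):

```
# Dump all CRUSH rules; for each rule, look at the "steps" array:
#  - the "take" step's item_name (which root/bucket the rule starts from)
#  - the failure-domain "type" in the choose/chooseleaf step (host, rack, ...)
ceph osd crush rule dump

# List the pools; note which crush_rule each pool uses and its size/pg_num
ceph osd dump | grep pool
```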


>> 
>> I also tried changing the following but it does not seem to persist:

Could be an mClock thing: with the mClock scheduler active, the OSDs won’t accept changes to some recovery/backfill options unless overrides are explicitly enabled.
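If the option you were changing is one of the recovery/backfill limits (osd_max_backfills, osd_recovery_max_active, ...), a sketch of how to make it stick under mClock, assuming that’s what you hit:

```
# Check what an OSD is actually running with (osd.0 is just an example)
ceph config show osd.0 osd_max_backfills

# Allow manual recovery/backfill settings to take effect under mClock
ceph config set osd osd_mclock_override_recovery_settings true

# Then the change should persist
ceph config set osd osd_max_backfills 2
```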

>> 1. Why I ended up with so many misplaced PG's since there were no changes
>> on the cluster: number of osd's, hosts, etc.

Probably a result of the autoscaler splitting PGs or of some change to CRUSH rules such that some data can’t be placed.
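You can usually confirm or rule out the PG-split theory from the autoscaler itself, e.g.:

```
# Per-pool PG targets; a pool whose PG_NUM is still moving toward NEW PG_NUM
# is in the middle of a split
ceph osd pool autoscale-status

# Shows pg_num per pool and, while a split/merge is in flight, the target it is heading to
ceph osd pool ls detail
```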

>> 2. Is it ok to change the target_max_misplaced_ratio to something higher
>> than .05 so the autobalancer would work and I wouldn't have to constantly
>> rebalance the osd's manually?

I wouldn’t; that’s a symptom, not the disease.
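For reference, you can check the current value (default 0.05) without changing it:

```
# mgr-level option consulted by the balancer and the pg autoscaler
ceph config get mgr target_max_misplaced_ratio
```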
>> Bruno
> --
> Bruno Gomes Pessanha
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



