One small observation:
I've noticed that `ceph osd pool ls detail | grep cephfs.cephfs01.data` shows that pg_num has been increased, but pgp_num is still at the old value.
You will need to raise pgp_num as well for data to actually migrate to the new PGs: https://docs.ceph.com/en/mimic/rados/operations/placement-groups/#set-the-number-of-placement-groups
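For example (pool name taken from your output above; 256 below is only an illustrative value, use whatever pg_num is actually set to now):

```
# Compare the current pg_num and pgp_num for the pool
ceph osd pool get cephfs.cephfs01.data pg_num
ceph osd pool get cephfs.cephfs01.data pgp_num

# Raise pgp_num to match pg_num so the new PGs actually get remapped (256 is just an example)
ceph osd pool set cephfs.cephfs01.data pgp_num 256
```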


Best,
Laimis J.

> On 5 Jan 2025, at 16:11, Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:
> 
> 
>>> What reweights have been set for the top OSDs (ceph osd df tree)?
>>> 
>> Right now they are all at 1.0. I had to lower them to something close to
>> 0.2 in order to free up space, but I changed them back to 1.0. Should I
>> lower them while the backfill is happening?
> 
> Old-style legacy override reweights don't mesh well with the balancer.  Best to leave them at 1.00.
> 
> 0.2 is pretty extreme; back in the day I rarely went below 0.8.
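> 
> If any of them are still below 1.00, a minimal way to put them back (the OSD id 12 below is only a placeholder):
> 
> ```
> # The REWEIGHT column shows the legacy override; 1.00000 means no override
> ceph osd df tree
> # Reset the override reweight on a single OSD back to 1.0
> ceph osd reweight 12 1.0
> ```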
> 
>>> ```
>>> "optimize_result": "Too many objects (0.355160 > 0.050000) are misplaced;
>>> try again late
>>> ```
> 
> That should clear.  The balancer doesn't want to stir up trouble if the cluster already has a bunch of backfill / recovery going on.  Patience!
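> 
> You can watch the misplaced ratio fall and see when the balancer is willing to act again with, e.g.:
> 
> ```
> # Reports the number and percentage of misplaced objects
> ceph status
> # Shows whether the balancer module is active and its last optimization result
> ceph balancer status
> ```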
> 
>>> default.rgw.buckets.data    10  1024  197 TiB  133.75M  592 TiB  93.69  13 TiB
>>> default.rgw.buckets.non-ec  11    32   78 MiB    1.43M   17 GiB
> 
> That's odd that the data pool is that full but the others aren't.
> 
> Please send `ceph osd crush rule dump` and `ceph osd dump | grep pool`.
> 
> 
>>> 
>>> I also tried changing the following but it does not seem to persist:
> 
> Could be an mclock thing.  
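> 
> A quick way to check (assuming a recent release where mclock is the default scheduler; the override flag below only matters if it is):
> 
> ```
> # Which op queue scheduler the OSDs are using (wpq or mclock_scheduler)
> ceph config get osd osd_op_queue
> # With mclock, manually set recovery/backfill limits are ignored unless this override is enabled
> ceph config set osd osd_mclock_override_recovery_settings true
> ```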
> 
>>> 1. Why did I end up with so many misplaced PGs when there were no changes
>>> on the cluster: number of OSDs, hosts, etc.?
> 
> Probably a result of the autoscaler splitting PGs or of some change to CRUSH rules such that some data can't be placed.
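> 
> The autoscaler's view is easy to check:
> 
> ```
> # Current vs. target PG counts per pool, and whether the autoscaler wants to split or merge
> ceph osd pool autoscale-status
> ```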
> 
>>> 2. Is it OK to change the target_max_misplaced_ratio to something higher
>>> than .05 so the balancer would work and I wouldn't have to constantly
>>> rebalance the OSDs manually?
> 
> I wouldn't; that's a symptom, not the disease.
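> 
> If you just want to see the current threshold without changing it:
> 
> ```
> # Default is 0.05; this only reads the value
> ceph config get mgr target_max_misplaced_ratio
> ```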
>>> Bruno
>> --
>> Bruno Gomes Pessanha

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



