Re: Autoscale recommendation seems too small + it broke my pool...

Thanks, Eugen

On 22/06/2020 10:27 pm, Eugen Block wrote:
> Regarding the inactive PGs, how are your pools configured? Can you share
>
> ceph osd pool ls detail
>
> It could be an issue with min_size (is it also set to 3?).


pool 2 'ceph' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 512 pgp_num 512 autoscale_mode warn last_change 8516 lfor 0/5823/5807 flags hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm lz4 compression_mode aggressive application rbd
        removed_snaps [1~3,5~2]
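
For reference, checking or raising min_size on this pool would look roughly like the following (standard ceph CLI; the pool name 'ceph' and the current min_size of 1 come from the output above, and the value 2 is only an illustrative choice, not a recommendation from this thread):

ceph osd pool get ceph min_size       # currently reports: min_size: 1
ceph osd pool set ceph min_size 2     # e.g. require two replicas available before serving I/O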

Autoscale is no longer recommending a pg_num change (I have it set to warn). Back when this happened I was still setting the ceph cluster up - incrementally moving VMs to the ceph pool and adding OSDs as space was freed up on the old storage (lizardfs) - so not an ideal setup :). For a while the pool was in a constant state of imbalance and redistribution; I probably should have disabled the autoscaler until I finished.
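
For reference, the per-pool autoscaler can be switched off (or left at warn) via pg_autoscale_mode, and its current view checked with autoscale-status - a minimal sketch using the pool name 'ceph' from above:

ceph osd pool set ceph pg_autoscale_mode off    # stop autoscale warnings/changes for this pool
ceph osd pool set ceph pg_autoscale_mode warn   # or: only warn, never change pg_num automatically
ceph osd pool autoscale-status                  # prints the table shown below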


Finished now:

POOL   SIZE TARGET SIZE RATE RAW CAPACITY  RATIO TARGET RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE
ceph  3230G              3.0       34912G 0.2776               1.0    512            warn

--
Lindsay
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



