Re: Many misplaced PG's, full OSD's and a good amount of manual intervention to keep my Ceph cluster alive.

>> Do you use the autoscaler or did you trigger a manual PG increment of the
>> pool?
> 
> The pool had autoscale enabled until 2 days ago when I thought it was
> better to change things manually in order to have a more deterministic
> result. Yes, I wanted to increase from "1" to something like "1024", but it
> looks like it was capped at 144 no matter what I do:

Run `ceph osd df` and look at the PGS column.  Could be you’re hitting the per-OSD PG limit on some OSDs.  It’s odd that it stopped at a non-power-of-two.
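
For example, something along these lines shows the per-OSD PG counts and the configured cap (option names as in recent releases; check what your version actually uses):

    # PGS column = number of PGs each OSD currently hosts
    ceph osd df

    # soft cap on PGs per OSD; the hard cap is this value times
    # osd_max_pg_per_osd_hard_ratio
    ceph config get mon mon_max_pg_per_osd
    ceph config get osd osd_max_pg_per_osd_hard_ratio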

> Is it correct to say that every PG/OSD change can potentially cause data
> misplacements, unbalanced OSDs and long backfills? I'll be way more
> careful before tuning it if that's the case.

The autoscaler will usually only bump pg_num for a pool when the current value is less than half of what it calculates it should be.
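
You can see what it thinks each pool should have with:

    ceph osd pool autoscale-status

The PG_NUM and NEW PG_NUM columns show the current value and the value it wants to move toward; recent releases also show a BULK column for the flag mentioned below.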

I suggest setting the ‘bulk’ flag one pool at a time to effectively pre-split PGs, as if the pool were already full of data.  Ignore .mgr.  Start with the pools with the fewest PGs, and let the cluster settle between each adjustment.  That way the autoscaler will only make further changes if the cluster topology changes.
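
Roughly (the pool name is a placeholder):

    # mark one pool as bulk so the autoscaler pre-splits its PGs
    ceph osd pool set <pool> bulk true

    # let backfill settle before moving on to the next pool
    ceph -s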


> Thank you both so much! It definitely helped me to understand Ceph better.
> It is kind of a steep curve :).

We’re a community!   That curve is way gentler than it used to be.  
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



