Re: Increasing pg_num

Hello,

On Mon, 16 May 2016 22:40:47 +0200 (CEST) Wido den Hollander wrote:

> 
> > Op 16 mei 2016 om 7:56 schreef Chris Dunlop <chris@xxxxxxxxxxxx>:
> > 
> > 
> > Hi,
> > 
> > I'm trying to understand the potential impact on an active cluster of
> > increasing pg_num/pgp_num.
> > 
> > The conventional wisdom, as gleaned from the mailing lists and general
> > google fu, seems to be to increase pg_num followed by pgp_num, both in
> > small increments, to the target size, using "osd max backfills" (and
> > perhaps "osd recovery max active"?) to control the rate and thus
> > performance impact of data movement.
> > 
> > I'd really like to understand what's going on rather than "cargo
> > culting" it.
> > 
> > I'm currently on Hammer, but I'm hoping the answers are broadly
> > applicable across all versions for others following the trail.
> > 
> > Why do we have both pg_num and pgp_num? Given the docs say "The pgp_num
> > should be equal to the pg_num": under what circumstances might you want
> > these different, apart from when actively increasing pg_num first then
> > increasing pgp_num to match? (If they're supposed to be always the
> > same, why not have a single parameter and do the "increase pg_num,
> > then pgp_num" within ceph's internals?)
> > 
> 
> pg_num is the actual number of PGs. This you can increase without any
> actual data moving.
>
Yes and no.

Increasing pg_num will split PGs, which causes potentially massive I/O.
Also, AFAIK that I/O isn't regulated by the various recovery and backfill
parameters.
That's probably why recent Ceph versions will only let you increase pg_num
in smallish increments.
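
For example, with a hypothetical pool named "rbd" going from 1024 to 2048
PGs (names and numbers purely illustrative), the usual approach is to raise
pg_num a little at a time and let the splitting settle before the next step,
something like:

  # check the current values
  ceph osd pool get rbd pg_num
  ceph osd pool get rbd pgp_num

  # raise pg_num in small steps, e.g. 128 at a time
  ceph osd pool set rbd pg_num 1152
  # wait for "ceph -s" to return to HEALTH_OK (splitting done),
  # then repeat with 1280, 1408, ... up to 2048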

Moving data (as in redistributing it amongst the OSDs based on CRUSH) will
indeed not happen until pgp_num is also increased.
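
Once pg_num is at the target, raising pgp_num is what actually triggers the
data movement, and that movement is what "osd max backfills" / "osd recovery
max active" do throttle. A rough sketch, again with the hypothetical "rbd"
pool:

  # throttle backfill/recovery before triggering data movement
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

  # raise pgp_num in steps to match pg_num
  ceph osd pool set rbd pgp_num 1152
  # wait for backfilling to finish ("ceph -s"), then continue up to 2048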

 
Regards,

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/