Re: Increasing pg and pgs

Michael, yes, I did wait for the rebalance to complete.

Thanks
Paras.

On Wed, Oct 21, 2015 at 1:02 PM, Michael Hackett <mhackett@xxxxxxxxxx> wrote:
One thing I forgot to note, Paras: if you are increasing the PG count on a pool by a large number, you will want to increase the PGP value in steps and allow the cluster to rebalance the data, instead of setting pgp-num to immediately match pg-num in one jump. This gives you greater control over how much data is rebalancing in the cluster at any one time.

So, for example, if your pg-num is to be set to 2048 on a pool whose current PG count is 512, you could step up as follows:

ceph osd pool set data pgp_num 1024                     <------- Increase the hashing buckets gradually
Wait for cluster to finish rebalancing

ceph osd pool set data pgp_num 2048                     <------- Increase the hashing buckets gradually
Wait for cluster to finish rebalancing
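
A minimal shell sketch of that stepping loop, in case it is useful; the pool name ("data"), the target pgp_num (2048) and the step size (512) are only placeholders, and it uses HEALTH_OK as a rough stand-in for "rebalance finished":

#!/bin/sh
POOL=data       # placeholder pool name
TARGET=2048     # desired final pgp_num
STEP=512        # how much to raise pgp_num per iteration

CURRENT=$(ceph osd pool get "$POOL" pgp_num | awk '{print $2}')
while [ "$CURRENT" -lt "$TARGET" ]; do
    NEXT=$((CURRENT + STEP))
    [ "$NEXT" -gt "$TARGET" ] && NEXT=$TARGET
    ceph osd pool set "$POOL" pgp_num "$NEXT"
    sleep 30    # give the cluster a moment to start remapping
    # wait for the cluster to settle before the next increase
    until ceph health | grep -q HEALTH_OK; do
        sleep 60
    done
    CURRENT=$NEXT
done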

Thank you,

----- Original Message -----
From: "Paras pradhan" <pradhanparas@xxxxxxxxx>
To: "Michael Hackett" <mhackett@xxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Sent: Wednesday, October 21, 2015 1:53:13 PM
Subject: Re: Increasing pg and pgs

Thanks!

On Wed, Oct 21, 2015 at 12:52 PM, Michael Hackett <mhackett@xxxxxxxxxx>
wrote:

> Hello Paras,
>
> Your pgp-num should mirror your pg-num on a pool. pgp-num is what the
> cluster will use for actual object placement purposes.
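>
> A quick way to compare the two values on a pool (using "rbd" here purely
> as an example pool name) is:
>
> ceph osd pool get rbd pg_num
> ceph osd pool get rbd pgp_num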
>
> ----- Original Message -----
> From: "Paras pradhan" <pradhanparas@xxxxxxxxx>
> To: "Michael Hackett" <mhackett@xxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Sent: Wednesday, October 21, 2015 1:39:11 PM
> Subject: Re: Increasing pg and pgs
>
> Thanks, Michael, for the clarification. I should set pg_num and pgp_num on
> all the pools, am I right? I am asking because setting pg_num on just one
> pool already set the status to HEALTH_OK.
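>
> For reference, one way to check the current values on every pool at once
> (this assumes "ceph osd pool ls" is available on your release; "ceph osd
> lspools" is an older alternative with slightly different output):
>
> for pool in $(ceph osd pool ls); do
>     echo "== $pool =="
>     ceph osd pool get "$pool" pg_num
>     ceph osd pool get "$pool" pgp_num
> done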
>
>
> -Paras.
>
> On Wed, Oct 21, 2015 at 12:21 PM, Michael Hackett <mhackett@xxxxxxxxxx>
> wrote:
>
> > Hello Paras,
> >
> > This is a limit that was added pre-firefly to prevent users from knocking
> > I/O off their clusters for several seconds when PGs are being split in
> > existing pools. The limit does not come into effect when creating new
> > pools, though.
> >
> > If you instead limit the increase to
> >
> > # ceph osd pool set rbd pg_num 1280
> >
> > it should go fine, as that keeps you within the 32-new-PGs-per-OSD limit
> > on the existing pool.
> >
> > This limit applies when expanding PGs on an existing pool because splits
> > are a little more expensive for the OSD, and have to happen synchronously
> > instead of asynchronously.
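> >
> > Rough arithmetic behind that error, assuming the default per-OSD split
> > limit of 32 noted above (I believe the relevant option is
> > mon_osd_max_split_count):
> >
> >     allowed new PGs        = 40 OSDs x 32 = 1280
> >     current pg_num         = 2000 - 1936  = 64
> >     largest allowed pg_num = 64 + 1280    = 1344
> >
> > so raising pg_num to 1280 creates 1216 new PGs, which stays inside the
> > limit.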
> >
> > I believe Greg covered this in a previous email thread:
> >
> > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-July/041399.html
> >
> > Thanks,
> >
> > ----- Original Message -----
> > From: "Paras pradhan" <pradhanparas@xxxxxxxxx>
> > To: ceph-users@xxxxxxxxxxxxxx
> > Sent: Wednesday, October 21, 2015 12:31:57 PM
> > Subject: Increasing pg and pgs
> >
> > Hi,
> >
> > When I check ceph health I see "HEALTH_WARN too few pgs per osd (11 < min
> > 20)".
> >
> > I have 40 OSDs and tried to increase pg_num to 2000 with the following
> > command. It says it is creating 1936, but I am not sure whether it is
> > working or not. Is there a way to check the progress? More than 48 hours
> > have passed and I still see the health warning.
> >
> > --
> >
> >
> > root@node-30:~# ceph osd pool set rbd pg_num 2000
> >
> > Error E2BIG: specified pg_num 2000 is too large (creating 1936 new PGs on
> > ~40 OSDs exceeds per-OSD max of 32)
> >
> > --
> >
> >
> >
> >
> > Thanks in advance
> >
> > Paras.
> >
> >
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> > --
> > Michael Hackett
> > Software Maintenance Engineer CEPH Storage
> > Phone: 1-978-399-2196
> > Westford, MA
> >
> >
>
> --
> Michael Hackett
> Software Maintenance Engineer CEPH Storage
> Phone: 1-978-399-2196
> Westford, MA
>

--
Michael Hackett
Software Maintenance Engineer CEPH Storage
Phone: 1-978-399-2196
Westford, MA


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
