Re: Increasing pg and pgs

Hello Paras,

Your pgp_num should mirror your pg_num on a pool. pgp_num is what the cluster will use for actual object placement purposes.
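For example, a quick sketch using the rbd pool from this thread (repeat for any pool whose pg_num you raised):

# ceph osd pool set rbd pgp_num 1280

You can verify both values afterwards with:

# ceph osd pool get rbd pg_num
# ceph osd pool get rbd pgp_num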

----- Original Message -----
From: "Paras pradhan" <pradhanparas@xxxxxxxxx>
To: "Michael Hackett" <mhackett@xxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Sent: Wednesday, October 21, 2015 1:39:11 PM
Subject: Re:  Increasing pg and pgs

Thanks Michael for the clarification. I should set the pg_num and pgp_num on
all the pools, am I right? I am asking because setting the pg_num on just
one pool already set the status to HEALTH_OK.
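For reference, each pool's current pg_num and pgp_num can be listed with:

# ceph osd dump | grep pg_num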


-Paras.

On Wed, Oct 21, 2015 at 12:21 PM, Michael Hackett <mhackett@xxxxxxxxxx>
wrote:

> Hello Paras,
>
> This is a limit that was added pre-firefly to prevent users from knocking
> IO off clusters for several seconds when PGs are being split in existing
> pools. This limit is not called into effect when creating new pools, though.
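>
> As a sketch (pool name and PG count here are only illustrative), a new pool
> can be created with a high PG count in one step:
>
> # ceph osd pool create newpool 2048 2048
>
> where the two numbers are pg_num and pgp_num.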
>
> If you instead limit the increase to
>
> # ceph osd pool set rbd pg_num 1280
>
> it should go through fine, as that stays within the limit of 32 new PGs
> per OSD on an existing pool.
>
> This limit applies when expanding PGs on an existing pool because splits
> are a little more expensive for the OSD, and have to happen synchronously
> instead of asynchronously.
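>
> To spell out the arithmetic: with ~40 OSDs the per-request cap is
> 40 x 32 = 1280 new PGs. The error below implies the rbd pool currently has
> 2000 - 1936 = 64 PGs, so pg_num 2000 would split off 1936 new PGs (over
> the cap), while pg_num 1280 creates 1280 - 64 = 1216 (under it).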
>
> I believe Greg covered this in a previous email thread:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-July/041399.html
>
> Thanks,
>
> ----- Original Message -----
> From: "Paras pradhan" <pradhanparas@xxxxxxxxx>
> To: ceph-users@xxxxxxxxxxxxxx
> Sent: Wednesday, October 21, 2015 12:31:57 PM
> Subject:  Increasing pg and pgs
>
> Hi,
>
> When I check ceph health I see "HEALTH_WARN too few pgs per osd (11 < min
> 20)"
>
> I have 40 OSDs and tried to increase pg_num to 2000 with the following
> command. It says it is creating 1936 new PGs, but I am not sure whether it
> is working. Is there a way to check the progress? More than 48 hours have
> passed and I still see the health warning.
>
> --
>
>
> root@node-30:~# ceph osd pool set rbd pg_num 2000
>
> Error E2BIG: specified pg_num 2000 is too large (creating 1936 new PGs on
> ~40 OSDs exceeds per-OSD max of 32)
>
> --
>
>
>
>
> Thanks in advance
>
> Paras.
>
>
>
>
> --
> Michael Hackett
> Software Maintenance Engineer CEPH Storage
> Phone: 1-978-399-2196
> Westford, MA
>
>

-- 
Michael Hackett 
Software Maintenance Engineer CEPH Storage 
Phone: 1-978-399-2196 
Westford, MA 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


