Re: Adjust PG PGP placement groups on the fly

Hi Vasu,

Thank you for your input! I was very hesitant about changing those on a live system.
As I understand it, I don’t need to wait for the cluster to rebalance between the pg_num and pgp_num commands, right?

Regards,

Andrey Ptashnik


From: Vasu Kulkarni <vakulkar@xxxxxxxxxx>
Date: Friday, November 4, 2016 at 12:00 PM
To: Andrey Ptashnik <APtashnik@xxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] Adjust PG PGP placement groups on the fly

From the docs (it's also important to read what pgp_num does): http://docs.ceph.com/docs/jewel/rados/operations/placement-groups/

To set the number of placement groups in a pool, you must specify the number of placement groups at the time you create the pool. See Create a Pool for details. Once you’ve set placement groups for a pool, you may increase the number of placement groups (but you cannot decrease the number of placement groups). To increase the number of placement groups, execute the following:

ceph osd pool set {pool-name} pg_num {pg_num}


Once you increase the number of placement groups, you must also increase the number of placement groups for placement (pgp_num) before your cluster will rebalance. The pgp_num will be the number of placement groups that will be considered for placement by the CRUSH algorithm. Increasing pg_num splits the placement groups, but data will not be migrated to the newer placement groups until the number of placement groups for placement, i.e. pgp_num, is increased. The pgp_num should be equal to pg_num. To increase the number of placement groups for placement, execute the following:

ceph osd pool set {pool-name} pgp_num {pgp_num}
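
As a concrete sketch of the sequence above (the pool name "rbd" and the counts 128 -> 256 are assumptions for illustration, not from this thread; on a busy live cluster it is safer to raise the values in smaller steps and watch recovery between them):

```shell
# Check the current values first (assumed pool "rbd").
ceph osd pool get rbd pg_num
ceph osd pool get rbd pgp_num

# Step 1: split the placement groups. No data moves yet.
ceph osd pool set rbd pg_num 256

# Step 2: make CRUSH consider the new PGs for placement.
# This is what actually triggers the rebalance/backfill.
ceph osd pool set rbd pgp_num 256

# Watch the cluster state until backfill finishes and
# health returns to HEALTH_OK.
ceph -s
```

Note that pg_num can only be increased, never decreased, on Jewel, so it is worth double-checking the target value before running step 1.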


On Fri, Nov 4, 2016 at 9:52 AM, Andrey Ptashnik <APtashnik@xxxxxxxxx> wrote:
Hello Ceph team,

Is it possible to increase number of placement groups on a live system without any issues and data loss? If so what is the correct sequence of steps?

Regards,

Andrey Ptashnik

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

