Re: pool pgp_num not updated

What is the current cluster status, is it healthy? Maybe increasing pg_num would hit the limit of mon_max_pg_per_osd? Can you share 'ceph -s' output?
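For reference, both can be checked with something like this (a sketch; 'ceph config get' assumes Mimic or later, on older releases the value can be read from a mon admin socket instead):

ceph -s
ceph config get mon mon_max_pg_per_osd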


Quoting Mac Wynkoop <mwynkoop@xxxxxxxxxxxx>:

Right, both Norman and I set the pg_num before the pgp_num. For example,
here are my current pool settings:


*"pool 40 '*redacted*.rgw.buckets.data' erasure size 9 min_size 7
crush_rule 2 object_hash rjenkins pg_num 2048 pgp_num 1024 pgp_num_target
2048 last_change 8458830 lfor 0/0/8445757 flags
hashpspool,ec_overwrites,nodelete,backfillfull stripe_width 24576 fast_read
1 application rgw"*
So, when I set:

 "*ceph osd pool set hou-ec-1.rgw.buckets.data pgp_num 2048*"

it returns:

"*set pool 40 pgp_num to 2048*"

But upon checking the pool details again:

"*pool 40 '*redacted*.rgw.buckets.data' erasure size 9 min_size 7
crush_rule 2 object_hash rjenkins pg_num 2048 pgp_num 1024 pgp_num_target
2048 last_change 8458870 lfor 0/0/8445757 flags
hashpspool,ec_overwrites,nodelete,backfillfull stripe_width 24576 fast_read
1 application rgw*"

and the pgp_num value does not increase. Am I just doing something
totally wrong?
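
For reference, the value can also be polled directly to see whether pgp_num
is creeping up toward pgp_num_target in the background (a sketch, reusing
the pool name from the set command above):

ceph osd pool get hou-ec-1.rgw.buckets.data pgp_num
ceph osd dump | grep "pool 40 "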

Thanks,
Mac Wynkoop




On Tue, Oct 6, 2020 at 2:32 PM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:

pg_num and pgp_num need to be the same, no?

3.5.1. Set the Number of PGs

To set the number of placement groups in a pool, you must specify the
number of placement groups at the time you create the pool. See Create a
Pool for details. Once you set placement groups for a pool, you can
increase the number of placement groups (but you cannot decrease the
number of placement groups). To increase the number of placement groups,
execute the following:

ceph osd pool set {pool-name} pg_num {pg_num}

Once you increase the number of placement groups, you must also increase
the number of placement groups for placement (pgp_num) before your
cluster will rebalance. The pgp_num should be equal to the pg_num. To
increase the number of placement groups for placement, execute the
following:

ceph osd pool set {pool-name} pgp_num {pgp_num}


https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/storage_strategies_guide/placement_groups_pgs
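
For example, on a hypothetical pool named 'mypool' (a sketch, not one of
your pools):

ceph osd pool set mypool pg_num 2048
ceph osd pool set mypool pgp_num 2048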

-----Original Message-----
To: norman
Cc: ceph-users
Subject: Re: pool pgp_num not updated

Hi everyone,

I'm seeing a similar issue here. Any ideas on this?
Mac Wynkoop



On Sun, Sep 6, 2020 at 11:09 PM norman <norman.kern@xxxxxxx> wrote:

> Hi guys,
>
> When I updated the pg_num of a pool, I found it did not work (no
> rebalance happened). Does anyone know the reason? Pool's info:
>
> pool 21 'openstack-volumes-rs' replicated size 3 min_size 2 crush_rule
> 21 object_hash rjenkins pg_num 1024 pgp_num 512 pgp_num_target 1024
> autoscale_mode warn last_change 85103 lfor 82044/82044/82044 flags
> hashpspool,nodelete,selfmanaged_snaps stripe_width 0 application rbd
>          removed_snaps
> [1~1e6,1e8~300,4e9~18,502~3f,542~11,554~1a,56f~1d7]
> pool 22 'openstack-vms-rs' replicated size 3 min_size 2 crush_rule 22
> object_hash rjenkins pg_num 512 pgp_num 512 pg_num_target 256
> pgp_num_target 256 autoscale_mode warn last_change 84769 lfor
> 0/0/55294 flags hashpspool,nodelete,selfmanaged_snaps stripe_width 0
> application rbd
>
> The pgp_num_target is set, but pgp_num not set.
>
> I had scaled out new OSDs and backfilling was still in progress before
> setting the value; is that the reason?


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


