Re: pool pgp_num not updated

Hi everyone,

I'm seeing a similar issue here. Any ideas on this?
Mac Wynkoop,



On Sun, Sep 6, 2020 at 11:09 PM norman <norman.kern@xxxxxxx> wrote:

> Hi guys,
>
> When I update the pg_num of a pool, it doesn't seem to take effect (no
> rebalancing happens). Does anyone know the reason? The pools' info:
>
> pool 21 'openstack-volumes-rs' replicated size 3 min_size 2 crush_rule
> 21 object_hash rjenkins pg_num 1024 pgp_num 512 pgp_num_target 1024
> autoscale_mode warn last_change 85103 lfor 82044/82044/82044 flags
> hashpspool,nodelete,selfmanaged_snaps stripe_width 0 application rbd
>          removed_snaps [1~1e6,1e8~300,4e9~18,502~3f,542~11,554~1a,56f~1d7]
> pool 22 'openstack-vms-rs' replicated size 3 min_size 2 crush_rule 22
> object_hash rjenkins pg_num 512 pgp_num 512 pg_num_target 256
> pgp_num_target 256 autoscale_mode warn last_change 84769 lfor 0/0/55294
> flags hashpspool,nodelete,selfmanaged_snaps stripe_width 0 application rbd
>
> The pgp_num_target is set, but pgp_num has not been updated to match it.
>
> I scaled out new OSDs, and the cluster was still backfilling before I
> set the value; could that be the reason?
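>
For reference, a minimal sketch of commands that could be used to inspect
the split progress, assuming a Nautilus-or-later cluster where the mgr
raises pgp_num toward pgp_num_target in steps and holds back while the
fraction of misplaced objects is above target_max_misplaced_ratio (the
pool name below is just the one from the output above):

    # current vs. target values for the pool
    ceph osd pool ls detail | grep openstack-volumes-rs
    ceph osd pool get openstack-volumes-rs pgp_num

    # how much data is still misplaced / backfilling
    ceph -s

    # the throttle used when stepping pgp_num (default 0.05)
    ceph config get mgr target_max_misplaced_ratio

If the cluster is still backfilling from the new OSDs, pgp_num would be
expected to stay behind pgp_num_target until the misplaced ratio drops
below that threshold.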
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


