Re: PGs increasing number

Hi Michel,

This is expected behaviour. As described in the Nautilus release notes [1], the
`target_max_misplaced_ratio` option throttles both balancer activity and
automated adjustments to pgp_num (normally made as a result of pg_num changes).
Its default value is 0.05 (5%). Each time the misplaced ratio falls below that
threshold, the next batch of pgp_num increases is applied, which is why you see
the percentage drop and then climb back towards roughly 5%.
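
If you can tolerate more data movement and want the splits to finish sooner,
you can raise the throttle via the mgr config. A minimal sketch (the 0.07
value is just an illustration, pick something suited to your cluster):

    # check the current throttle (defaults to 0.05 = 5%)
    ceph config get mgr target_max_misplaced_ratio

    # optionally allow up to 7% of objects to be misplaced at a time
    ceph config set mgr target_max_misplaced_ratio .07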

Use `ceph osd pool ls detail` to monitor your pool's pg_num and pgp_num
values; they should gradually increase towards 256.
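
For example (assuming your pool is named "volumes", as in your message, and
that `watch` is available on the node you run this from):

    # re-check every 30 seconds and watch pg_num/pgp_num creep up to 256
    watch -n 30 'ceph osd pool ls detail | grep volumes'

    # or query the two values directly
    ceph osd pool get volumes pg_num
    ceph osd pool get volumes pgp_num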

[1] https://docs.ceph.com/en/latest/releases/nautilus/

On Sat, 9 Mar 2024 at 09:39, Michel Niyoyita <micou12@xxxxxxxxx> wrote:

> Hello team,
>
> I have increased my volumes pool from 128 PGs to 256 PGs. The activity
> started yesterday at 5 PM, when misplaced objects were at 5.733%. After 4
> to 5 hours it reached 5.022%, and after that it went back to the initial
> percentage of 5.733%. Kindly help to solve the issue. I am using Ceph
> Pacific deployed with ceph-ansible on Ubuntu. The cluster is in production
> with an OpenStack hypervisor.
>
> Best regards
>
> Michel
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>