Re: PGs increasing number

Ceph is gradually migrating object data to the new placement groups.
Eventually pgp_num will reach 256. It might take a few days.
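
If you want to follow the progress, something along these lines should work
(assuming the pool is still named volumes):

  ceph osd pool get volumes pgp_num   # should slowly approach 256
  ceph -s                             # shows the misplaced object percentage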

I don't know about removed_snaps_queue, but I don't think it is related to the
placement group change. You can search the mailing list archive for more
information.

On Sat, 9 Mar 2024 at 18:01, Michel Niyoyita <micou12@xxxxxxxxx> wrote:

> Hello Pierre
>
> Thank you for your reply. This is the output of the above command.
>
> pool 6 'volumes' replicated size 3 min_size 2 crush_rule 0 object_hash
> rjenkins
> pg_num 256 pgp_num 174 pgp_num_target 256 autoscale_mode off last_change
> 132132
> lfor 0/0/130849 flags hashpspool,selfmanaged_snaps stripe_width 0
> application rbd
>         removed_snaps_queue
> [1465c~12,1466f~10,14683~1,14686~47,146cf~2,146d2~5,146d8~1,14713~12,
> 1474a~b,1476d~c,14784~1a,147ae~2,147ca~18,147ee~4,147f3~5,147f9~3,147fd~1,147ff~1,14803~1,14807~20,
> 14829~18,14842~b,148cb~5,148dc~2,148e0~37,1491b~3,14922~1f,14948~10,14967~3,1496b~2,1498f~9,149a8~15,
> 149dd~5,149f4~5,149fe~6,14a1b~1c,14a47~2,14a57~1b,14a74~1,14a77~8,14a90~5,14a9e~16,14ac3~18,14ae1~7,
> 14ae9~6,14b0b~10,14b33~7,14b4f~23,14b95~b,14bab~3e,14bea~8,14c02~2,14c10~11,14c29~1,14c2c~9,14c36~18,
> 14c68~19,14c97~10,14cbc~7,14ce1~9,14cfa~f,14d1a~5,14d36~b,14d4c~1e,14d6c~9,14d91~21,14db4~1,14db6~36,
> 14ded~7,14dfc~3,14e04~14,14e2e~9,14e64~6,14e70~4,14ea1~b,14ec8~1c,14ee9~11,14f09~5,14f13~7,14f1c~2,
> 14f3a~24,14f5f~36,14f96~3,14fd1~3,14fd7~1,14fe1~13,15028~21,1504a~1,1505d~1,1506b~6,1507c~23,150a6~9,
> 150bb~22,150ed~7,150f5~3d,15138~4,15140~1,15142~9,15150~1,1515e~1,1517c~7,151a3~15,151c1~5,151d7~5,
> 151ed~6,15217~1c,15243~2,15253~2,15257~1,15259~c,152af~6,152c6~a,152d1~1,153b6~b]
>
> Is this normal?
>
> Michel
>
> On Sat, Mar 9, 2024 at 5:29 PM Pierre Riteau <pierre@xxxxxxxxxxxx> wrote:
>
>> Hi Michel,
>>
>> This is expected behaviour. As described in the Nautilus release notes [1],
>> the `target_max_misplaced_ratio` option throttles both balancer activity
>> and automated adjustments to pgp_num (normally as a result of pg_num
>> changes). Its default value is 0.05 (5%).
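>>
>> If you find the adjustment too slow, you can check the current value and,
>> if needed, raise it, for example:
>>
>>   ceph config get mgr target_max_misplaced_ratio
>>   ceph config set mgr target_max_misplaced_ratio .07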
>>
>> Use `ceph osd pool ls detail` to monitor your pool's pg_num and pgp_num
>> values. They should gradually be increasing towards 256.
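>>
>> For instance, running something like
>>
>>   ceph osd pool ls detail | grep volumes
>>
>> every few minutes should show pgp_num creeping up towards pg_num.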
>>
>> [1] https://docs.ceph.com/en/latest/releases/nautilus/
>>
>> On Sat, 9 Mar 2024 at 09:39, Michel Niyoyita <micou12@xxxxxxxxx> wrote:
>>
>>> Hello team,
>>>
>>> I have increased my volumes pool from 128 PGs to 256 PGs. The activity
>>> started yesterday at 5 PM with 5.733% of objects misplaced; after 4 to 5
>>> hours it had dropped to 5.022%, but then it went back to the initial
>>> 5.733%. Kindly help me solve this issue. I am using Ceph Pacific deployed
>>> with ceph-ansible on Ubuntu, and the cluster is in production with an
>>> OpenStack hypervisor.
>>>
>>> Best regards
>>>
>>> Michel
>>> _______________________________________________
>>> ceph-users mailing list -- ceph-users@xxxxxxx
>>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>>
>>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



