I've sent a PR (https://github.com/ceph/ceph/pull/25196) for the issue below, which might help.
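In the meantime, a stale entry can also be removed by hand with `ceph osd rm-pg-upmap-items`. A minimal sketch (the PG id is the one from the report below; the call simply drops the whole upmap exception for that PG):
```
#!/usr/bin/env python3
# Minimal sketch: drop the pg_upmap_items exception for a single PG by hand.
# "2.81" is the PG id from the report below; substitute the stale entry you
# want to remove. rm-pg-upmap-items removes the whole mapping for that PG.
import subprocess

pgid = "2.81"
subprocess.check_call(["ceph", "osd", "rm-pg-upmap-items", pgid])
```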
Original message
From: Rene Diepstraten <rene@xxxxxxxxxxxx>
To: Dan van der Ster <dan@xxxxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Date: 21 November 2018 05:26
Subject: Re: [ceph-users] Stale pg_upmap_items entries after pg increase
Thanks very much; I can use this.
It would be nice if the balancer module had functionality to check for
and clean up these stale entries.
I may create an issue for this.
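Something along these lines might be a starting point for such a check. It is only a rough sketch (it assumes the JSON layouts of `ceph osd dump` and `ceph pg dump pgs_brief` noted in the comments, plus the stock `rm-pg-upmap-items` command), so the script linked below is the safer choice:
```
#!/usr/bin/env python3
# Rough sketch only: print an "rm" command for every pg_upmap_items entry
# whose OSDs no longer appear in the PG's current up set.
# Assumes Luminous-era JSON layouts; field names may differ on other releases.
import json
import subprocess

def ceph_json(*args):
    """Run a ceph command and return its parsed JSON output."""
    out = subprocess.check_output(("ceph",) + args + ("--format=json",))
    return json.loads(out)

osd_dump = ceph_json("osd", "dump")
pg_brief = ceph_json("pg", "dump", "pgs_brief")
# pgs_brief is a plain list on some releases and wrapped in "pg_stats" on others.
pgs = pg_brief if isinstance(pg_brief, list) else pg_brief.get("pg_stats", [])
up = {p["pgid"]: set(p["up"]) for p in pgs}

for item in osd_dump.get("pg_upmap_items", []):
    pgid = item["pgid"]
    mapped = {m["from"] for m in item["mappings"]} | {m["to"] for m in item["mappings"]}
    # If neither side of any from->to pair is in the PG's up set, the entry
    # no longer has any effect (as with pg 2.81 below) and can be dropped.
    if pgid in up and not mapped & up[pgid]:
        print("ceph osd rm-pg-upmap-items %s" % pgid)
```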
On 20/11/2018 17:37, Dan van der Ster wrote:
> I've noticed the same and have a script to help find these:
>
> https://github.com/cernceph/ceph-scripts/blob/master/tools/clean-upmaps.py
>
> -- dan
>
> On Tue, Nov 20, 2018 at 5:26 PM Rene Diepstraten <rene@xxxxxxxxxxxx> wrote:
>>
>> Hi.
>>
>> Today I've been looking at upmap and the balancer in upmap mode.
>> The balancer has run previously in upmap mode and today, after
>> expansion, I have increased the pgs of two pools.
>>
>> I found that there are pg_upmap_items that redirect from osds that are
>> not active for the pg:
>>
>> See this pg, which has an upmap redirect from osd.6 to osd.14:
>> ```
>> root@mon01:~# ceph osd dump | grep upmap | grep -w '2\.81'
>> pg_upmap_items 2.81 [6,14]
>> ```
>>
>> The pg is actually present on other osds:
>> ```
>> root@mon01:~# ceph pg dump | awk '/^2\.81/ {printf "PG %s is active on osds %s\n", $1, $15}'
>> dumped all
>> PG 2.81 is active on osds [39,30,51]
>> ```
>>
>> The pg 2.81 is active+clean, so there's no reference to osd.6 or osd.14 anywhere.
>>
>> Is this expected behaviour? Is there any way to 'cleanup' the upmap
>> entries to remove these stale ones?
>>
>> Thanks in advance.
>>
>>
>> Kind regards,
>>
>> René Diepstraten
>> PCextreme B.V.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com