Re: Multisite reshard stale instances

Hey there again,

On 01/10/2021 17:35, Szabo, Istvan (Agoda) wrote:
In my setup I've disabled dynamic resharding and I preshard each bucket that needs to hold more than 1.1 million objects.

I also use 11 shards as the default; see my ML post https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/UFXPAINBV3DQXABSPY5XLMYFA3UGF5LF/#OK7XMNRFHTF3EQU6SAWPLKEVGVNV4XET
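For reference, the relevant settings on my side look roughly like this (option names and values are from memory, so treat it as a sketch rather than a verified config):

    # ceph.conf, RGW section: no dynamic resharding, 11 index shards by default
    rgw_dynamic_resharding = false
    rgw_override_bucket_index_max_shards = 11

Buckets expected to grow well past ~1.1 million objects get a higher shard count up front instead of relying on dynamic resharding.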


I don't think it's possible to clean them up: even if you run the command with the "really-really mean it" flag, it does nothing. I've already tried.
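For completeness, what I ran was along these lines (exact flag spelling from memory, so it may differ slightly):

    radosgw-admin reshard stale-instances list
    radosgw-admin reshard stale-instances rm --yes-i-really-mean-it

The list command still shows all the entries afterwards; the rm pass simply returns without removing anything on our multisite setup.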

Searching a little more through older ML posts, it appears we are not the only ones, and that those "stale" instances are to be expected when deleting buckets in a multisite setup:

 * http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033575.html
 * https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/7CQZY6D2HLPLZAWKQPT4D74WLQ6GE3U5/#ZLAFDLS4MKOUPAIWRY73IBYJCVFYMECB

But even after running "data sync --init" again I still see stale instances, even though both metadata and data report as "caught up" on both sites.
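Roughly what I did, for clarity (the zone name is a placeholder and the exact invocation may differ slightly):

    # on the secondary zone, re-initialise data sync from the master zone
    radosgw-admin data sync init --source-zone=<master-zone>
    # restart the radosgw daemons, then verify:
    radosgw-admin sync status

sync status reports metadata and data as caught up on both sites, yet the "reshard stale-instances list" output is unchanged.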

So is there no reason for those instances to still be kept? How and when do they get cleaned up? Also, just like for the other reporters of this issue, in my case most of the stale instances belong to deleted buckets, but not all of them.


I just hope somebody with a little more insight into the mechanisms at play here joins this conversation.


Regards


Christian








