Re: large bucket index in multisite environment (how to deal with large omap objects warning)?

If sharding is not an option at all, you can increase the
osd_deep_scrub_large_omap_object_key_threshold, but that is not the best
idea. I would still go with resharding, even though it might mean taking at
least the slave sites offline. Going forward, you can set a higher number of
shards at creation time for buckets that will store a large number of
objects.
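For reference, a minimal sketch of both knobs (the numbers below are just
examples; adjust them to your environment and double-check on your release):

# raise the large-omap warning threshold (default 200000 keys on recent
# releases) -- a workaround, not a fix
ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 500000

# default shard count for newly created buckets
ceph config set client.rgw rgw_override_bucket_index_max_shards 31

# in multisite, set bucket_index_max_shards in the zonegroup instead and
# commit the period
radosgw-admin zonegroup get > zonegroup.json
# ... edit "bucket_index_max_shards" in zonegroup.json ...
radosgw-admin zonegroup set < zonegroup.json
radosgw-admin period update --commit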

On Thu, 4 Nov 2021 at 19:21, Teoman Onay <tonay@xxxxxxxxxx> wrote:

> AFAIK dynamic resharding is not supported for multisite setups, but you can
> reshard manually.
> Note that this is a very expensive process which requires you to (see the
> sketch after this list):
>
> - Disable sync for the bucket you want to reshard.
> - Stop all the RGWs (no more access to your Ceph cluster).
> - On a node of the master zone, reshard the bucket.
> - On the secondary zone, purge the bucket.
> - Restart the RGW(s).
> - Re-enable sync for the bucket.
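>
> Roughly, with the usual radosgw-admin tooling (a sketch from memory; BUCKET
> and the shard count are placeholders, so double-check the flags on your
> release):
>
> radosgw-admin bucket sync disable --bucket=BUCKET
> # ... stop every radosgw daemon in all zones ...
>
> # on a node in the master zone:
> radosgw-admin bucket reshard --bucket=BUCKET --num-shards=101
>
> # on the secondary zone:
> radosgw-admin bucket rm --bucket=BUCKET --purge-objects
>
> # restart the RGWs, then:
> radosgw-admin bucket sync enable --bucket=BUCKET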
>
> 4M objects/bucket is way too much...
>
> Regards
>
> Teoman
>
> On Thu, Nov 4, 2021 at 5:57 PM Boris Behrens <bb@xxxxxxxxx> wrote:
>
> > Hi everybody,
> >
> > we maintain three Ceph clusters (2x Octopus, 1x Nautilus) that use three
> > zonegroups to sync metadata, without syncing the actual data (only one
> > zone per zonegroup).
> >
> > One customer has buckets with >4M objects in our largest cluster (the
> > other two clusters are very fresh, with close to zero data in them).
> >
> > How do I handle that with regard to the "Large OMAP objects" warning?
> > - Sharding is not an option, because it is a multisite environment (at
> > least that's what I read everywhere).
> > - Limiting the customer is not a great option, because they already have
> > that huge number of objects in their buckets.
> > - Disabling the warning / increasing the threshold is IMHO a bad option
> > (people probably put some thought into that limit, and being at 40x the
> > limit is far beyond the "just roll with it" threshold).
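> >
> > For context, the objects behind the warning show up in "ceph health
> > detail", and the per-shard index fill can be checked with something like
> > the following (a sketch; double-check the flags on your release):
> >
> > ceph health detail | grep -i omap
> > radosgw-admin bucket limit check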
> >
> > I really hope that someone has an answer, or that there is a roadmap
> > item which addresses this issue.
> >
> > Cheers
> >  Boris
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



