Re: rgw quota cache

I agree with you. This caused me 5 hours of downtime, and I don't
think this level of quota checking is needed anymore in high-scale
clusters.
Thanks for your help.

On Mon, Jul 6, 2020 at 6:56 PM Casey Bodley <cbodley@xxxxxxxxxx> wrote:
>
> It looks like these messages are related to the config variable
> rgw_bucket_quota_soft_threshold, which defaults to 0.95. I dug through
> the git history and found this was added in a 2013 commit
> https://github.com/ceph/ceph/commit/14eabd4aa7b8a2e2c0c43fe7f877ed2171277526.
>
> I guess the reasoning there is that, once a bucket is close to hitting
> its quota, we want our quota checks to be 'exact' instead of using the
> cache. But these quota checks can be extremely expensive for sharded
> buckets, and the checks aren't atomic with the writes anyway. The
> change long predates dynamic resharding, and I don't think it's
> reasonable anymore. I'd support reverting that commit entirely. What
> does everyone else think?
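>
> For anyone who hasn't read the code, here's a rough sketch of what
> that check amounts to (simplified and illustrative only, not the
> actual Ceph implementation):
>
>   #include <cstdint>
>
>   struct BucketStats {
>     uint64_t size_bytes;    // bytes currently accounted to the bucket
>   };
>
>   struct QuotaInfo {
>     int64_t max_size_bytes; // quota limit; < 0 means no size quota
>   };
>
>   // rgw_bucket_quota_soft_threshold defaults to 0.95
>   constexpr double soft_threshold = 0.95;
>
>   // Returns true if the cached stats are good enough for a quota
>   // decision. Returning false forces a synchronous read of the
>   // bucket index, which for a sharded bucket means touching every
>   // shard; hence the high iops on the index pool.
>   bool can_use_cached_stats(const BucketStats& cached,
>                             const QuotaInfo& quota) {
>     if (quota.max_size_bytes < 0)
>       return true; // no size quota configured, the cache is fine
>     const auto threshold =
>         static_cast<uint64_t>(quota.max_size_bytes * soft_threshold);
>     // Once usage crosses 95% of the quota, every check bypasses
>     // the cache; that's the "exceeded soft threshold" message.
>     return cached.size_bytes < threshold;
>   }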
>
> On Mon, Jul 6, 2020 at 6:44 AM Seena Fallah <seenafallah@xxxxxxxxx> wrote:
> >
> > Hi all.
> >
> > I'm seeing this log message on my rgw instances, and it seems to be
> > the reason for the very high iops on my buckets.index pool.
> >
> > 2020-07-04 18:15:08.472 7f15b37fa700 20 quota: can't use cached stats,
> > exceeded soft threshold (size): 515396075520 >= 489626271744
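> >
> > (If I'm reading the numbers right, 489626271744 is exactly 0.95 *
> > 515396075520 (480 GiB), so the cached bucket size has crossed 95%
> > of the quota and every quota check is now bypassing the cache and
> > going to the index.)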
> >
> > Can someone help me with this?
> >
> > Thanks.
>
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


