Re: rgw: strong consistency for (bucket) policy settings?

On Sat, Sep 23, 2023 at 5:05 AM Matthias Ferdinand <mf+ml.ceph@xxxxxxxxx> wrote:
>
> On Fri, Sep 22, 2023 at 06:09:57PM -0400, Casey Bodley wrote:
> > each radosgw does maintain its own cache for certain metadata like
> > users and buckets. when one radosgw writes to a metadata object, it
> > broadcasts a notification (using rados watch/notify) to other radosgws
> > to update/invalidate their caches. the initiating radosgw waits for
> > all watch/notify responses before responding to the client. this way a
> > given client sees read-after-write consistency even if they read from
> > a different radosgw
>
>
> Very nice indeed. Does it completely eliminate any time window of
> incoherent behaviour among rgw daemons (one rgw still applying the old
> policy to requests while another rgw already applies the new one), or
> does it just make that window very short?

this model only guarantees a strict ordering for the client that
writes. before we respond to the write request, there's a window where
other racing clients may see either the old or the new bucket metadata
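
for anyone who wants to picture the flow, here's a minimal, purely
illustrative sketch in Python of the invalidation pattern described
above. it is not RGW source code: Gateway, MetadataStore and on_notify
are made-up stand-ins for a radosgw instance, the RADOS metadata
object, and the watch/notify callback.

    # conceptual sketch only -- simulates the cache-invalidation protocol,
    # not the real radosgw implementation
    import threading

    class MetadataStore:
        """Stands in for the RADOS metadata object (strongly consistent)."""
        def __init__(self):
            self._lock = threading.Lock()
            self._data = {}

        def read(self, key):
            with self._lock:
                return self._data.get(key)

        def write(self, key, value):
            with self._lock:
                self._data[key] = value

    class Gateway:
        """Stands in for one radosgw instance with its local metadata cache."""
        def __init__(self, name, store, peers):
            self.name = name
            self.store = store
            self.peers = peers      # all gateways, including self
            self.cache = {}

        def get_policy(self, bucket):
            # serve from cache if present; otherwise read through to the store
            if bucket not in self.cache:
                self.cache[bucket] = self.store.read(bucket)
            return self.cache[bucket]

        def set_policy(self, bucket, policy):
            # 1. persist the new policy in the strongly consistent store
            self.store.write(bucket, policy)
            # 2. broadcast an invalidation (the watch/notify analogue) and
            #    wait for every peer's ack before replying to the client.
            #    until this loop finishes, a racing reader on another
            #    gateway may still be served the old cached policy -- that
            #    is the window mentioned above.
            acks = [peer.on_notify(bucket) for peer in self.peers]
            assert all(acks)
            # 3. only now does the writing client get its response, so that
            #    client sees read-after-write on any gateway afterwards
            return "200 OK"

        def on_notify(self, bucket):
            # drop the cached entry; the next read refetches from the store
            self.cache.pop(bucket, None)
            return True             # ack the notification

    if __name__ == "__main__":
        store = MetadataStore()
        gateways = []               # shared peers list, filled in below
        rgw_a = Gateway("rgw-a", store, gateways)
        rgw_b = Gateway("rgw-b", store, gateways)
        gateways.extend([rgw_a, rgw_b])

        rgw_a.set_policy("mybucket", "allow-alice")
        # a read through the *other* gateway already returns the new policy,
        # because the writer waited for all invalidation acks
        print(rgw_b.get_policy("mybucket"))     # -> allow-alice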

>
> thanks
> Matthias
>
> >
> > On Fri, Sep 22, 2023 at 5:53 PM Matthias Ferdinand <mf+ml.ceph@xxxxxxxxx> wrote:
> > >
> > > On Tue, Sep 12, 2023 at 07:13:13PM +0200, Matthias Ferdinand wrote:
> > > > On Mon, Sep 11, 2023 at 02:37:59PM -0400, Matt Benjamin wrote:
> > > > > Yes, it's also strongly consistent.  It's also last writer wins, though, so
> > > > > two clients somehow permitted to contend for updating policy could
> > > > > overwrite each other's changes, just as with objects.
> > >
> > > this would be a tremendous administrative bonus, but could also be a
> > > caching/performance problem.
> > >
> > > Amazon explicitly says they have eventual consistency for caching
> > > reasons:
> > > https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_general.html#troubleshoot_general_eventual-consistency
> > >
> > > For Dell ECS I can't seem to find it mentioned in their docs, but they
> > > too are eventually consistent.
> > >
> > > I guess the bucket policies in Ceph get written to special rados
> > > objects (strongly consistent by design), but how are rgw daemons
> > > notified about these updates for immediate effect? Or do rgw daemons
> > > re-read the bucket policy for each and every request to this bucket?
> > >
> > > thanks in advance
> > > Matthias
> > >
> > > > > On Mon, Sep 11, 2023 at 2:21 PM Matthias Ferdinand <mf+ml.ceph@xxxxxxxxx>
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > while I don't currently use rgw, I still am curious about consistency
> > > > > > guarantees.
> > > > > >
> > > > > > Usually, S3 has strong read-after-write consistency guarantees (for
> > > > > > requests that do not overlap). According to
> > > > > >     https://docs.ceph.com/en/latest/dev/radosgw/bucket_index/
> > > > > > in Ceph this is also true for per-object ACLs.
> > > > > >
> > > > > > Is there also a strong consistency guarantee for (bucket) policies? The
> > > > > > documentation at
> > > > > >     https://docs.ceph.com/en/latest/radosgw/bucketpolicy/
> > > > > > apparently does not say anything about this.
> > > > > >
> > > > > > How would multiple rgw instances synchronize a policy change? Is this
> > > > > > effective immediately with strong consistency, or is there some
> > > > > > propagation delay (hopefully one with some upper bound)?
> > > > > >
> > > > > >
> > > > > > Best regards
> > > > > > Matthias
> > > > > >
> > > > > >
> > > > >
> > > > > --
> > > > >
> > > > > Matt Benjamin
> > > > > Red Hat, Inc.
> > > > > 315 West Huron Street, Suite 140A
> > > > > Ann Arbor, Michigan 48103
> > > > >
> > > > > http://www.redhat.com/en/technologies/storage
> > > > >
> > > > > tel.  734-821-5101
> > > > > fax.  734-769-8938
> > > > > cel.  734-216-5309
> >
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



