One more point - from a business perspective, this flag is per-pool and not global, so the API should be per-pool rather than global as well.
Regards,
Josh
On Sat, Jun 17, 2023 at 4:22 PM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
On Fri, Jun 16, 2023 at 5:13 PM Casey Bodley <cbodley@xxxxxxxxxx> wrote:
>
> On Fri, Jun 16, 2023 at 3:21 AM Jiffin Thottan <jthottan@xxxxxxxxxx> wrote:
> >
> > Hi,
> >
> > > I am planning to support a read-localise feature for RGW servers, similar to what RBD volumes support. From code reading, it looks like we need to pass "librados::OPERATION_LOCALIZE_READS" before sending the request to RADOS. I have created a tracker issue for this feature: https://tracker.ceph.com/issues/61701. It will be a per-server config option. I am not aware of any other technical hurdles in implementing this feature. Please share your thoughts.
> >
> > Thanks and regards,
> > Jiffin
>
> thanks Jiffin,
>
> that sounds simple enough, though rgw issues librados reads in many
> different places that would need to manage this flag
>
> i'm curious why this is a per-op flag, rather than a global librados
> setting. is there a reason that rgw would only want *some* of its
> reads localized? are there any implications for read-after-write
> consistency that we need to worry about here?
Hi Casey,
I'm not aware of any read-after-write or other consistency
implications. Read-from-replica is safe for general use since Octopus
(OSD-side issues were fixed in [1]).
Ultimately all flags are per-op. The reason that there is no global
librados setting for BALANCE_READS and LOCALIZE_READS seems to be
a combination of a) the fact that these flags could only be used in
special circumstances in the past and b) the fact that librados doesn't
have a generic API for applying a given flag globally like it does for
applying a given snapshot ID/context to all reads/writes.
[1] https://github.com/ceph/ceph/pull/32381
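For illustration, a rough (untested) sketch of what this looks like through
the C++ librados API today - the helper function below is made up, but the
flags argument of aio_operate() is how a per-op flag such as
OPERATION_LOCALIZE_READS gets applied, so every read site would need
something along these lines:

  // Rough sketch, not RGW code: every read has to carry the flag itself,
  // passed via the flags argument of aio_operate().
  #include <rados/librados.hpp>
  #include <string>

  int read_localized(librados::IoCtx& ioctx, const std::string& oid,
                     librados::bufferlist* out)
  {
    librados::ObjectReadOperation op;
    int read_rval = 0;
    op.read(0, 4096, out, &read_rval);   // read the first 4 KiB of the object

    librados::AioCompletion* c = librados::Rados::aio_create_completion();
    int r = ioctx.aio_operate(oid, c, &op,
                              librados::OPERATION_LOCALIZE_READS, out);
    if (r < 0) {
      c->release();
      return r;
    }
    c->wait_for_complete();
    r = c->get_return_value();
    c->release();
    return r;
  }
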
Thanks,
Ilya
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx