Hi,

On Thu, Mar 22, 2018 at 5:17 PM, Yehuda Sadeh-Weinraub <ysadehwe@xxxxxxxxxx> wrote:
> On Thu, Mar 22, 2018 at 12:09 PM, Casey Bodley <cbodley@xxxxxxxxxx> wrote:
>
> That does sound like the simpler solution, and should be a good-enough
> starting point. What if we could integrate it at a much lower layer,
> e.g., into librados?

I'm not clear what the change in layering would do to address the
objection Eric and Casey raised to costing chunks, but I do know I
don't want to lose the articulation at the request/RGWOp level. Does
the coming hookup with the OSD client's dmclock support in librados
cover the same ground at the level of rados ops?

>> New virtual functions in class RGWOp seem like a good way for the
>> derived ops to return their request class and cost. Once we know
>> those, we can add ourselves to the mclock priority queue and do an
>> async wait until it's our turn to run.

That sounds attractive.

>> The priority queue can use perf counters for introspection, and a
>> config observer to apply changes to the per-client mclock options.
>>
>> As future work, we could add some load balancer integration to:
>> - enable custom scripts that look at incoming requests and assign
>>   their own request class/cost
>> - track distributed client stats across gateways, and feed that info
>>   back into radosgw with each request (this is the d in dmclock)

There seems to be community interest in this idea.

>> Thanks,
>> Casey

Matt

--
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel. 734-821-5101
fax. 734-769-8938
cel. 734-216-5309

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html