On Thu, Mar 22, 2018 at 04:40:08PM -0700, Kyle Bader wrote:
> From a capacity planning perspective, it would be fantastic to be able
> to limit the request volume per bucket. In Amazon S3, they provide
> roughly 300 PUT/LIST/DELETE per second or 800 GET per second. Taking
> those values and translating them into sensible default weights seems
> like a good start. The ability to scale the limits as the bucket is
> sharded would further enhance fidelity with Amazon's behavior. When
> you exceed the number of requests per second in Amazon, you get a 503
> "Slow down" error; we should probably do similar. All these things go
> a long way in protecting the system from being abused as a k/v store,
> so misguided tenants can't sap the seeks from folks who are using the
> system for appropriately sized objects.
>
> https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
> https://docs.aws.amazon.com/AmazonS3/latest/dev/ErrorBestPractices.html

Per bucket AND per user.

On the DreamHost cluster, I have user-level rate-limiting implemented in
HAProxy (via Lua), and this came up during Cephalocon, so I'll be sharing
that implementation. It's just a request rate limit, however, so it doesn't
cover single clients doing large files at high bandwidth.

Most notably, it sounds like FlipKart Internet Pvt has already developed
RGW rate limiting, per their Cephalocon presentation that just ended.
Hopefully they can engage on this thread with that existing implementation.

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robbat2@xxxxxxxxxx
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
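
P.S. To make the proposed defaults concrete, here is a rough token-bucket
sketch of what a per-user, per-bucket limiter with S3-like budgets and a
503 "SlowDown" response could look like. This is purely an illustration in
Python: it is not RGW code, not the FlipKart work, and not my HAProxy/Lua
setup, and every name in it (RequestLimiter, DEFAULT_LIMITS, the shards
knob) is made up for the example.

    # Illustrative only: per-(user, bucket) token buckets with S3-like defaults.
    import time

    # Default per-second budgets, loosely taken from the numbers Kyle cited:
    # ~300 PUT/LIST/DELETE per second and ~800 GET per second per bucket.
    DEFAULT_LIMITS = {"write": 300, "read": 800}

    class TokenBucket:
        """Classic token bucket: refill at `rate` tokens/sec, spend 1 per request."""
        def __init__(self, rate):
            self.rate = float(rate)
            self.tokens = self.rate
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    class RequestLimiter:
        """One token bucket per (user, bucket, read/write) key."""
        def __init__(self, shards=1):
            # Scaling the budget with shard count mirrors the "scale the limits
            # as the bucket is sharded" idea from the quoted mail.
            self.shards = max(1, shards)
            self.buckets = {}

        def check(self, user, bucket, method):
            kind = "read" if method == "GET" else "write"
            key = (user, bucket, kind)
            tb = self.buckets.get(key)
            if tb is None:
                tb = self.buckets[key] = TokenBucket(DEFAULT_LIMITS[kind] * self.shards)
            # Reject rather than queue, like S3's 503 "Slow down".
            return (200, "OK") if tb.allow() else (503, "SlowDown")

    if __name__ == "__main__":
        limiter = RequestLimiter(shards=2)
        print(limiter.check("tenant-a", "photos", "PUT"))  # (200, 'OK') until the budget runs out

Whatever shape the real thing takes, the interesting part is where the
counters live once there are many RGW instances behind the load balancer;
the sketch above sidesteps that entirely.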