On Mon, 12 Oct 2015, Deneau, Tom wrote:
> Looking at the perf counters on my osds, I see wait counts for the
> following throttle-related perf counters. (This is from trying to
> benchmark using multiple rados bench client processes.)
>
> throttle-filestore_bytes

OPTION(filestore_queue_max_ops, OPT_INT, 50)
OPTION(filestore_queue_max_bytes, OPT_INT, 100 << 20)

> throttle-msgr_dispatch_throttler-client

OPTION(ms_dispatch_throttle_bytes, OPT_U64, 100 << 20)

> throttle-osd_client_bytes
> throttle-osd_client_messages

OPTION(osd_client_message_size_cap, OPT_U64, 500*1024L*1024L)  // client data allowed in-memory (in bytes)
OPTION(osd_client_message_cap, OPT_U64, 100)                   // num client messages allowed in-memory

> What are the config variables that would allow me to experiment with
> these throttle limits?
> (When I look at the output from --admin-daemon osd.xx.asok config show,
> it's not clear which items these correspond to.)

These are all involved in slowing down clients to the rate of the
storage...

sage
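For reference, the OPTION() names above are the config keys themselves; they show up in the `config show` output with underscores. A minimal ceph.conf sketch for experimenting with these throttles might look like the following (the values here are illustrative only, not tuning recommendations):

```
[osd]
# filestore queue throttles (throttle-filestore_bytes)
filestore_queue_max_ops = 500
filestore_queue_max_bytes = 1073741824

# messenger dispatch throttle (throttle-msgr_dispatch_throttler-client)
ms_dispatch_throttle_bytes = 1073741824

# client message throttles (throttle-osd_client_bytes / throttle-osd_client_messages)
osd_client_message_size_cap = 1073741824
osd_client_message_cap = 1000
```

To locate the current values on a running osd, grep the admin socket output, e.g.
`ceph daemon osd.0 config show | grep -E 'filestore_queue|dispatch_throttle|client_message'`.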