On Fri, Aug 14, 2015 at 12:14 PM, Goswin von Brederlow <goswin-v-b@xxxxxx> wrote:
> On Mon, May 18, 2015 at 05:13:36PM +0200, Miklos Szeredi wrote:
>> This part splits out an "input queue" and a "processing queue" from the
>> monolithic "fuse connection", each of them having its own spinlock.
>>
>> The end of the patchset adds the ability to "clone" a fuse connection.
>> This means that instead of having to read/write requests/answers on a
>> single fuse device fd, the fuse daemon can have multiple distinct file
>> descriptors open. Each of those can be used to receive requests and send
>> answers; currently the only constraint is that a request must be answered
>> on the same fd as it was read from.
>>
>> This can be extended further to allow binding a device clone to a
>> specific CPU or NUMA node.
>
> How will requests be distributed across clones?
>
> Is the idea here to start one clone per core and have IO requests
> originating from one core be processed by the fuse clone on the same
> core? I remember there was a noticeable speedup when request and
> processing were on the same core.
>
> How is the clone for each request chosen? What if there is no clone
> pinned to the same core? Will it pick the clone nearest in NUMA terms?
> Will it round-robin? Will it load balance to the clone with the fewest
> pending requests? What if one clone stops processing requests?

Good questions.

I guess the first implementation should be the simplest possible, e.g.
use the queue that matches (in this order; a rough sketch follows below):

 - CPU
 - NUMA node
 - any (round robin or whatever)

I wouldn't worry about load balancing and unresponsive queues until such
issues come up in real life.

Thanks,
Miklos
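
A minimal kernel-style C sketch of that selection order, for illustration
only: the struct and function names below (fuse_clone_queue,
fuse_pick_queue) are hypothetical, not the data structures from the
actual patchset.

#include <linux/smp.h>
#include <linux/topology.h>
#include <linux/atomic.h>

/* Hypothetical per-clone queue; stands in for the real per-device state. */
struct fuse_clone_queue {
	int cpu;	/* CPU this clone is bound to, or -1 if unbound */
	int node;	/* NUMA node of that CPU, or -1 */
	/* ... per-queue spinlock and request lists would live here ... */
};

/*
 * Pick a queue for a request in the order described above:
 * 1. a clone bound to the current CPU,
 * 2. else a clone on the same NUMA node,
 * 3. else any clone, round robin.
 */
static struct fuse_clone_queue *
fuse_pick_queue(struct fuse_clone_queue *queues, unsigned int nr_queues,
		atomic_t *rr)
{
	int cpu = raw_smp_processor_id();
	int node = cpu_to_node(cpu);
	struct fuse_clone_queue *node_match = NULL;
	unsigned int i;

	for (i = 0; i < nr_queues; i++) {
		if (queues[i].cpu == cpu)
			return &queues[i];		/* exact CPU match */
		if (!node_match && queues[i].node == node)
			node_match = &queues[i];	/* remember NUMA match */
	}
	if (node_match)
		return node_match;

	/* No CPU or NUMA match: fall back to round robin over all clones. */
	return &queues[(unsigned int)atomic_inc_return(rr) % nr_queues];
}

Remembering the first NUMA match while scanning for a CPU match keeps the
common case to a single pass over the queue array, which fits the
"simplest possible" approach suggested above.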