On Mon, Jun 29, 2015 at 10:45:03AM +0100, Will Deacon wrote:
> On Mon, Jun 29, 2015 at 08:45:44AM +0100, Andreas Herrmann wrote:
> >
> > With the current code, the number of threads added to the thread_pool
> > equals the number of online CPUs. Thus on an OcteonIII cn78xx system
> > we usually have 48 threads per guest just for the thread_pool. IMHO
> > this is overkill for guests that have only a few vCPUs and/or for a
> > guest that is pinned to a subset of host CPUs. E.g.
> >
> > # numactl -C 4,5,7,8 ./lkvm run -c 2 -m 256 -k paravirt -d rootfs.ext3 ...
> > # ps -La | grep threadpool-work | wc -l
> > 48
> >
> > Don't change the default behaviour (for the sake of compatibility),
> > but introduce a new parameter ("-t" or "--threads") that allows the
> > number of threads created for the thread_pool to be specified:
> >
> > # numactl -C 4,5,7,8 ./lkvm run -c 2 -m 256 --threads 4 -k paravirt -d ...
> > # ps -La | grep threadpool-work | wc -l
> > 4
>
> We should probably bound this at some minimum value. I assume things go
> pear-shaped if you pass --threads 1 (or 0, or -1)?

Ouch, yes, the range must be checked (especially for -1). I think the
passed value should be in [1, number of online CPUs].


Andreas
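
For illustration, a minimal, self-contained sketch of such a check,
bounding the value to [1, number of online CPUs]. This is not the
actual kvmtool code; sanitize_thread_count() and the clamp-with-warning
policy are assumptions made for the example:

/*
 * Hypothetical sketch only -- not the kvmtool implementation.
 * Clamps a user-supplied --threads value to [1, nr of online CPUs].
 */
#include <stdio.h>
#include <unistd.h>

static int sanitize_thread_count(int threads)
{
	/* Number of CPUs currently online on the host. */
	long nr_online = sysconf(_SC_NPROCESSORS_ONLN);

	if (threads < 1) {
		fprintf(stderr, "Invalid thread count %d, using 1\n",
			threads);
		return 1;
	}
	if (threads > nr_online) {
		fprintf(stderr, "Capping thread count %d to %ld online CPUs\n",
			threads, nr_online);
		return (int)nr_online;
	}
	return threads;
}

int main(void)
{
	/* Values a user might pass via --threads: */
	printf("%d\n", sanitize_thread_count(-1)); /* -> 1 */
	printf("%d\n", sanitize_thread_count(4));  /* -> 4, if <= online CPUs */
	return 0;
}

Whether an out-of-range value should be clamped with a warning, as
above, or rejected outright through kvmtool's usual error path is a
separate design choice.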