On Wed, Feb 15, 2017 at 01:38:48PM -0800, Benjamin Serebrin wrote:
> On Wed, Feb 15, 2017 at 11:17 AM, Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
>
> > Right. But userspace knows it's random at least. If kernel supplies
> > affinity e.g. the way your patch does, userspace ATM accepts this as
> > gospel.
>
> The existing code supplies the same affinity gospels in the #vcpu ==
> #queue case today.  And the patch (unless it has a bug in it) should
> not affect the #vcpu == #queue case's behavior.  I don't quite
> understand what property we'd be changing with the patch.
>
> Here's the same dump of smp_affinity_list, on a 16 VCPU machine with
> an unmodified kernel:
>
> 0
> 0
> 1
> 1
> 2
> 2
> [..]
> 15
> 15
>
> And xps_cpus:
>
> 00000001
> 00000002
> [...]
> 00008000
>
> This patch causes the #vcpu != #queue case to follow the same pattern.
>
> Thanks again!
> Ben

The logic is simple really.  With #VCPUs == #queues we can reasonably
assume this box is mostly doing networking, so we can set affinity the
way we like.  With #VCPUs > #queues the VM is clearly doing more stuff,
so we need a userspace policy to take that into account; we don't know
ourselves what the right thing to do is.

Arguably for #VCPUs == #queues we are not always doing the right thing
either, but I see this as an argument for moving more smarts into the
core kernel, not for adding more dumb heuristics in the driver.

-- 
MST
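
[Editor's illustration, not part of the thread: a minimal stand-alone
sketch of the mapping described above, assuming each queue pair's rx/tx
interrupts are pinned to one CPU in order and the XPS mask for tx queue
i is the single-CPU mask 1 << i.  It is plain user-space C that only
prints the expected layout; the interrupt names are hypothetical and
this is not the virtio_net driver code.]

/* Reproduce the affinity pattern from the dumps above for the
 * #queues == #vcpus case (16 each): queue i's rx and tx interrupts
 * both land on CPU i, giving the 0 0 1 1 ... 15 15 smp_affinity_list
 * dump, and XPS maps tx queue i to mask 1 << i, giving
 * 00000001, 00000002, ..., 00008000.
 */
#include <stdio.h>

int main(void)
{
	const unsigned int num_cpus = 16;
	const unsigned int num_queues = 16;	/* #queues == #vcpus case */
	unsigned int q;

	for (q = 0; q < num_queues; q++) {
		unsigned int cpu = q % num_cpus;	/* wraps if #queues > #cpus */

		/* Two interrupts per queue pair (rx and tx), same CPU each. */
		printf("virtio-input.%u  smp_affinity_list: %u\n", q, cpu);
		printf("virtio-output.%u smp_affinity_list: %u\n", q, cpu);

		/* XPS: tx queue q is used only by CPU q. */
		printf("tx-%u xps_cpus: %08x\n", q, 1u << cpu);
	}
	return 0;
}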