Re: [PATCH v2 net-next] virtio: Fix affinity for #VCPUs != #queue pairs

On Tue, Feb 14, 2017 at 1:05 PM, Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
> On Tue, Feb 14, 2017 at 11:17:41AM -0800, Benjamin Serebrin wrote:
>> On Wed, Feb 8, 2017 at 11:37 AM, Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
>>
>> > IIRC irqbalance will bail out and avoid touching affinity
>> > if you set affinity from driver.  Breaking that's not nice.
>> > Pls correct me if I'm wrong.
>>
>>
>> I believe you're right that irqbalance will leave the affinity alone.
>>
>> Irqbalance has had changes that may or may not be in the versions bundled with
>> various guests, and I don't have a definitive cross-correlation of irqbalance
>> version to guest version.  But in the existing code, the driver does
>> set affinity for #VCPUs==#queues, so that's been happening anyway.
>
> Right - the idea being we load all CPUs equally so we don't
> need help from irqbalance - hopefully packets will be spread
> across queues in a balanced way.
>
> When we have less queues the load isn't balanced so we
> definitely need something fancier to take into account
> the overall system load.

For pure network load, assigning each txqueue IRQ exclusively
to one of the cores that generates traffic on that queue is the
optimal layout in terms of load spreading. Irqbalance does
not have the XPS information to make this optimal decision.
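
For context, this is roughly what the existing driver already does when
#VCPUs == #queue pairs: walk the online CPUs, give each queue pair's
interrupt to one CPU, and register a matching XPS mask so transmits from
that CPU pick the same tx queue. The following is a simplified sketch
modeled on virtnet_set_affinity() in virtio_net.c, not taken verbatim
from this patch; CPU hotplug and error handling are omitted:

/*
 * Sketch of the existing 1:1 affinity scheme (modeled on
 * virtnet_set_affinity()); hotplug and cleanup omitted.
 */
static void set_affinity_1to1(struct virtnet_info *vi)
{
	int i = 0;
	int cpu;

	for_each_online_cpu(cpu) {
		/* Pin queue pair i's rx/tx interrupts to this CPU. */
		virtqueue_set_affinity(vi->rq[i].vq, cpu);
		virtqueue_set_affinity(vi->sq[i].vq, cpu);
		/* XPS: transmits from this CPU select tx queue i. */
		netif_set_xps_queue(vi->dev, cpumask_of(cpu), i);
		i++;
	}
}

With that mapping in place, the IRQ for a tx queue always lands on the
CPU that generated the traffic on it, which is the load-spreading
property described above.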

Overall system load affects this calculation both in the case of a 1:1
mapping and in the case of uneven queue distribution. In both cases,
irqbalance is hopefully smart enough to migrate other, non-pinned IRQs
to CPUs with lower overall load.

> But why the first N cpus? That's more or less the same as assigning them
> at random.

CPU selection is an interesting point. Spreading equally across NUMA
nodes would be preferable to taking the first N. Aside from that, the
first N should work best to minimize the chance of hitting multiple
hyperthreads on the same core -- if all architectures lay out
hyperthreads in the same way as x86_64.
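
A rough sketch of such a selection, using a hypothetical helper
(pick_queue_cpus() is not part of this patch) that walks NUMA nodes
round-robin and skips hyperthread siblings of CPUs already chosen;
cpumask allocation and error handling are omitted:

/*
 * Hypothetical helper, not in the patch: choose @count CPUs for queue
 * IRQs, round-robin across NUMA nodes, avoiding sibling threads of
 * CPUs already picked.
 */
static void pick_queue_cpus(struct cpumask *chosen, int count)
{
	int node, cpu, picked = 0;

	cpumask_clear(chosen);
	while (picked < count) {
		bool progress = false;

		for_each_online_node(node) {
			for_each_cpu_and(cpu, cpumask_of_node(node),
					 cpu_online_mask) {
				/* Skip CPUs sharing a core with one we
				 * already picked (the sibling mask
				 * includes the CPU itself). */
				if (cpumask_intersects(topology_sibling_cpumask(cpu),
						       chosen))
					continue;
				cpumask_set_cpu(cpu, chosen);
				picked++;
				progress = true;
				break;
			}
			if (picked == count)
				return;
		}
		if (!progress)
			break;	/* only sibling threads remain */
	}
}

If more queues than physical cores are requested, this sketch simply
stops once only sibling threads remain; a real implementation would
fall back to filling in siblings.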


