RE: [PATCH] hv: fix msi affinity when device requests all possible CPUs


This patch is still needed for 4.12.

-----Original Message-----
From: Jork Loeser 
Sent: Thursday, June 29, 2017 3:08 PM
To: stephen@xxxxxxxxxxxxxxxxxx; KY Srinivasan <kys@xxxxxxxxxxxxx>; bhelgaas@xxxxxxxxxx
Cc: linux-pci@xxxxxxxxxxxxxxx; devel@xxxxxxxxxxxxxxxxxxxxxx; Stephen Hemminger <sthemmin@xxxxxxxxxxxxx>
Subject: RE: [PATCH] hv: fix msi affinity when device requests all possible CPUs

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@xxxxxxxxxxxxxxxxxx]
> Sent: Wednesday, June 28, 2017 4:22 PM
> To: KY Srinivasan <kys@xxxxxxxxxxxxx>; bhelgaas@xxxxxxxxxx
> Cc: linux-pci@xxxxxxxxxxxxxxx; devel@xxxxxxxxxxxxxxxxxxxxxx; Stephen
> Hemminger <sthemmin@xxxxxxxxxxxxx>
> Subject: [PATCH] hv: fix msi affinity when device requests all possible CPUs
> 
> When an Intel 10G NIC (ixgbevf) is passed through to a Hyper-V guest with
> SR-IOV, the driver requests affinity with all possible CPUs (0-239), even
> though those CPUs are not online (and never will be). Because of this the
> device is unable to correctly set up MSI interrupts.
> 
> This was caused by the change in 4.12 that converted this affinity into all
> possible CPUs (0-31), but the host then reports an error since this is larger
> than the number of online CPUs.
> 
> Previously (up to 4.12-rc1) this worked because only online CPUs would be put
> in the mask passed to the host.
> 
> This patch applies only to 4.12.
> The driver in linux-next needs a different fix because of the changes to the
> PCI host protocol version.

The vPCI patch in linux-next already has this issue fixed.

Regards,
Jork



