[PATCH] hv: fix MSI affinity when device requests all possible CPUs

When an Intel 10G VF (ixgbevf) is passed through to a Hyper-V guest with
SR-IOV, the driver requests affinity with all possible CPUs (0-239), even
though those CPUs are not online (and never will be). Because of this, the
device is unable to correctly set up its MSI interrupts.

This was caused by the change in 4.12 that converted this affinity
into all possible CPUs (0-31), but the host then reports an error
since this is larger than the number of online CPUs.

Previously (up to 4.12-rc1) this worked because only online CPUs
would be put in the mask passed to the host.

This patch applies only to 4.12.
The driver in linux-next needs a different fix because of the changes
to the PCI host protocol version.

Fixes: 433fcf6b7b31 ("PCI: hv: Specify CPU_AFFINITY_ALL for MSI affinity when >= 32 CPUs")
Signed-off-by: Stephen Hemminger <sthemmin@xxxxxxxxxxxxx>
---
 drivers/pci/host/pci-hyperv.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
index 84936383e269..3cadfcca3ae9 100644
--- a/drivers/pci/host/pci-hyperv.c
+++ b/drivers/pci/host/pci-hyperv.c
@@ -900,10 +900,12 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
 	 * processors because Hyper-V only supports 64 in a guest.
 	 */
 	affinity = irq_data_get_affinity_mask(data);
+	cpumask_and(affinity, affinity, cpu_online_mask);
+
 	if (cpumask_weight(affinity) >= 32) {
 		int_pkt->int_desc.cpu_mask = CPU_AFFINITY_ALL;
 	} else {
-		for_each_cpu_and(cpu, affinity, cpu_online_mask) {
+		for_each_cpu(cpu, affinity) {
 			int_pkt->int_desc.cpu_mask |=
 				(1ULL << vmbus_cpu_number_to_vp_number(cpu));
 		}
-- 
2.11.0



