RE: [PATCH v6 7/7] PCI: hv: New paravirtual PCI front-end for Hyper-V VMs

> -----Original Message-----
> From: Jiang Liu [mailto:jiang.liu@xxxxxxxxxxxxxxx]
> Sent: Wednesday, December 2, 2015 7:12 PM
> To: Jake Oshins <jakeo@xxxxxxxxxxxxx>; gregkh@xxxxxxxxxxxxxxxxxxx; KY
> Srinivasan <kys@xxxxxxxxxxxxx>; linux-kernel@xxxxxxxxxxxxxxx;
> devel@xxxxxxxxxxxxxxxxxxxxxx; olaf@xxxxxxxxx; apw@xxxxxxxxxxxxx;
> vkuznets@xxxxxxxxxx; tglx@xxxxxxxxxx; Haiyang Zhang
> <haiyangz@xxxxxxxxxxxxx>; marc.zyngier@xxxxxxx;
> bhelgaas@xxxxxxxxxx; linux-pci@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH v6 7/7] PCI: hv: New paravirtual PCI front-end for Hyper-V VMs
> 
> On 2015/11/3 5:33, jakeo@xxxxxxxxxxxxx wrote:
> > From: Jake Oshins <jakeo@xxxxxxxxxxxxx>
> >
[...]
> > +
> > +/**
> > + * hv_irq_unmask() - "Unmask" the IRQ by setting its current
> > + * affinity.
> > + * @data:	Describes the IRQ
> > + *
> > + * Build a new destination for the MSI and make a hypercall to
> > + * update the Interrupt Redirection Table. "Device Logical ID"
> > + * is built out of this PCI bus's instance GUID and the function
> > + * number of the device.
> > + */
> > +void hv_irq_unmask(struct irq_data *data)
> > +{
> > +	struct msi_desc *msi_desc = irq_data_get_msi_desc(data);
> > +	struct irq_cfg *cfg = irqd_cfg(data);
> > +	struct retarget_msi_interrupt params;
> > +	struct hv_pcibus_device *hbus;
> > +	struct cpumask *dest;
> > +	struct pci_bus *pbus;
> > +	struct pci_dev *pdev;
> > +	int cpu;
> > +
> > +	dest = irq_data_get_affinity_mask(data);
> > +	pdev = msi_desc_to_pci_dev(msi_desc);
> > +	pbus = pdev->bus;
> > +	hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
> > +
> > +	memset(&params, 0, sizeof(params));
> > +	params.partition_id = HV_PARTITION_ID_SELF;
> > +	params.source = 1; /* MSI(-X) */
> > +	params.address = msi_desc->msg.address_lo;
> > +	params.data = msi_desc->msg.data;
> > +	params.device_id = (hbus->hdev->dev_instance.b[5] << 24) |
> > +			   (hbus->hdev->dev_instance.b[4] << 16) |
> > +			   (hbus->hdev->dev_instance.b[7] << 8) |
> > +			   (hbus->hdev->dev_instance.b[6] & 0xf8) |
> > +			   PCI_FUNC(pdev->devfn);
> > +	params.vector = cfg->vector;
> > +
> > +	for_each_cpu_and(cpu, dest, cpu_online_mask)
> > +		params.vp_mask |= (1 << vmbus_cpu_number_to_vp_number(cpu));
> I have no knowledge of the Hyper-V implementation details, but there seems
> to be a chance of a race here between hv_irq_unmask(), hv_set_affinity()
> and cpu_up()/cpu_down() when accessing 'dest' and cpu_online_mask.
> 
 
Thanks.  Is there any architectural contract here?  I tried implementing this by doing this work in the set_affinity() callback, but the vector was often wrong when that callback was invoked.  (It seems to get changed just after set_affinity().)  Can you suggest a durable strategy?

I'll respond to all the other comments you sent (and this one, once I understand the right response) and resend.

Thanks for your review,
Jake Oshins

_______________________________________________
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxx
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel


