Re: [PATCH V10 6/8] PCI/P2PDMA: Add a 10-Bit Tag check in P2PDMA

On 2021-10-27 7:39 p.m., Bjorn Helgaas wrote:
> On Wed, Oct 27, 2021 at 05:41:07PM -0600, Logan Gunthorpe wrote:
>> On 2021-10-27 5:11 p.m., Bjorn Helgaas wrote:
>>>> @@ -532,6 +577,9 @@ calc_map_type_and_dist(struct pci_dev *provider, struct pci_dev *client,
>>>>  		map_type = PCI_P2PDMA_MAP_NOT_SUPPORTED;
>>>>  	}
>>>>  done:
>>>> +	if (pci_10bit_tags_unsupported(client, provider, verbose))
>>>> +		map_type = PCI_P2PDMA_MAP_NOT_SUPPORTED;
>>>
>>> I need to be convinced that this check is in the right spot to catch
>>> all potential P2PDMA situations.  The pci_p2pmem_find() and
>>> pci_p2pdma_distance() interfaces eventually call
>>> calc_map_type_and_dist().  But those interfaces don't actually produce
>>> DMA bus addresses, and I'm not convinced that all P2PDMA users use
>>> them.
>>>
>>> nvme *does* use them, but infiniband (rdma_rw_map_sg()) does not, and
>>> it calls pci_p2pdma_map_sg().
>>
>> The rule in the current code is that calc_map_type_and_dist() must be
>> called before pci_p2pdma_map_sg(). The calc function caches the mapping
>> type in an xarray. If it was not called ahead of time,
>> pci_p2pdma_map_type() will return PCI_P2PDMA_MAP_NOT_SUPPORTED, and the
>> WARN_ON_ONCE in pci_p2pdma_map_sg_attrs() will be hit.
> 
> Seems like it requires fairly deep analysis to prove all this.  Is
> this something we don't want to put directly in the map path because
> it's a hot path, or it just doesn't fit there in the model, or ...?

Yes, that's pretty much what my next patch set does. It just took a
while to get there (adding the xarray, etc).
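
For reference, the cached-lookup behaviour described above is roughly the
following (an illustrative sketch only; the helper and field names here
are approximate, not the literal p2pdma.c code):

static enum pci_p2pdma_map_type
p2pdma_cached_map_type(struct pci_p2pdma *p2pdma, struct pci_dev *client)
{
	void *entry;

	/* No p2pdma state at all means calc_map_type_and_dist() never ran */
	if (!p2pdma)
		return PCI_P2PDMA_MAP_NOT_SUPPORTED;

	/*
	 * calc_map_type_and_dist() stores the computed mapping type in an
	 * xarray keyed by the client device; a missing entry is treated
	 * the same as "not supported".
	 */
	entry = xa_load(&p2pdma->map_types, map_types_idx(client));
	if (!entry)
		return PCI_P2PDMA_MAP_NOT_SUPPORTED;

	return xa_to_value(entry);
}

pci_p2pdma_map_sg_attrs() treats NOT_SUPPORTED as a mapping failure and
trips the WARN_ON_ONCE mentioned above, so a caller that skips the calc
step can't silently get bus addresses.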

>> Both NVMe and RDMA (where P2PDMA is only used by the nvme fabrics code)
>> do the correct thing here, and we can be sure calc_map_type_and_dist()
>> is called before any pages are mapped.
>>
>> The patch set I'm currently working on will ensure that
>> calc_map_type_and_dist() is called before anyone maps a PCI P2PDMA page
>> with dma_map_sg*().
>>
>>> amdgpu_dma_buf_attach() calls pci_p2pdma_distance_many() but I don't
>>> know where it sets up P2PDMA transactions.
>>
>> The amdgpu driver hacked this in before proper support was done, but at
>> least it calls pci_p2pdma_distance_many(), presumably before attempting
>> any transfer. It's likely broken, though, as it doesn't take the mapping
>> type into account; I think it always assumes traffic goes through the
>> host bridge (seeing as it doesn't use pci_p2pdma_map_sg()).
> 
> What does it mean to go through the host bridge?  Obviously DMA to
> system memory would go through the host bridge, but this seems
> different.  Is this a "between PCI hierarchies" case like to a device
> below a different root port?  I don't know what the tag rules are for
> that.

It means both devices are connected to the host bridge without an
intervening switch, so TLPs are routed through the root complex and are
thus subject to the IOMMU. I also don't know how the tag rules apply in
that case, but the check in this patch ensures that no two devices with
mismatched tag sizes will ever use p2pdma in any case.
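
Roughly, the shape of that check is the following (an illustrative
sketch only; the masks are spelled out locally from my reading of the
spec, and the actual patch does this through its
pci_10bit_tags_unsupported() helper):

/* Sketch only; bit positions from my reading of PCIe r4.0:
 * DEVCAP2 bit 16 = 10-Bit Tag Completer Supported,
 * DEVCTL2 bit 12 = 10-Bit Tag Requester Enable.
 */
#define SKETCH_DEVCAP2_10BIT_TAG_COMP	0x00010000
#define SKETCH_DEVCTL2_10BIT_TAG_REQ_EN	0x00001000

static bool p2p_10bit_tag_mismatch(struct pci_dev *req, struct pci_dev *comp)
{
	u32 cap2;
	u16 ctl2;

	pcie_capability_read_word(req, PCI_EXP_DEVCTL2, &ctl2);
	pcie_capability_read_dword(comp, PCI_EXP_DEVCAP2, &cap2);

	/* The requester emits 10-bit tags but the peer can't complete them */
	return (ctl2 & SKETCH_DEVCTL2_10BIT_TAG_REQ_EN) &&
	       !(cap2 & SKETCH_DEVCAP2_10BIT_TAG_COMP);
}

calc_map_type_and_dist() can then refuse the pairing if the mismatch
exists in either direction (client or provider acting as the requester),
which is what the hunk quoted at the top does by setting
PCI_P2PDMA_MAP_NOT_SUPPORTED.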

Logan


