Re: [kvm-unit-tests PATCH v8 10/12] pci: Add generic ECAM host support

On 01.11.2016 19:57, Alexander Gordeev wrote:
> On Fri, Oct 21, 2016 at 06:30:21PM +0200, Andrew Jones wrote:
>>> +static bool pci_alloc_resource(u64 *addr, u32 bar, u64 size)
>>> +{
>>> +	struct pci_host_bridge *host = pci_host_bridge;
>>> +	struct pci_addr_space *as = &host->addr_space[0];
>>> +	u32 mask;
>>> +	int i;
>>> +
>>> +	for (i = 0; i < host->nr_addr_spaces; i++) {
>>> +		if (as->type == pci_bar_type(bar))
>>> +			break;
>>> +		as++;
>>> +	}
>>> +	if (i >= host->nr_addr_spaces) {
>>> +		printf("No PCI resource matching a device found\n");
>>> +		return false;
>>
>> So this is the new code with v8, and it needs more work. First,
>> this print message doesn't explain the problem. It should point
>> out which device and which bar failed to have resources allocated
>> and why.
>>
>> For the which, I think it should print the device number and function
>> number in the lspci way %02x.%02x.%1x (bus.dev.fn) followed by the
>> vendor-id and device-id, %04x:%04x. For the bar you'll want to
>> refactor pci_dev_print to pull a pci_bar_print out of it that you
>> can use here.
>>
>> But... what about the why? Is it really necessary to fail this
>> allocation? How does Linux handle it with the same guest config?
> 
> Just to check we are on the same page - by "fail" you mean here
> the failure to assign a resource, not bailing out due to a
> misconfiguration, right?
> 
>> So I went ahead and improved the print message in order to try
>> to understand, and I think I do. The virtio-net-pci device that
>> gets added by default wanted a MEM64/p region. We fail because
>> of the '/p'. I don't know PCI, but shouldn't we be masking off
>> that PCI_BASE_ADDRESS_MEM_PREFETCH bit in this check?
> 
> I guess if we could simply mask off /p, we would not have it in the
> first place. I recall some ugly side effects when the prefetchable
> memory types mismatch, but cannot tell for sure.
> 
> @Thomas, may be you know the answer here?

Well, you cannot map a device with a non-prefetchable BAR into an MMIO
region that is treated as prefetchable from the CPU. According to the
PCI Local Bus specification:

 "Prefetchable memory has the following characteristics:
 *  There are no side effects of a read operation. The read operation
    cannot be destructive to either the data or any other state
    information. For example, a FIFO that advances to the next data
    when read would not be prefetchable. Similarly, a location that
    cleared a status bit when read would not be prefetchable.
 *  When read, the device is required to return all bytes regardless of
    the byte enables (four or eight depending upon the width of the
    data transfer (refer to Section 3.8.1)).
 *  Bridges are permitted to merge writes into this range (refer to
    Section 3.2.6)."

But you can do it the other way round: you can map a BAR that is marked
as prefetchable into an MMIO region that is treated as non-prefetchable
(e.g. non-cacheable) by the CPU.
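The asymmetric rule above could be sketched as a small predicate (the enum and function names are illustrative, not taken from kvm-unit-tests):

```c
/*
 * Illustrative sketch of the matching rule: a BAR fits a host bridge
 * window when the types match exactly, and a prefetchable MEM BAR may
 * additionally fall back to a non-prefetchable MEM window. Names are
 * invented for the example.
 */
#include <stdbool.h>

enum res_type { RES_IO, RES_MEM, RES_MEM_PREFETCH };

static bool window_accepts_bar(enum res_type window, enum res_type bar)
{
	if (window == bar)
		return true;
	/* Prefetchable BAR in a non-prefetchable MEM window: allowed. */
	if (bar == RES_MEM_PREFETCH && window == RES_MEM)
		return true;
	/* Anything else, notably a non-prefetchable BAR in a
	 * prefetchable window, is rejected. */
	return false;
}
```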

Hope that helps,
 Thomas

