On Tue, Feb 28, 2017 at 10:02:53AM +0100, Andrew Jones wrote:
> On Tue, Feb 28, 2017 at 03:02:25PM +0800, Peter Xu wrote:
> > On Mon, Feb 27, 2017 at 03:12:34PM +0100, Alexander Gordeev wrote:
> > > Array pci_dev::resource[] is ambiguous wrt what is actually
> > > stored in its elements and how to interpret the index into the
> > > array.
> > >
> > > It is simple in case a device has only 32-bit BARs - an element
> > > pci_dev::resource[bar_num] contains the decoded address of
> > > BAR # bar_num.
> > >
> > > But what if a device has a 64-bit BAR starting at bar_num?
> > >
> > > Currently pci_dev::resource[bar_num] contains the decoded address
> > > of the BAR, while pci_dev::resource[bar_num + 1] contains 0. That
> > > makes the meaning of the (bar_num + 1) index difficult to
> > > understand.
> > >
> > > On the one hand it is the index of the high 32-bit part of the
> > > 64-bit address in the device itself. But why then does the
> > > element contain 0, and not the high part of the address or
> > > INVALID_PHYS_ADDRESS, for example?
> > >
> > > By placing the same 64-bit address in both the bar_num and
> > > (bar_num + 1) elements, the ambiguity is less striking, since:
> > > - the meaning of bar_num is kept consistent with the rest of the
> > >   functions (where it refers to a 32-bit BAR in terms of the
> > >   device's configuration address space);
> > > - pci_dev::resource[bar_num + 1] contains a valid address rather
> > >   than the vague value of 0;
> > > - both the bar_num and (bar_num + 1) indexes refer to the same
> > >   64-bit BAR and therefore return the same address.
> > > The notion of low and high parts of a 64-bit address is ignored,
> > > but that is fine, since pci_dev::resource[] contains only full
> > > addresses.
> >
> > IIUC, a general PCI device driver should know which BARs are used
> > by its specific device, the type of each BAR (whether it is
> > 64-bit), what it is for, etc. The driver should then simply avoid
> > touching the other BAR (in our case, bar_num + 1).
> > So here I don't quite catch why we need to explicitly have
> > res[bar_num + 1] contain the same content as res[bar_num]. Do we
> > really have a use case?
>
> kvm-unit-test drivers can be unit tests. I prefer that common code
> provide asserts allowing unit test writers to quickly determine when
> their test has been incorrectly written, rather than forcing them to
> read the common code in order to debug the issue. Currently it's not
> possible to ensure a driver (unit test) does not attempt to use
> invalid bars or 64bit bars in invalid ways. This series prepares for
> some asserts to get sprinkled into the common code and/or offer a
> better "is valid" API.

Yeah, I agree that for unit tests we should just crash the test when
something wrong happens. But would this patch help that purpose? Both
res[num] and res[num + 1] are written with the same address, so how
would the test writer know that "fetching the address from num + 1 is
actually wrong"?

> > And, it looks like this patch has just overwritten the previous
> > patch (patch 2). IMHO I would slightly prefer that one, since
> > INVALID_PHYS_ADDR at least showed that we fetched something wrong.
>
> Not really. The INVALID_PHYS_ADDR is still assigned to unused bars
> in the else {} below.

Ah yes, sorry I didn't notice that. Then I'd prefer touching up the
subject of patch 2 from:

  "pci: Do not use 0 for unimplemented BARs in pci_dev::resource[]"

into:

  "pci: Do not use 0 for high dword of 64bit BARs"

(or something similar), since before this patch we just skipped
unimplemented BARs, while this patch sets them up with
INVALID_PHYS_ADDR.

Thanks,

-- peterx
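
[Editor's note: for readers following the thread, the resource[]
layout under discussion can be sketched as below. This is an
illustrative, self-contained sketch only, not the actual kvm-unit-tests
code; the struct layout, the helper name pci_dev_decode_resources, and
the constants are all assumed for illustration. It shows the scheme
the patch proposes: a 64-bit BAR's full decoded address is stored at
both bar_num and bar_num + 1, while unimplemented BARs hold
INVALID_PHYS_ADDR.]

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative names, not the kvm-unit-tests implementation. */
#define PCI_BAR_NUM         6
#define INVALID_PHYS_ADDR   ((uint64_t)-1)

/* Memory BAR type bits [2:1] == 10b means 64-bit, i.e. bit 2 is set. */
#define PCI_BAR_MEM_TYPE_64 0x4u

struct pci_dev {
	uint32_t bar[PCI_BAR_NUM];      /* raw config-space BAR dwords */
	uint64_t resource[PCI_BAR_NUM]; /* decoded full addresses */
};

/*
 * Decode raw BAR dwords into resource[] following the scheme
 * discussed in the thread:
 *  - a 64-bit BAR's full address goes into both slots it occupies;
 *  - unimplemented BARs are marked with INVALID_PHYS_ADDR.
 * (Real decoding also sizes BARs by writing all-ones; omitted here.)
 */
static void pci_dev_decode_resources(struct pci_dev *dev)
{
	int i;

	for (i = 0; i < PCI_BAR_NUM; i++)
		dev->resource[i] = INVALID_PHYS_ADDR;

	for (i = 0; i < PCI_BAR_NUM; i++) {
		uint32_t lo = dev->bar[i];

		if (!lo)
			continue; /* unimplemented BAR */

		if ((lo & PCI_BAR_MEM_TYPE_64) && i + 1 < PCI_BAR_NUM) {
			/* Combine low and high dwords; low 4 bits are flags. */
			uint64_t addr = ((uint64_t)dev->bar[i + 1] << 32) |
					(lo & ~0xfull);
			dev->resource[i] = addr;
			dev->resource[i + 1] = addr; /* same full address */
			i++; /* skip the high-dword slot */
		} else {
			dev->resource[i] = lo & ~0xfull;
		}
	}
}
```

With this layout, indexing the resource array with either half of a
64-bit BAR yields the same valid address, which is what makes the
(bar_num + 1) slot unambiguous.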