On Thu, 26 Sep 2019 at 22:45, Rob Herring <robh@xxxxxxxxxx> wrote:
>
> On Wed, Sep 25, 2019 at 5:37 AM Andrew Murray <andrew.murray@xxxxxxx> wrote:
> >
> > On Tue, Sep 24, 2019 at 04:46:24PM -0500, Rob Herring wrote:
> > > Convert ARM Versatile host bridge to use the common
> > > pci_parse_request_of_pci_ranges().
> > >
> > > Cc: Lorenzo Pieralisi <lorenzo.pieralisi@xxxxxxx>
> > > Cc: Bjorn Helgaas <bhelgaas@xxxxxxxxxx>
> > > Signed-off-by: Rob Herring <robh@xxxxxxxxxx>
> > > ---
> > >  static int versatile_pci_probe(struct platform_device *pdev)
> > >  {
> > >  	struct device *dev = &pdev->dev;
> > >  	struct resource *res;
> > > -	int ret, i, myslot = -1;
> > > +	struct resource_entry *entry;
> > > +	int ret, i, myslot = -1, mem = 0;
> >
> > I think 'mem' should be initialised to 1, at least that's what the original
> > code did. However I'm not sure why it should start from 1.
>
> The original code I moved from arch/arm had 32MB @ 0x0c000000 called
> "PCI unused" which was requested with request_resource(), but never
> provided to the PCI core. Otherwise, I kept the setup the same. No one
> has complained in 4 years, though I'm not sure anyone would have
> noticed if I just deleted PCI support...

Yes, QEMU users will notice if you drop or break PCI support :-)
I don't think anybody is using real hardware PCI though.

Anyway, the 'mem' indexes here matter because you're passing them to
PCI_IMAP() and PCI_SMAP(), which are writing to hardware registers.
If you write to PCI_IMAP0 when we were previously writing to PCI_IMAP1
then suddenly you're not configuring the behaviour for accesses to the
PCI window that's at CPU physaddr 0x50000000, you're configuring the
window that's at CPU physaddr 0x44000000, which is entirely different
(and notably is smaller, being only 0x0c000000 in size rather than
0x10000000).

If this is supposed to be a no-behaviour-change refactor then it would
probably be a good test to check that we're writing exactly the same
values to the hardware registers on the device as we were before the
change.

thanks
-- PMM
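
P.S. For anyone following along, the loop in question looks roughly like
this (a sketch from memory of the driver's memory-window setup, so the
resource list name and the exact value written to PCI_SMAP may not match
the tree precisely):

	/*
	 * Each DT memory range is handed to the next hardware window,
	 * so 'mem' is a register index, not just a counter: PCI_IMAP(n)
	 * and PCI_SMAP(n) select window n's mapping registers.
	 */
	resource_list_for_each_entry(entry, &pci_res) {
		struct resource *res = entry->res;

		if (resource_type(res) != IORESOURCE_MEM)
			continue;

		writel(res->start >> 28, PCI_IMAP(mem));
		writel(__pa(PAGE_OFFSET) >> 28, PCI_SMAP(mem));
		mem++;
	}

Starting 'mem' at 0 rather than 1 means the first DT memory range now
programs IMAP0/SMAP0 (the 0x44000000 window) instead of IMAP1/SMAP1
(the 0x50000000 window), which is the behaviour change described above.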