On Fri, Oct 26, 2012 at 8:08 AM, Chris Metcalf <cmetcalf@xxxxxxxxxx> wrote:
> Cyberman: it seems like your bias hack is working for you.  But, as Bjorn
> says, this sounds like a driver bug.  What happens if you just revert your
> changes, but then in mvsas.c change the "if (!res_start || !res_len)" to
> just say "if (!res_len)"?  That seems like the true error test.  If that
> works, you should submit that change to the community.

I don't *think* that is going to be enough, even with the kernel that
has some I/O space support, because both devices are assigned identical
resources:

    pci 0000:01:00.0: BAR 2: assigned [io  0x0000-0x007f]
    pci 0001:01:00.0: BAR 2: assigned [io  0x0000-0x007f]

The I/O space support that's there is broken because we think the same
I/O range is available on both root buses, which is probably not the
case:

    pci_bus 0000:00: resource 0 [io  0x0000-0xffffffff]
    pci_bus 0001:00: resource 0 [io  0x0000-0xffffffff]

If mvsas really doesn't need the I/O BAR, I think it's likely that
making it use pci_enable_device_mem() will make both devices work even
without I/O space support in the kernel.

> Bjorn et al: does it seem reasonable to add a bias to the mappings so that
> we never report a zero value as valid?  This may be sufficiently defensive
> programming that it's just the right thing to do regardless of whether
> drivers are technically at fault or not.  If so, what's a good bias?  (I'm
> inclined to think 64K rather than 4K.)

I/O space is very limited to begin with (many architectures only *have*
64K), so I hesitate to add a bias in the PCI core.  But we do something
similar in arch_remove_reservations(), and I think you could implement
it that way if you wanted to.

Bjorn
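
[Editor's note: a rough sketch of the two changes discussed above, as
diff hunks against mvsas.  The exact file, function names, and context
lines are assumptions and may not match the driver source exactly.]

The pci_enable_device_mem() suggestion, which enables only the memory
BARs and so sidesteps the I/O space problem entirely:

    -	rc = pci_enable_device(pdev);
    +	rc = pci_enable_device_mem(pdev);

And Chris's suggested error-test fix, which stops treating a valid BAR
that happens to start at address 0 as a failure:

    -	if (!res_start || !res_len)
    +	if (!res_len)
    		goto err_out;

The second change alone would still leave both devices mapped to the
same (bogus) [io 0x0000-0x007f] range, which is why the
pci_enable_device_mem() route looks more promising if the driver truly
never touches the I/O BAR.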