On 4/29/09, Hollis Blanchard <hollisb@xxxxxxxxxx> wrote:
> On Wed, 2009-04-29 at 12:38 -0500, Anthony Liguori wrote:
> > Hollis Blanchard wrote:
> > > On Wed, 2009-04-29 at 12:38 +0200, Jan Kiszka wrote:
> > > >
> > > > What is the alignment of those regions then? None? And do regions of
> > > > different types overlap even on the same page? Maybe the check reveals
> > > > some deeper conflict wrt KVM. Can you point me to the involved code
> > > > files?
> > >
> > > These PCI controllers make separate calls to
> > > cpu_register_physical_memory() for separate callbacks. Reading
> > > ppce500_pci_init(), for example:
> > > 0xe0008000 -> CFGADDR (4 bytes)
> > > 0xe0008004 -> CFGDATA (4 bytes)
> > > 0xe0008c00 -> other registers
> >
> > That's goofy. If a single device owns the entire region, it should
> > register the entire region instead of relying on subpage functionality.
> >
> > It just requires a switch() on the address to dispatch to the
> > appropriate functions. It should be easy enough to fix.
>
> There are two cases that share this code path:
> 1) the same driver registers multiple regions in the same page
> 2) different drivers register regions in the same page
>
> This is case 1, and as you say, we could add a switch statement to
> handle it. I did not look closely to see how many other callers fall
> into this category.
>
> However, are you suggesting that case 2 is also "goofy" and will never
> work with KVM? It works in qemu today. As long as case 2 works, case 1
> will work too, so why change anything?

I don't see why it would be wrong to register multiple regions within
the same page. It means that you can catch accesses to unassigned
addresses between the regions.

There are two instances of the Sparc32 DMA controller, one serving ESP
and the other Lance, at addresses dma_base and dma_base + 16. Before
subpage support, this was handled with a switch, but now we rely on the
subpage mechanism instead.
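For illustration, the switch()-based dispatch Anthony describes might look like the sketch below. This is not QEMU's actual MemoryRegion/IO callback API; the function names, the register state, and the unassigned-hole behavior are all hypothetical. Only the offsets (relative to the 0xe0008000 base) come from the register map quoted in the thread.

```c
#include <stdint.h>

/* Register offsets within the e500 PCI controller's MMIO region,
 * taken from the addresses quoted above (base 0xe0008000). */
#define PCIE500_CFGADDR  0x000u  /* 0xe0008000 */
#define PCIE500_CFGDATA  0x004u  /* 0xe0008004 */
#define PCIE500_REGS     0xc00u  /* 0xe0008c00, other registers */

/* Stand-in register state; a real device would keep this in its
 * device state structure. */
static uint32_t cfgaddr;
static uint32_t cfgdata;

/* One read callback covering the whole region: a single registration,
 * with a switch on the offset instead of per-register subpage entries. */
static uint32_t pci_e500_readl(uint32_t offset)
{
    switch (offset) {
    case PCIE500_CFGADDR:
        return cfgaddr;
    case PCIE500_CFGDATA:
        return cfgdata;
    default:
        if (offset >= PCIE500_REGS) {
            return 0;            /* "other registers", stubbed out here */
        }
        return 0xffffffff;       /* unassigned hole between the regions */
    }
}

static void pci_e500_writel(uint32_t offset, uint32_t val)
{
    switch (offset) {
    case PCIE500_CFGADDR:
        cfgaddr = val;
        break;
    case PCIE500_CFGDATA:
        cfgdata = val;
        break;
    default:
        break;                   /* writes elsewhere ignored in this sketch */
    }
}
```

One point the thread raises in favor of separate registrations is visible in the default branch: with a single large region, the device itself must decide what an access to the gap between registers means, whereas per-region registration lets the core catch those accesses as unassigned.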