On Fri, Jun 26, 2009 at 10:31:49PM -0700, H. Peter Anvin wrote:
> Grant Grundler wrote:
>>>> +
>>>> +	/* Cap the iomem address space to what is addressable on all CPUs */
>>>> +	iomem_resource.end &= (1ULL << c->x86_phys_bits) - 1;
>>
>> Does x86_phys_bits represent the number of address lines/bits handled by
>> the memory controller, coming out of the CPU, or handled by the
>> "north bridge" (IO controller)?
>>
>
> x86_phys_bits represents the top end of what the processor can address.

Ok - I interpret that to mean the number of physical address bits the
processor can deal with, regardless of the memory controller or IO
controller.

>> I was assuming all three are the same thing, but that might not be true
>> with "QPI" or whatever Intel is calling its serial interconnect these
>> days. I'm wondering if the addressing capability of CPU->memory
>> controller might be different than CPU->IO controller.
>>
>> Parallel interconnects are limited by the number of lines wired to
>> transmit address data, and I expect that's where x86_phys_bits
>> originally came from. Chipsets _were_ all designed around those limits.
>
> Serial interconnects behave the same way, it's just that the address
> bits are sent in serial order.

The bits going across the wire are the protocol for the interconnect and
not necessarily what the CPU has implemented.

> Something is seriously goofy here, and
> it's probably reasonably straightforward to figure out what.

Ok - just tracking it down will be difficult... We really need to know
the state of the machine (MCE dump?) when it wedges, and can then
determine which code is trying to access the range.

thanks,
grant