Re: [BUG 2.6.31-rc1] HIGHMEM64G causes hang in PCI init on 32-bit x86

Grant Grundler wrote:
> +
> +	/* Cap the iomem address space to what is addressable on all CPUs */
> +	iomem_resource.end &= (1ULL << c->x86_phys_bits) - 1;
>
> Does x86_phys_bits represent the number of address lines/bits handled by
> the memory controller, coming out of the CPU, or handled by the
> "north bridge" (IO controller)?


x86_phys_bits represents the top end of what the processor can address.
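
For reference, the kernel reads this from CPUID leaf 0x80000008 (EAX bits
7:0) when that leaf exists; otherwise it falls back to a fixed default. A
minimal user-space sketch of the same query, assuming GCC's <cpuid.h>
helpers:

/*
 * Query the CPU's physical address width from CPUID leaf
 * 0x80000008 (EAX[7:0]) -- the same field the kernel uses to
 * populate x86_phys_bits.  User-space illustration only.
 */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Make sure the extended leaf exists before using it. */
	if (__get_cpuid_max(0x80000000, NULL) < 0x80000008) {
		puts("CPUID leaf 0x80000008 not supported");
		return 1;
	}

	__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx);
	printf("physical address bits: %u\n", eax & 0xff);
	printf("virtual address bits:  %u\n", (eax >> 8) & 0xff);
	return 0;
}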

> I was assuming all three are the same thing, but that might not be true
> with "QPI" or whatever Intel is calling its serial interconnect these days.
> I'm wondering if the addressing capability of the CPU->memory controller
> might be different from that of the CPU->IO controller.

Parallel interconnects are limited by the number of lines wired to
transmit address data, and I expect that's where x86_phys_bits originally
came from. Chipsets _were_ all designed around those limits.

Serial interconnects behave the same way; it's just that the address bits
are sent serially rather than in parallel. Something is seriously goofy
here, and it's probably reasonably straightforward to figure out what.
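
To make the quoted hunk concrete: iomem_resource.end is initialized to -1
(all ones) in kernel/resource.c, so ANDing it with
(1ULL << x86_phys_bits) - 1 caps it at the highest physical address the
CPU can generate. A standalone sketch of the arithmetic (the struct below
is a stand-in for illustration, not the kernel's struct resource):

/*
 * Demonstrate the masking in the quoted hunk: an all-ones
 * resource end ANDed with (1ULL << phys_bits) - 1 becomes the
 * top of the CPU-addressable range.
 */
#include <stdio.h>
#include <stdint.h>

struct res { uint64_t start, end; };	/* stand-in, not struct resource */

int main(void)
{
	struct res iomem = { .start = 0, .end = ~0ULL };
	unsigned int phys_bits = 36;	/* e.g. a PAE-era CPU */

	iomem.end &= (1ULL << phys_bits) - 1;

	/* Prints 0xfffffffff: the top of a 36-bit address space. */
	printf("iomem end capped at 0x%llx\n",
	       (unsigned long long)iomem.end);
	return 0;
}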

	-hpa
