On Tue, Jan 9, 2018 at 11:31 AM, Christian König <christian.koenig@xxxxxxx> wrote:
>>
>> For example, was there a reason for that random 756GB address? Is the
>> limit of the particular AMD 64-bit bar perhaps at the 1TB mark (and
>> that "res->end" value is because "close to it, but not at the top")?
>
> That is actually a hardware limit documented in the BIOS and Kernel
> developers guide for AMD CPUs
> (https://support.amd.com/TechDocs/49125_15h_Models_30h-3Fh_BKDG.pdf).
>
> I should probably add a comment explaining this.

Ok, good. So at least some of those values have reasons. And yes,
documenting it would be great.

>> A starting point like "halfway from the hardware limit" would
>> actually be a better reason. Or just "we picked an end-point, let's
>> pick a starting point that gives us a _sufficient_ - but not excessive
>> - window".
>
> Well that is exactly what the 256GB patch was doing.

Ahh, so 0xbd00000000ull is basically 256GB from the end point, and the
end point is basically specified by hardware.

So yes, I think that's a starting point that can at least be
explained: let's try to make it something "big enough" (and 256GB
seems to be big enough).

Of course, since this is all "pick a random number", maybe that breaks
something _else_.

We do traditionally have a very similar issue for the 32-bit PCI
starting address, where we used to have tons of issues with "we don't
know all resources, we want to try to come up with a range that is out
of the way".

There we try to find a big enough gap in the e820 memory map
(e820_search_gap()), and the end result is pci_mem_start (which is
then exposed as PCIBIOS_MIN_MEM to the PCI resource allocation).

If worst comes to worst, maybe we should look at having something
similar for the full 64-bit range.

Linus