On 11/21/2012 11:54 AM, Yinghai Lu wrote:
> On Wed, Nov 21, 2012 at 6:37 AM, Vivek Goyal <vgoyal at redhat.com> wrote:
>> On Tue, Nov 20, 2012 at 11:31:38PM -0800, Yinghai Lu wrote:
>>
>> [..]
>>> +	/* avoid cross GB boundary */
>>> +	align = real_mode->kernel_alignment;
>>> +	addr = locate_hole(info, size, align, 0x100000, -1, -1);
>>> +	if (addr == ULONG_MAX)
>>> +		die("can not load bzImage64");
>>> +	/* same GB ? */
>>> +	while ((addr >> 30) != ((addr + size - 1) >> 30)) {
>>> +		addr = locate_hole(info, size, align, 0x100000,
>>> +				   round_down(addr + size - 1, (1UL<<30)), -1);
>>> +		if (addr == ULONG_MAX)
>>> +			die("can not load bzImage64");
>>> +	}
>>> +	dbgprintf("Found kernel buffer at %lx size %lx\n", addr, size);
>>
>> Where does this limitation of not loading the kernel across a GB boundary
>> come from?
>
> From the kernel's arch/x86/kernel/head_64.S.
>
> It only sets up an identity mapping for the first 1G, and if it finds that
> the code is above 1G, it sets up an extra identity mapping for the new
> _text.._end range.
>
> To keep that check and the extra mapping simple, and also to save two
> extra pages for the mapping, we limit _text.._end to the same GB range.

No, this is backwards.  We should fix that limitation instead.

	-hpa