RE: allocating high-memory pages

At 06:15 AM 4/13/2005, Arjan van de Ven wrote:
On Wed, 2005-04-13 at 00:30 -0400, Michael R. Hines wrote:
> alloc_pages() only lets me allocate up to 2^9 pages at once, or 2 MB.
>
> Let's take the following situation: I've just booted a machine that
> contains 4 GB of memory on a 32-bit machine.
>
> How would I grab ALL of that memory statically, atomically, and
> contiguously from within the kernel for use?
>
> Is this possible in linux?

no.

you're pretty much stuck with the 2^9 limit; the kernel in fact doesn't
keep memory in bigger chunks. While you could allocate two 2^9
chunks and hope that they land next to each other, that's a lot of luck
to gamble on...

So, if I did want to grab all (or as much as possible) of the memory in the system,
would I end up having to design my own memory "subsystem" of sorts
.....one that either 1. works on top of the buddy system or 2. bypasses it?


Do any such patches or code bases exist that you know of?


/*********************************/
Michael R. Hines
Grad Student, Florida State Dept. of Computer Science
http://www.cs.fsu.edu/~mhines/
Jusqu'à ce que le futur vienne... (Until the future comes...)
/*********************************/


--
Kernelnewbies: Help each other learn about the Linux kernel.
Archive: http://mail.nl.linux.org/kernelnewbies/
FAQ: http://kernelnewbies.org/faq/
