>>alloc_pages() only lets me allocate up to 2^9 pages at once, or 2 MB.
>That's because of the design of the buddy system.
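Right, and for anyone hitting the same wall, the cap is visible from
the API side. A rough sketch (not my actual code; it assumes a config
where MAX_ORDER is 10, so order 9, i.e. 2^9 pages, is the largest
request the buddy allocator will grant):

#include <linux/gfp.h>
#include <linux/mm.h>

/* Probe for the largest order alloc_pages() will grant.  With
 * MAX_ORDER == 10, orders 0..9 are valid, which matches the
 * 2^9-page (2 MB) cap above. */
static unsigned int largest_grantable_order(void)
{
        unsigned int order;

        for (order = 0; order < MAX_ORDER; order++) {
                struct page *page = alloc_pages(GFP_KERNEL, order);

                if (!page)
                        break;                  /* allocator refused this order */
                __free_pages(page, order);      /* hand it straight back */
        }
        return order ? order - 1 : 0;           /* last order that succeeded */
}

(Under memory pressure even small orders can fail, so this only means
much on a freshly booted box.)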
>>Let's take the following situation: I've just booted a 32-bit machine
>>that contains 4 GB of memory.
>>How would I grab ALL of that memory statically, atomically, and
>>contiguously from within the kernel for use?
>>Is this possible in Linux?
>Not sure about this one. During boot, the kernel creates the free
>lists for all of the free pages, right? Once this is done, any kernel
>component can grab pages by calling the page allocator functions, so I
>don't think your idea of grabbing all the free pages is going to work...
I was afraid of that.
>...but even if you succeeded in grabbing all the memory, what will
>happen to the rest of the system, which requires free pages?
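Point taken. The grab loop can at least ask politely, though: with
__GFP_NORETRY the allocator fails fast instead of forcing reclaim, and
__GFP_NOWARN keeps the logs quiet. A sketch (the pool and function
names here are made up, not from my code):

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>

static LIST_HEAD(grabbed_pages);        /* hypothetical pool for the server */

/* Grab order-0 pages until the allocator says no.  Once a page is
 * allocated, its page->lru field is ours to chain it with. */
static unsigned long grab_what_we_can(unsigned long max_pages)
{
        unsigned long got = 0;

        while (got < max_pages) {
                struct page *page = alloc_page(GFP_KERNEL | __GFP_NORETRY |
                                               __GFP_NOWARN);
                if (!page)
                        break;          /* leave the rest for the system */
                list_add(&page->lru, &grabbed_pages);
                got++;
        }
        return got;
}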
I'm writing a remote-memory system. (Well, it's already been
written and performance-tested for NORMAL-zone memory.)
In short, the remote nodes that are storing memory pages (for client
machines) need to be able to acquire as much high memory as possible.
That remote node (the server) is dedicated and shouldn't be running
any memory-bound processes, so I'm not too worried about the rest of
the system needing free pages.
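Since those pages come from the HIGHMEM zone, they have no permanent
kernel mapping on a 32-bit box, so the server touches them through
kmap(). Roughly (again a sketch, not the real code):

#include <linux/gfp.h>
#include <linux/highmem.h>
#include <linux/string.h>

/* Grab one page, preferring ZONE_HIGHMEM, and zero it through a
 * temporary kernel mapping. */
static struct page *grab_one_highmem_page(void)
{
        struct page *page = alloc_page(GFP_HIGHUSER);   /* __GFP_HIGHMEM set */
        void *vaddr;

        if (!page)
                return NULL;

        vaddr = kmap(page);             /* temporary mapping into kernel space */
        memset(vaddr, 0, PAGE_SIZE);    /* e.g. clear before storing client data */
        kunmap(page);                   /* drop the mapping again */
        return page;
}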
/*********************************/
Michael R. Hines
Grad Student, Florida State
Dept. Computer Science
http://www.cs.fsu.edu/~mhines/
Jusqu'à ce que le futur vienne... (Until the future comes...)
/*********************************/