Thanks for the reply! I was talking about the following paragraph from the reference you provided:
The kernel (on the x86 architecture, in the default configuration) splits the 4-GB virtual address space between user-space and the kernel; the same set of mappings is used in both contexts. A typical split dedicates 3 GB to user space, and 1 GB for kernel space.* The kernel’s code and data structures must fit into that space, but the biggest consumer of kernel address space is virtual mappings for physical memory. The
kernel cannot directly manipulate memory that is not mapped into the kernel’s address space. The kernel, in other words, needs its own virtual address for any memory it must touch directly. Thus, for many years, the maximum amount of physical memory that could be handled by the kernel was the amount that could be mapped into the kernel’s portion of the virtual address space, minus the space needed for the kernel code itself. As a result, x86-based Linux systems could work with a maximum of a little under 1 GB of physical memory.
I am still not clear about the sentences in bold. Why is the space needed
for the kernel code subtracted from the amount that could be mapped into
the kernel's portion of the virtual address space? Also, what difference
does the fact that "the biggest consumer of kernel address space is
virtual mappings for physical memory" make to the amount of memory the
kernel can handle?
I am a little confused.
Thanks
Vaibhav Jain
On Mon, Jul 25, 2011 at 5:51 PM, Dave Hylands <dhylands@xxxxxxxxx> wrote:
Hi Vaibhav,
My numbers/comments are for the ARM processor; the x86 may be slightly
different.

On Mon, Jul 25, 2011 at 3:17 PM, Vaibhav Jain <vjoss197@xxxxxxxxx> wrote:
> Hi,
>
> I read a few articles on linux virtual memory management such as this one:
> http://lwn.net/Articles/75174/
>
> which say that earlier linux kernels could only use memory slightly below
> 1 GB. They have given the reason for it but I am unable to understand. They
> further describe the use of high memory and low memory.
> Could anybody please explain the reason for the kernel not being able to
> use the 1 GB completely?
> Also please provide references for high memory and low memory.

The typical configuration for the kernel gives addresses from 0x00000000
up to 0xC0000000 to user space (it's actually a little less than 3 GB,
since modules are loaded in the space just below 0xC0000000).
That leaves 0xC0000000 through 0xFFFFFFFF for kernel virtual memory (or
1 GB). Devices also need some I/O mapping space, which takes away from
that 1 GB.
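To put rough numbers on it, here is a back-of-the-envelope sketch in C.
The values (PAGE_OFFSET at 0xC0000000 and a ~128 MB vmalloc/ioremap
reserve) are the classic x86 defaults and are only assumptions here; both
are configurable, so treat the output as illustrative:

    /* Sketch of where the "a little under 1 GB" figure comes from.
     * PAGE_OFFSET and the 128 MB vmalloc/ioremap reserve are typical
     * 32-bit x86 defaults, not guaranteed values. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long page_offset     = 0xC0000000ULL;   /* 3 GB / 1 GB split */
        unsigned long long top_of_vspace   = 0x100000000ULL;  /* end of 32-bit addresses */
        unsigned long long kernel_window   = top_of_vspace - page_offset;
        unsigned long long vmalloc_reserve = 128ULL << 20;    /* vmalloc + ioremap space */

        printf("kernel virtual window : %4llu MB\n", kernel_window >> 20);
        printf("directly mapped lowmem: %4llu MB\n",
               (kernel_window - vmalloc_reserve) >> 20);      /* about 896 MB */
        return 0;
    }

Whatever is left after that reserve is the "low memory" the kernel can
reach through its permanent direct mapping; anything beyond it has to be
treated as high memory.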
I think what you're calling low memory is kernel logical memory. See
http://lwn.net/images/pdf/LDD3/ch15.pdf on page 414 (not the 414th
page of the PDF, but the page with 414 printed on the bottom).
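As a rough illustration of why logical (low-memory) addresses are cheap
for the kernel: they differ from physical addresses by a constant offset,
so converting between them is plain arithmetic. The helpers below are
simplified stand-ins for the kernel's __pa()/__va() macros, assuming the
3 GB/1 GB split; they are not the real implementation:

    /* Simplified stand-ins for __pa()/__va() on a 32-bit system with
     * PAGE_OFFSET == 0xC0000000.  Only valid for low-memory (directly
     * mapped) addresses. */
    #define SKETCH_PAGE_OFFSET 0xC0000000UL

    static inline unsigned long sketch_pa(unsigned long kvaddr)
    {
            return kvaddr - SKETCH_PAGE_OFFSET;   /* kernel logical -> physical */
    }

    static inline unsigned long sketch_va(unsigned long paddr)
    {
            return paddr + SKETCH_PAGE_OFFSET;    /* physical -> kernel logical */
    }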
High memory is memory which is not directly accessible by the kernel.
You need to use kmap/kunmap to map the memory into the kernel virtual
memory space. Low memory is always accessible by the kernel.
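For example, a driver that wants to touch a page which may live in high
memory would do something like the sketch below. kmap()/kunmap() are the
real 2.6-era interfaces; the surrounding function is made up for
illustration and leaves out any driver context:

    #include <linux/highmem.h>
    #include <linux/mm.h>
    #include <linux/string.h>

    /* Zero a page that may be in high memory.  kmap() gives the kernel a
     * temporary virtual address for it; kunmap() releases that mapping.
     * For a low-memory page kmap() simply returns its existing logical
     * address, so this works either way. */
    static void zero_any_page(struct page *page)
    {
            void *vaddr = kmap(page);
            memset(vaddr, 0, PAGE_SIZE);
            kunmap(page);
    }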
So user-mode programs get their memory allocated from high memory (if
high memory exists), since the kernel doesn't typically need to access
user-space memory.
It is possible to set some CONFIG options and change the 3 GB/1 GB split
to 2 GB/2 GB or 1 GB/3 GB, but 3 GB/1 GB is the normal default.
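For reference, the split is chosen at build time through the
CONFIG_VMSPLIT_* options; the fragment below is only a sketch of the
idea, the real Kconfig and header logic is more involved:

    /* Rough sketch of how the chosen CONFIG_VMSPLIT_* option turns into
     * PAGE_OFFSET.  The option names match the x86/ARM Kconfig; the
     * mapping here is a simplification of what the kernel really does. */
    #if defined(CONFIG_VMSPLIT_1G)
    # define SKETCH_PAGE_OFFSET 0x40000000UL    /* 1 GB user / 3 GB kernel */
    #elif defined(CONFIG_VMSPLIT_2G)
    # define SKETCH_PAGE_OFFSET 0x80000000UL    /* 2 GB user / 2 GB kernel */
    #else /* CONFIG_VMSPLIT_3G, the default */
    # define SKETCH_PAGE_OFFSET 0xC0000000UL    /* 3 GB user / 1 GB kernel */
    #endif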
--
Dave Hylands
Shuswap, BC, Canada
http://www.davehylands.com