Re: maximum kernel address space and its implications for user space memory allocation

Mulyadi Santosa wrote:

Dear Joe Knapka
[snip]

As we all know, with the 3G/1G VM split (3 GB for user space, 1 GB for kernel space), the Linux kernel can only identity-map the first 896 MB of available RAM; the rest is treated as highmem.

Now let's assume we have a PC with 1 GB of RAM and we disable highmem support in the currently running Linux kernel. Also assume there is still around 900 MB of free RAM available (the rest is currently consumed by kernel code and data, plus several user space daemons) and there is no swap partition mounted at all. When a task keeps allocating memory (and the page frames are really allocated, so it's not just a matter of extending an existing VMA region such as the heap) and it reaches the 896 MB mark, can it keep asking for more RAM?
Merely allocating memory (e.g. via malloc() or brk()) is NOT what causes physical RAM to be allocated. Basically, RAM is allocated to user tasks only during page-fault processing.
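
A quick way to see this from user space (a minimal sketch, not part of the original exchange; it assumes a Linux system with /proc mounted, 4 KB pages, and a glibc-style malloc()) is to watch the process's own VmRSS in /proc/self/status before and after touching the pages:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print this process's resident set size, as reported by the kernel. */
static void print_vmrss(const char *label)
{
	char line[256];
	FILE *f = fopen("/proc/self/status", "r");

	if (!f)
		return;
	while (fgets(line, sizeof(line), f))
		if (strncmp(line, "VmRSS:", 6) == 0)
			printf("%-15s %s", label, line);
	fclose(f);
}

int main(void)
{
	size_t len = 256 * 1024 * 1024;		/* 256 MB of virtual space */
	size_t i;
	unsigned long sum = 0;
	char *p;

	print_vmrss("before malloc:");
	p = malloc(len);			/* only creates/extends a VMA */
	if (!p)
		return 1;
	print_vmrss("after malloc:");

	for (i = 0; i < len; i += 4096)		/* one write per page => page faults */
		p[i] = 1;
	print_vmrss("after touching:");

	for (i = 0; i < len; i += 4096)		/* read back so the writes aren't optimized away */
		sum += p[i];
	printf("checksum: %lu\n", sum);

	free(p);
	return 0;
}

With a typical glibc, a 256 MB request is serviced by an anonymous mmap(), so the "after malloc" figure should look almost identical to the "before" one, while the "after touching" figure should jump by roughly 256 MB.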

A user task does not ask for RAM; it allocates memory in its VIRTUAL address space. It's true that when all physical RAM pages are in use, the kernel cannot allocate any more physical RAM, but it does not follow that user task requests for virtual address space will fail. The process of allocation involves ONLY extending a VMA; no physical page is required until the task actually accesses an address in the newly-allocated region.
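
The VMA itself is visible immediately. The following sketch (again mine for illustration, assuming Linux and an anonymous mmap()) finds the new region in /proc/self/maps right after the mapping is created, before any physical page has been assigned to it:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 64 * 1024 * 1024;	/* 64 MB of virtual address space */
	char line[256], start[32];
	FILE *f;
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* The kernel has only recorded a VMA so far; find the matching
	 * line in the mapping table to show the region already exists. */
	snprintf(start, sizeof(start), "%lx-", (unsigned long)p);
	f = fopen("/proc/self/maps", "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		if (strncmp(line, start, strlen(start)) == 0)
			printf("new VMA: %s", line);
	fclose(f);

	munmap(p, len);
	return 0;
}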

The virtual address requested by a user task is either mapped into RAM (in which case the usual TLB stuff happens and the task proceeds as usual); or it is NOT mapped, in which case the task experiences a page fault and sleeps pending the page being mapped into RAM. (In the case of newly-allocated VM, almost certainly the page is NOT mapped.) The kernel's VM system then sets about clearing out some space so that the requested page can be mapped. That process could succeed or fail, depending on the available swap space, and probably other things. But usually the kernel will be able to evict some other page from RAM and allocate the freed physical page to the process that got the page fault.
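
The fault-driven side of this can also be observed with mincore(), which reports whether each page of a mapping is resident in RAM. The sketch below (not part of the original reply, and again assuming Linux) maps 1024 anonymous pages; none of them are resident until they are written to and the page-fault handler assigns physical frames:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

/* Count how many pages of [addr, addr+len) are currently in RAM. */
static size_t resident_pages(void *addr, size_t len, size_t page)
{
	size_t npages = len / page, i, count = 0;
	unsigned char vec[npages];		/* one status byte per page */

	if (mincore(addr, len, vec) != 0)
		return 0;
	for (i = 0; i < npages; i++)
		if (vec[i] & 1)			/* bit 0 set => page is resident */
			count++;
	return count;
}

int main(void)
{
	size_t page = sysconf(_SC_PAGESIZE);
	size_t len = 1024 * page;		/* 1024 pages of virtual space */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	printf("resident after mmap : %zu/1024 pages\n",
	       resident_pages(p, len, page));

	memset(p, 0, len);			/* every write faults a page in */

	printf("resident after touch: %zu/1024 pages\n",
	       resident_pages(p, len, page));

	munmap(p, len);
	return 0;
}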

My understanding so far is: user space memory allocation is satisfied by using free pages from the ZONE_NORMAL pool. Since ZONE_NORMAL ranges from 16 MB to 896 MB, once it is all used, the kernel memory allocator will start looking in ZONE_HIGHMEM. But since we disabled highmem support, ZONE_HIGHMEM doesn't exist. Therefore, the kernel can't satisfy user space requests beyond 896 MB, thus they fail. My other understanding is that 128 MB of RAM is simply wasted since it can't be addressed.
I'm pretty sure that space will be used by the boot-time fixed mappings (grep the kernel source for "fixmap"), but I could be wrong.
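
As an aside (not from the thread, and it assumes /proc/zoneinfo, which only exists on reasonably recent 2.6 kernels), a quick way to check which zones a particular kernel actually set up is to pull the zone headers out of /proc/zoneinfo. A 32-bit kernel built without highmem support should list DMA and Normal but no HighMem zone:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/zoneinfo", "r");

	if (!f) {
		perror("/proc/zoneinfo");
		return 1;
	}
	/* Each zone's block starts with a line like "Node 0, zone   Normal". */
	while (fgets(line, sizeof(line), f))
		if (strncmp(line, "Node", 4) == 0)
			fputs(line, stdout);
	fclose(f);
	return 0;
}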

-- JK



--
Kernelnewbies: Help each other learn about the Linux kernel.
Archive:       http://mail.nl.linux.org/kernelnewbies/
FAQ:           http://kernelnewbies.org/faq/

