Re: logical/virtual addresses and high-memory

Bahadir Balban wrote:

> Thank you very much. Your answer is well appreciated. I see the point
> now. Is it then correct to summarize: kernel and userspace virtual
> memory is split into 1GB/3GB regions to share the virtual memory space
> and avoid TLB flushes on user/kernel mode switches, whereas the kernel
> has only 896MB of logical space because mapping the whole address
> space would require lots of PTE space?

No, the "whereas" part has nothing to do with it. A single page directory can only map 4G, so if you don't want to have to change it when you enter/leave the kernel, you are going to have to share it between user and kernel space. With the normal 3G/1G split, the kernel only gets 1G of address space, reserves 128M of that for various purposes, and is left with only 896M of address space for a direct mapping of memory.

The "lots of space" thing is space that's needed to describe all pages in the system, and yes, that can grow large, but since it needs to be in the direct mapped portion, you'd only want _more_ of it, not less.

> I guess you could still have a 1GB/3GB split, but at the same time be
> able to keep all the memory mapped in page tables, whether it belongs
> to the 1GB kernel or the 3GB user space. Is this correct?

I'm not particularly sure what you're asking here. A single set of page tables (there's one per process) does map both user space for that process and kernel space -- that's the idea of the split. But given that in that 1G of kernel address space only 896M is set aside for a direct map of memory, there's no way you are going to map more than 896M of memory there (directly, permanently).

> [ highmem ]

> > It's also not very fast, but when a TLB flush is the
> > alternative it doesn't easily get worse.


> When does it get worse? Perhaps if you attempt to map a large unmapped
> area, it would take more than a single TLB flush?

Highmem overhead can get worse when there's lots of it and/or a great many accesses to it. The address space where kmap() maps these highmem pages also lives in that 128M region above the direct memory map, and is in fact quite a bit more limited than that: in current kernels, only 1024 pages can be mapped at any one time. A process that wants to map some highmem pages may therefore fairly often have to wait for another process to free up some slots first. kmap_atomic() even has only one slot available per type of use.
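
In kernel code the pattern described above looks roughly like this (a minimal sketch; the function name and the assumption that the page came from something like alloc_page(GFP_HIGHUSER) are mine, not existing kernel code):

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/string.h>

/* Zero a page that may live in highmem and thus has no permanent mapping. */
static void zero_any_page(struct page *page)
{
        void *addr;

        addr = kmap(page);              /* may sleep waiting for a free slot */
        memset(addr, 0, PAGE_SIZE);     /* use the page through the temporary mapping */
        kunmap(page);                   /* release the slot for others */
}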

Please don't ask me to quantify "lots" or "a great many"; you'd need to ask someone who's done lots of testing on huge boxes.

> Do you think high memory is less maintainable because you wouldn't be
> sure whether any page or address you access in the kernel may or may
> not be a logical address, i.e. mapped at that moment, and you would
> have to cope with this?

Well, because you'd need to cope with large parts of your memory _not_ being mapped: needing to map them to do anything with them, maybe having to wait for space to do so, and the resulting complexity. Memory management is already inherently complex, and adding more just gets you a system where everyone's afraid to touch anything, since the minute you do something at A, something else breaks way over there in corner W. Having all of memory permanently mapped is certainly simpler.

> This was another issue I had in mind. When you adjust the virtual
> memory area for applications, how does it affect applications already
> compiled and linked for a particular address? You could surely mmap
> files or use relocatable shared libraries, but perhaps you'd need to
> recompile non-relocatable executables?

Well, perhaps, but I doubt there are many applications around that care too much if you lower it slightly. It's easy to write a program that does (just have it access 3G - 1 and see it fault when that's now kernel space) but normally, programs don't. I saw the -ck patchset warn about VMware, which would seem to be about the only type of thing that really cares.
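
The sort of program I mean (illustrative only; it assumes a 32-bit i386 binary and that, as is traditional, the stack/environment sit right below the 3G boundary, so the read succeeds under a 3G/1G split and faults once that address has become kernel space):

#include <stdio.h>

int main(void)
{
        volatile char *p = (volatile char *)0xBFFFFFFFUL;  /* 3G - 1 */
        char c = *p;            /* SIGSEGV here if user space now ends below 3G */

        printf("read %d from just below 3G\n", c);
        return 0;
}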

> Finally, for such a 1GB machine where you mapped the total physical
> memory as logical kernel addresses, would a vmalloc call act like a
> kmalloc call, in the sense that it wouldn't need any page-table
> reorganisation but would return mapped addresses immediately?

No, the point of vmalloc() is that it gets you a region contiguous in _virtual_ space, but not (necessarily) in physical space. kmalloc() always gets you physically contiguous memory, and even with it all mapped, memory may still be fragmented enough that you can't get that, same as currently.
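
As a purely illustrative kernel-code sketch of that difference (function name and sizes are mine):

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static int contig_example(void)
{
        void *k = kmalloc(64 * 1024, GFP_KERNEL);  /* physically contiguous */
        void *v = vmalloc(16 << 20);               /* only virtually contiguous;
                                                      page tables are set up for it */
        if (!k || !v) {
                kfree(k);       /* both are safe to call with NULL */
                vfree(v);
                return -ENOMEM;
        }

        /* ... use the buffers ... */

        kfree(k);
        vfree(v);
        return 0;
}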

Rene.

--
Kernelnewbies: Help each other learn about the Linux kernel.
Archive:       http://mail.nl.linux.org/kernelnewbies/
FAQ:           http://kernelnewbies.org/faq/

