Physical memory weirdness!!

Hello :),

I am trying to write a module that will visit each process on the
system, grab its mm_struct, and then walk through ALL of the process's
VMAs.
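
In rough terms the traversal looks like this (a minimal sketch against a
2.6-style kernel where the VMAs hang off mm->mmap via vm_next; the
task-list locking is left out to keep it short, and dump_vma_pfns() is
just a placeholder name for my own PFN-dumping code):

#include <linux/module.h>
#include <linux/sched.h>
#include <linux/mm.h>

/* placeholder: resolves and dumps the PFN range of one VMA */
static void dump_vma_pfns(struct mm_struct *mm,
                          unsigned long start, unsigned long end);

static void walk_all_processes(void)
{
        struct task_struct *task;

        for_each_process(task) {
                /* get_task_mm() returns NULL for kernel threads and
                 * pins the mm so it cannot go away under us. */
                struct mm_struct *mm = get_task_mm(task);
                struct vm_area_struct *vma;

                if (!mm)
                        continue;

                down_read(&mm->mmap_sem);
                for (vma = mm->mmap; vma; vma = vma->vm_next)
                        dump_vma_pfns(mm, vma->vm_start, vma->vm_end);
                up_read(&mm->mmap_sem);

                mmput(mm);
        }
}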

For each VMA, the module will look at the start virtual address and the
end virtual address.  It will then (using the page tables) translate the
two virtual addresses into the physical PFNs (page frame numbers) where
the actual data lives.  The module will then dump the spanned page frame
range to proc via the seq_file interface.
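
The virtual-to-physical step is just a walk down the page tables; per
address it looks roughly like this (a sketch using the standard 2.6-era
pgd/pud/pmd/pte helpers, normal-sized pages assumed; virt_to_pfn_user()
is my own name for it):

#include <linux/mm.h>
#include <asm/pgtable.h>

/* Resolve one user virtual address to its PFN.  Returns 0 on success,
 * -1 if any level is missing or the PTE is not present (which is where
 * the "We found a not present pte" lines below come from). */
static int virt_to_pfn_user(struct mm_struct *mm, unsigned long addr,
                            unsigned long *pfn)
{
        pgd_t *pgd;
        pud_t *pud;
        pmd_t *pmd;
        pte_t *pte;

        pgd = pgd_offset(mm, addr);
        if (pgd_none(*pgd) || pgd_bad(*pgd))
                return -1;

        pud = pud_offset(pgd, addr);
        if (pud_none(*pud) || pud_bad(*pud))
                return -1;

        pmd = pmd_offset(pud, addr);
        if (pmd_none(*pmd) || pmd_bad(*pmd))
                return -1;

        pte = pte_offset_map(pmd, addr);
        if (!pte_present(*pte)) {
                pte_unmap(pte);
                return -1;
        }
        *pfn = pte_pfn(*pte);
        pte_unmap(pte);
        return 0;
}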

Here is a small chunk of output from this module.  My questions are:

1) Notice that for process 1, the VMAs' PFN ranges start at a higher
number than they end.  For process 970 it's exactly the opposite.  Is
this normal?  Would the kernel map memory in such a way that the
virtual addresses count UP but the corresponding physical addresses
count DOWN?

2) Is it possible that a VMA is not contiguous in physical memory?  If
so, when does this happen, and does it happen often?  And is there any
easy way (without comparing all addresses in the VMA) to discover this?

3) Why is it that 'pmap 1' shows me that process 1 is only using 1456K
of memory, when the 2nd VMA of process 1 alone is (623 pages * 4096
bytes) = about 2.5 megabytes?  If I add up all of the VMAs, they are
significantly larger than what the pmap output says this process is
using.


The output of 'pmap 1' is this:

08048000     28K r-x--  /init
0804f000      4K rw---  /init
08050000    132K rw---    [ anon ]
b7eb6000      4K rw---    [ anon ]
b7eb7000   1080K r-x--  /libc-2.3.4.so
b7fc5000      4K -----  /libc-2.3.4.so
b7fc6000      4K r----  /libc-2.3.4.so
b7fc7000     12K rw---  /libc-2.3.4.so
b7fca000      8K rw---    [ anon ]
b7feb000     84K r-x--  /ld-2.3.4.so
b8000000      8K rw---  /ld-2.3.4.so
bffeb000     84K rw---    [ stack ]
ffffe000      4K -----    [ anon ]
 total     1456K

------------------------
MODULE OUTPUT
------------------------

PROCESS 1
vma: pfn_range [393027 - 392994] size=33
vma: pfn_range [392994 - 392371] size=623
# We found a not present pte
vma: pfn_range [392826 - 392736] size=90
# We found a not present pte
# We found a not present pte
vma: pfn_range [392825 - 392831] size=6
vma: pfn_range [392831 - 392820] size=11
# We found a not present pte
vma: pfn_range [393097 - 393026] size=71
# We found a not present pte
# We found a not present pte
PROCESS 970
vma: pfn_range [391789 - 391870] size=81
vma: pfn_range [391870 - 392600] size=730
# We found a not present pte
vma: pfn_range [391855 - 392736] size=881
# We found a not present pte
# We found a not present pte
vma: pfn_range [391792 - 391854] size=62
vma: pfn_range [391854 - 391795] size=59
# We found a not present pte
vma: pfn_range [393097 - 391850] size=1247
# We found a not present pte
# We found a not present pte


-- 
Jason J. Herne <hernejj@xxxxxxxxxxxx>


