Query regarding memory accesses and page faults in a virtual machine using KVM and QEMU

Hello everyone,
I am using KVM; the guest OS is Ubuntu 14.04. The host is an Intel Core i5 machine, also running Ubuntu 14.04.
Background: I am trying to track the number of pages accessed by a VM within a certain period of time. To do this, I walk the host virtual address space belonging to the VM process and, for every page whose PTE is present and not protected (i.e., _PAGE_PROTNONE is not set), I mark the PTE as not present but protected, by clearing the _PAGE_PRESENT bit and setting the _PAGE_PROTNONE bit.
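For concreteness, a minimal sketch of that marking step, assuming an x86 host with a 3.13-era page-table layout (pgd -> pud -> pmd -> pte, as on Ubuntu 14.04); mark_pte_not_present is my name for the helper, not a kernel function:

#include <linux/mm.h>
#include <asm/pgtable.h>

static void mark_pte_not_present(struct mm_struct *mm, unsigned long addr)
{
        pgd_t *pgd;
        pud_t *pud;
        pmd_t *pmd;
        pte_t *pte;
        spinlock_t *ptl;

        pgd = pgd_offset(mm, addr);
        if (pgd_none(*pgd) || pgd_bad(*pgd))
                return;
        pud = pud_offset(pgd, addr);
        if (pud_none(*pud) || pud_bad(*pud))
                return;
        pmd = pmd_offset(pud, addr);
        if (pmd_none(*pmd) || pmd_bad(*pmd))
                return;

        pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
        if (pte_present(*pte) && !(pte_flags(*pte) & _PAGE_PROTNONE)) {
                /* present and not protected: clear PRESENT, set PROTNONE */
                pte_t entry = __pte((pte_val(*pte) & ~_PAGE_PRESENT)
                                    | _PAGE_PROTNONE);
                set_pte_at(mm, addr, pte, entry);
        }
        pte_unmap_unlock(pte, ptl);
}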
The idea is that whenever such a page is accessed, a fault must be taken because the present bit is clear. Inside the fault handler handle_pte_fault, I registered a kernel module function that resets the PTE bits for the faulting address. [Resetting means setting _PAGE_PRESENT back to 1 and clearing _PAGE_PROTNONE.]
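The reset path could look like the sketch below; tracker_restore_pte is a hypothetical name for that module callback, and it assumes the hook in handle_pte_fault passes it the mm, the faulting address, and a mapped PTE pointer with the page-table lock held:

#include <linux/mm.h>
#include <asm/pgtable.h>
#include <asm/tlbflush.h>

static void tracker_restore_pte(struct mm_struct *mm, unsigned long addr,
                                pte_t *pte)
{
        pte_t entry;

        if (!(pte_flags(*pte) & _PAGE_PROTNONE))
                return; /* not one of our marked pages */

        /* record the access, e.g. set a bit in a per-VM bitmap (not shown) */

        /* restore: set PRESENT back, clear PROTNONE */
        entry = __pte((pte_val(*pte) | _PAGE_PRESENT) & ~_PAGE_PROTNONE);
        set_pte_at(mm, addr, pte, entry);

        /* drop the stale translation for this one address on this CPU */
        __flush_tlb_one(addr);
}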
I am treating the VM like any other normal process on the host; concretely, the tracker finds the QEMU process by PID and walks its address space, as sketched below.
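A sketch of that tracker-side walk; tracker_mark_process is hypothetical, it reuses mark_pte_not_present from above, and the mmap_sem/vm_next fields match 3.13-era kernels:

#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/pid.h>
#include <asm/tlbflush.h>

/* Hypothetical: mark every mapped page of the QEMU/VM process. */
static void tracker_mark_process(pid_t pid)
{
        struct task_struct *task;
        struct mm_struct *mm;
        struct vm_area_struct *vma;
        unsigned long addr;

        rcu_read_lock();
        task = pid_task(find_vpid(pid), PIDTYPE_PID);
        if (task)
                get_task_struct(task);
        rcu_read_unlock();
        if (!task)
                return;

        mm = get_task_mm(task);
        put_task_struct(task);
        if (!mm)
                return;

        down_read(&mm->mmap_sem);
        for (vma = mm->mmap; vma; vma = vma->vm_next)
                for (addr = vma->vm_start; addr < vma->vm_end;
                     addr += PAGE_SIZE)
                        mark_pte_not_present(mm, addr);
        up_read(&mm->mmap_sem);
        mmput(mm);

        /* flush stale translations on all CPUs after marking */
        flush_tlb_all();
}

The final flush_tlb_all() here is the host-side flush mentioned under "Issue" below.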
Current Scenario: I am testing the above by running a sample test program inside the VM (a minimal version is sketched after this list) which does the following:
1) Mallocs 200 MB, which is 51200 4 KB pages.
2) Accesses all the pages to make sure they are brought into memory.
3) Sleeps for, say, 10 seconds to allow the tracker to mark the pages.
4) Accesses all the pages again [which is expected to generate faults].
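For reference, a minimal version of that guest test program might look like this (my reconstruction of the steps above, not the original code):

#include <stdlib.h>
#include <unistd.h>

#define NPAGES    51200UL       /* 200 MB / 4 KB */
#define PAGE_SIZE 4096UL

int main(void)
{
        unsigned long i;

        /* 1) malloc 200 MB = 51200 pages */
        char *buf = malloc(NPAGES * PAGE_SIZE);
        if (!buf)
                return 1;

        /* 2) touch every page so it is faulted into memory */
        for (i = 0; i < NPAGES; i++)
                buf[i * PAGE_SIZE] = 1;

        /* 3) give the host-side tracker time to mark the PTEs */
        sleep(10);

        /* 4) touch every page again; each access should now fault
              on the host */
        for (i = 0; i < NPAGES; i++)
                buf[i * PAGE_SIZE]++;

        free(buf);
        return 0;
}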
I ran this test process a couple of times. My tracker runs on the host.
Issue: My tracker module reports very few pages as accessed, which is in turn because it sees very few faults: around 350 faults where 51200 are expected. Note that I am also doing a tlb_flush_all on the host, after the pages are marked and before they are accessed, to avoid stale direct translations.
Note: I disabled the KSM and huge pages features.
I tested the tracker on a process running directly on the host, which malloced 900 MB and accessed all the pages, and there it correctly reports the number of pages accessed. The issue shows up only when tracking VMs.
Queries: 1) Are all the TLB entries flushed when tlb_flush_all is done, or are the VM's TLB entries tagged separately (for example by VPID on Intel hardware) so that they survive the flush, and is this causing the issue?
2) Can anyone please give some guidance as to what could be going wrong? Any comments on steps of the procedure that might be incorrect would also help, as would references; I am new to the QEMU/KVM memory-management area.
Thanks,
Ravali
