This patch adds two counters: regular_page_mapped and huge_page_mapped. regular_page_mapped is incremented when a page of the smallest granularity is mapped, which is usually a 4k, 16k or 64k page. huge_page_mapped is incremented when a huge page of any size other than the smallest granularity is mapped. These counters only count the pages mapped for guest data; they do not count the pages/blocks allocated for the page tables themselves, as I don't see a case where recording those would be needed.

I can see two use cases for these counters:
- We can detect memory pressure in the host when the guest gets regular pages instead of huge ones.
- They may help detect abnormal memory usage, such as recurring allocations long after the kernel and the first few programs have started.

Together with the previous patch adding stage2_abort_exit, they have the added benefit of identifying the second main cause of stage 2 page faults (the other being MMIO accesses).

To test this patch, I started a guest VM and monitored the page allocations. By default it allocates only huge pages. I then disabled huge pages with:

  echo never > /sys/kernel/mm/transparent_hugepage/enabled

After that, starting the VM no longer allocates any huge pages, only regular ones. I can't log into the guest because it doesn't recognize my keyboard, so to work around that I added a command to the init script that needs some memory:

  cat /dev/zero | head -c 1000m | tail

This takes 1 GiB of memory before finishing. From memory, it allocated 525 or so huge pages, which is close to the 512 (1 GiB / 2 MiB) I would expect with 2 MB pages.

I also checked the relation between stage 2 exits, MMIO exits and allocations. MMIO exits plus allocations account for almost all the stage 2 exits, as expected. During the kernel boot there were only about 20 exits that were neither an MMIO access nor an allocation. I did not look into what they were, but they could be a memory permission relaxation or the resizing of a page.

My main concern here is the case where we replace a page/block with another one or resize a block. I don't fully understand that mechanism yet, so I don't know whether it should be counted as an allocation or not. For now I don't account for it.
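With the patch applied, the two counters show up in debugfs next to the existing KVM stats. A minimal way to watch them from the host, assuming the usual /sys/kernel/debug/kvm layout (one aggregate file per stat, plus one <pid>-<fd> directory per running VM, whose exact name depends on the VM process):

  # Aggregate values over all VMs:
  cat /sys/kernel/debug/kvm/regular_page_mapped
  cat /sys/kernel/debug/kvm/huge_page_mapped

  # Per-VM values (the <pid>-<fd> directory names vary):
  grep . /sys/kernel/debug/kvm/*-*/regular_page_mapped \
         /sys/kernel/debug/kvm/*-*/huge_page_mapped

In the 1 GiB test above, one would expect huge_page_mapped to grow by roughly 512 with transparent hugepages enabled, and regular_page_mapped to grow instead once they are disabled.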
Signed-off-by: Yoan Picchi <yoan.picchi@xxxxxxx>
---
 arch/arm64/include/asm/kvm_host.h | 2 ++
 arch/arm64/kvm/guest.c            | 2 ++
 arch/arm64/kvm/hyp/pgtable.c      | 5 +++++
 3 files changed, 9 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 02891ce94..8f9d27571 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -547,6 +547,8 @@ static inline bool __vcpu_write_sys_reg_to_cpu(u64 val, int reg)
 
 struct kvm_vm_stat {
 	ulong remote_tlb_flush;
+	ulong regular_page_mapped;
+	ulong huge_page_mapped;
 };
 
 struct kvm_vcpu_stat {
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 82a4b6275..41316b30e 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -42,6 +42,8 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	VCPU_STAT("exits", exits),
 	VCPU_STAT("halt_poll_success_ns", halt_poll_success_ns),
 	VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
+	VM_STAT("regular_page_mapped", regular_page_mapped),
+	VM_STAT("huge_page_mapped", huge_page_mapped),
 	{ NULL }
 };
 
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 4d177ce1d..2aba2b636 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -498,6 +498,11 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 	smp_store_release(ptep, new);
 	get_page(page);
 	data->phys += granule;
+	if (level == KVM_PGTABLE_MAX_LEVELS - 1)
+		data->mmu->kvm->stat.regular_page_mapped++;
+	else
+		data->mmu->kvm->stat.huge_page_mapped++;
+
 	return 0;
 }
-- 
2.17.1