Dear KVM developers,

I have some questions about how the KVM hypervisor requests and allocates physical pages for the VM. I am using kernel version 3.2.14.

I run a microbenchmark in the VM which declares an array of a certain size and then assigns a value to every element of the array. The resulting page faults are captured by the hypervisor, which then allocates the machine physical pages. Depending on the array size, I get two different trace results. (A simplified sketch of the benchmark is included at the end of this mail.)

(1) When the array size is 10 MB or 20 MB, the trace file contains page allocation events for the qemu-kvm process such as:

qemu-kvm-14402 [004] 1912063.581683: kmalloc_node: call_site=ffffffff813df54a ptr=ffff880779a42800 bytes_req=576 bytes_alloc=1024 gfp_flags=GFP_KERNEL|GFP_REPEAT node=-1
qemu-kvm-14402 [004] 1912063.581701: kmem_cache_alloc_node: call_site=ffffffff813e302c ptr=ffff8800229ff500 bytes_req=256 bytes_alloc=256 gfp_flags=GFP_KERNEL|GFP_REPEAT node=-1
qemu-kvm-14402 [004] 1912063.581701: kmalloc_node: call_site=ffffffff813df54a ptr=ffff880779a42800 bytes_req=576 bytes_alloc=1024 gfp_flags=GFP_KERNEL|GFP_REPEAT node=-1
qemu-kvm-14402 [004] 1912063.581710: kmem_cache_alloc_node: call_site=ffffffff813e302c ptr=ffff8800229ff800 bytes_req=256 bytes_alloc=256 gfp_flags=GFP_KERNEL|GFP_REPEAT node=-1
qemu-kvm-14402 [004] 1912063.581710: kmalloc_node: call_site=ffffffff813df54a ptr=ffff880779a47800 bytes_req=640 bytes_alloc=1024 gfp_flags=GFP_KERNEL|GFP_REPEAT node=-1
qemu-kvm-14402 [004] 1912063.581728: kmem_cache_alloc_node: call_site=ffffffff813e302c ptr=ffff8800229ffe00 bytes_req=256 bytes_alloc=256 gfp_flags=GFP_KERNEL|GFP_REPEAT
... ...

(2) When the array size is 40 MB, the trace file contains page allocation events such as:

qemu-kvm-14450 [005] 1911006.440538: mm_page_alloc: page=ffffea0002570f40 pfn=613437 order=0 migratetype=2 gfp_flags=GFP_HIGHUSER_MOVABLE|GFP_ZERO
qemu-kvm-14450 [005] 1911006.440542: mm_page_alloc: page=ffffea0002577480 pfn=613842 order=0 migratetype=2 gfp_flags=GFP_HIGHUSER_MOVABLE|GFP_ZERO
qemu-kvm-14450 [005] 1911006.440545: mm_page_alloc: page=ffffea000070a3c0 pfn=115343 order=0 migratetype=2 gfp_flags=GFP_HIGHUSER_MOVABLE|GFP_ZERO
qemu-kvm-14450 [005] 1911006.440549: mm_page_alloc: page=ffffea0001a9a500 pfn=435860 order=0 migratetype=2 gfp_flags=GFP_HIGHUSER_MOVABLE|GFP_ZERO
qemu-kvm-14450 [005] 1911006.440552: mm_page_alloc: page=ffffea00016f7e80 pfn=376314 order=0 migratetype=2 gfp_flags=GFP_HIGHUSER_MOVABLE|GFP_ZERO
qemu-kvm-14450 [005] 1911006.440556: mm_page_alloc: page=ffffea0001a9d700 pfn=436060 order=0 migratetype=2 gfp_flags=GFP_HIGHUSER_MOVABLE|GFP_ZERO
qemu-kvm-14450 [005] 1911006.440559: mm_page_alloc: page=ffffea0002576880 pfn=613794 order=0 migratetype=2 gfp_flags=GFP_HIGHUSER_MOVABLE|GFP_ZERO
qemu-kvm-14450 [005] 1911006.440563: mm_page_alloc: page=ffffea00016f7b40 pfn=376301 order=0 migratetype=2 gfp_flags=GFP_HIGHUSER_MOVABLE|GFP_ZERO
qemu-kvm-14450 [005] 1911006.440569: mm_page_alloc: page=ffffea00016f7f80 pfn=376318 order=0 migratetype=2 gfp_flags=GFP_HIGHUSER_MOVABLE|GFP_ZERO
....

----------------------------------------------------------------------------------------------

When the size is 10 MB or 20 MB, it looks like KVM uses kmem_cache_alloc_node and kmalloc_node to allocate physical pages. However, when the size is 40 MB, the KVM hypervisor uses mm_page_alloc to allocate physical pages. The former is based on the slab allocator, while the latter allocates directly from the buddy allocator. So what heuristic does KVM use to decide when to allocate from the slab allocator and when to allocate directly from the buddy allocator?
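For reference, here is a simplified sketch of the guest microbenchmark (the real program is essentially the same: allocate the array, then write to every element so each page is touched and faulted in):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    /* Array size in MB, e.g. 10, 20 or 40. */
    size_t mb = (argc > 1) ? strtoul(argv[1], NULL, 10) : 10;
    size_t size = mb * 1024 * 1024;

    char *array = malloc(size);
    if (!array) {
        perror("malloc");
        return 1;
    }

    /* Writing every byte forces the guest kernel to fault in each
     * page, which in turn should make KVM allocate machine physical
     * pages on the host -- this is what the traces above capture. */
    memset(array, 0x5a, size);

    free(array);
    return 0;
}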
Or is there anything wrong with my trace file?

Thanks in advance.

- Hui