On 10/7/19 1:06 PM, Nitesh Narayan Lal wrote:
[...]
>> So what was the size of your guest? One thing that just occurred to me is
>> that you might be running a much smaller guest than I was.
> I am running a 30 GB guest.
>
>>>> If so I would have expected a much higher difference versus
>>>> baseline as zeroing/faulting the pages in the host gets expensive fairly
>>>> quick. What is the host kernel you are running your test on? I'm just
>>>> wondering if there is some additional overhead currently limiting your
>>>> setup. My host kernel was just the same kernel I was running in the guest,
>>>> just built without the patches applied.
>>> Right now I have a different host kernel. I can install the same kernel on
>>> the host as well and see if that changes anything.
>> The host kernel will have a fairly significant impact as I recall. For
>> example, running a stock CentOS kernel lowered the performance compared to
>> running a linux-next kernel. As a result the numbers looked better since
>> the overall baseline was lower to begin with, as the host OS was
>> introducing additional overhead.
> I see. In that case I will try installing the same guest kernel on the host
> as well.

As per your suggestion, I tried replacing the host kernel with an upstream
kernel without my patches, i.e., the host now runs a kernel built from the
upstream master branch (as of the Sept 23rd commit), while the guest runs the
same kernel for the no-hinting case and the same kernel plus my patches for
the page reporting case.

With the changes reported earlier on top of v12, I am not seeing any further
degradation (other than what I have previously reported).

To be sure that THP is actively used, I ran an experiment in which I changed
MEMSIZE in the page_fault test. After doing so, THP usage as checked via
/proc/meminfo also increased, as I expected.

In any case, if you find something else, please let me know and I will look
into it again.

I am still looking into your suggestion about cache line bouncing and will
reply to it if I have more questions.

[...]

--
Thanks
Nitesh
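
As a concrete illustration of the MEMSIZE experiment mentioned above, here is
a minimal sketch, assuming the page_fault test is a simple fault loop over a
MEMSIZE-byte anonymous mapping (the actual test source is not part of this
thread, so the mapping flags and the default size below are assumptions):

/*
 * Minimal sketch (not the actual test referenced above): map MEMSIZE
 * bytes of anonymous memory and touch one byte per page so every page
 * is faulted in. With THP enabled, a sufficiently large MEMSIZE lets
 * the kernel back the region with huge pages, which shows up as
 * AnonHugePages in /proc/meminfo.
 */
#include <stdio.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

#define MEMSIZE (1UL << 30)	/* 1 GB; assumed value, varied in the experiment */

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);
	char *buf = mmap(NULL, MEMSIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Touch every page; these are the faults the test measures. */
	for (size_t off = 0; off < MEMSIZE; off += page_size)
		buf[off] = 1;

	/* THP usage can then be checked with: grep AnonHugePages /proc/meminfo */
	munmap(buf, MEMSIZE);
	return 0;
}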