On Wed, 2020-01-08 at 02:57 -0500, Nitesh Narayan Lal wrote:
> On 1/3/20 4:16 PM, Alexander Duyck wrote:

<snip>

> > Below are the results from various benchmarks. I primarily focused on
> > two tests. The first is the will-it-scale/page_fault2 test, and the
> > other is a modified version of will-it-scale/page_fault1 that was
> > enabled to use THP. I did this as it allows for better visibility into
> > different parts of the memory subsystem. The guest is running with 32G
> > of RAM on one node of a E5-2630 v3. The host has had some features
> > such as CPU turbo disabled in the BIOS.
> >
> > Test                     page_fault1 (THP)    page_fault2
> > Name            tasks    Process Iter  STDEV  Process Iter  STDEV
> > Baseline            1      1012402.50  0.14%     361855.25  0.81%
> >                    16      8827457.25  0.09%    3282347.00  0.34%
> >
> > Patches Applied     1      1007897.00  0.23%     361887.00  0.26%
> >                    16      8784741.75  0.39%    3240669.25  0.48%
> >
> > Patches Enabled     1      1010227.50  0.39%     359749.25  0.56%
> >                    16      8756219.00  0.24%    3226608.75  0.97%
> >
> > Patches Enabled     1      1050982.00  4.26%     357966.25  0.14%
> >  page shuffle      16      8672601.25  0.49%    3223177.75  0.40%
> >
> > Patches Enabled     1      1003238.00  0.22%     360211.00  0.22%
> >  shuffle w/ RFC    16      8767010.50  0.32%    3199874.00  0.71%
>
> Just to be sure that I understand your test setup correctly:
> - You have a 32GB guest with a single node, affined to a single node of
>   your host (E5-2630).
> - You have THP enabled and set to 'madvise' in both the host and the
>   guest.
> - On top of the default x86_64 config and the other virtio config
>   options, you have CONFIG_SLAB_FREELIST_RANDOM and
>   CONFIG_SHUFFLE_PAGE_ALLOCATOR enabled for the third observation
>   (Patches Enabled, page shuffle).
> Did I miss anything?

The only thing I think you overlooked is that CPU turbo was disabled in
the BIOS. Without that my numbers were much more unpredictable, as the
CPUs were turboing up and down and giving me inconsistent results.

Also, one thing I forgot to mention is that I had to modify the grub
kernel command line to include page_alloc.shuffle=Y so that the page
shuffling was actually active.
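For reference, enabling that boils down to something like the following
(the exact grub tooling and paths vary by distro, so treat this as a
sketch of the typical grub2 flow rather than the literal commands I
used):

    # The guest kernel needs CONFIG_SHUFFLE_PAGE_ALLOCATOR=y for the
    # parameter to do anything.
    grep CONFIG_SHUFFLE_PAGE_ALLOCATOR /boot/config-"$(uname -r)"

    # Append the parameter to GRUB_CMDLINE_LINUX in /etc/default/grub:
    #   GRUB_CMDLINE_LINUX="... page_alloc.shuffle=Y"
    # then regenerate the grub config and reboot.
    grub2-mkconfig -o /boot/grub2/grub.cfg
    reboot

    # After the reboot (as root), confirm shuffling actually took
    # effect; this should read back as Y.
    cat /sys/module/page_alloc/parameters/shuffle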
> Can you also remind me of the reason you skipped recording the threads
> count reported as part of the page_fault tests? Was it because you were
> observing different values with every fresh boot?

Mainly because the threads test gave me data that was all over the place
at the higher task counts, and because it doesn't scale as well as the
processes test case. The averages between the two worked out to be about
the same, but the standard deviation was maxing out at 7% for the
baseline and 8% for the patches-enabled case, while the difference in
the averages is still less than 1%. So, for example, the same data using
the threads values for Baseline vs Patches Enabled comes out as follows:

Test                     page_fault1 (THP)    page_fault2
Name            tasks     Thread Iter  STDEV   Thread Iter  STDEV
Baseline            1      1133900.25  0.24%     358395.25  0.30%
                   16      5848684.75  6.96%    2181989.00  1.69%

Patches Enabled     1      1132748.50  0.20%     356615.00  0.11%
                   16      5796647.00  8.38%    2160475.50  1.84%

> > The results above are for: a baseline with a linux-next-20191219
> > kernel; that kernel with this patch set applied but page reporting
> > disabled in virtio-balloon; the patches applied and page reporting
> > fully enabled; the patches enabled with page shuffling enabled; and
> > the patches applied with page shuffling enabled plus an RFC patch
> > that makes use of MADV_FREE in QEMU. These results include the
> > deviation seen between the average value reported here versus the
> > high and/or low value. I observed that during the test, memory usage
> > for the first three tests never dropped, whereas with the patches
> > fully enabled the VM would drop to using only a few GB of the host's
> > memory when switching from memhog to the page fault tests.
>
> Do you mean that in the latter case you run the page fault tests after
> memhog? If so, how much memory do you pass to memhog?

For every test I would run "memhog 32g" in the guest to make sure all
memory had been allocated at least once before running the page fault
tests. I was using that to make certain that page reporting was working
before running the test. That way the baseline gives more consistent
results, as we don't have to worry about there being any memory the
guest has yet to fault in.
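To make the sequence concrete, each run in the guest looks roughly like
this (runtest.py is how the stock will-it-scale harness is normally
driven; my THP version of page_fault1 is a local modification, so take
the exact invocations as illustrative rather than exact):

    # Touch all 32G of guest RAM once so that every page has been
    # faulted in and then freed, giving page reporting the chance to
    # hand the memory back to the host before measuring.
    memhog 32g

    # With the patches fully enabled, the guest's resident memory as
    # seen on the host should visibly drop at this point; in the other
    # configurations it stays flat. Then run the benchmarks.
    cd will-it-scale
    ./runtest.py page_fault2
    ./runtest.py page_fault1    # locally modified to use THP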