On Thu, Jan 23, 2020 at 08:26:39AM -0800, Alexander Duyck wrote:
> On Thu, 2020-01-23 at 11:20 +0100, Alexander Graf wrote:
> > Hi Alex,
> > 
> > On 22.01.20 18:43, Alexander Duyck wrote:
> > > This series provides an asynchronous means of reporting free guest
> > > pages to a hypervisor so that the memory associated with those pages
> > > can be dropped and reused by other processes and/or guests on the
> > > host. Using this it is possible to avoid unnecessary I/O to disk and
> > > greatly improve performance in the case of memory overcommit on the
> > > host.
> > > 
> > > When enabled we will be performing a scan of free memory every 2
> > > seconds while pages of sufficiently high order are being freed. In
> > > each pass at least one sixteenth of each free list will be reported.
> > > By doing this we avoid racing against other threads that may be
> > > causing a high amount of memory churn.
> > > 
> > > The lowest page order currently scanned when reporting pages is
> > > pageblock_order, so that this feature will not interfere with the
> > > use of Transparent Huge Pages in the case of virtualization.
> > > 
> > > Currently this is only in use by virtio-balloon, however there is
> > > the hope that at some point in the future other hypervisors might be
> > > able to make use of it. In the virtio-balloon/QEMU implementation
> > > the hypervisor is currently using MADV_DONTNEED to indicate to the
> > > host kernel that the page is currently free. It will be zeroed and
> > > faulted back into the guest the next time the page is accessed.
> > > 
> > > To track if a page is reported or not the Uptodate flag was
> > > repurposed and used as a Reported flag for Buddy pages. We walk
> > > through the free list isolating pages and adding them to the
> > > scatterlist until we either encounter the end of the list, have
> > > processed as many pages as were listed in nr_free prior to us
> > > starting, or have filled the scatterlist with pages to be reported.
> > > If we fill the scatterlist before we reach the end of the list we
> > > rotate the list so that the first unreported page we encounter is
> > > moved to the head of the list, as that is where we will resume after
> > > we have freed the reported pages back into the tail of the list.
> > > 
> > > Below are the results from various benchmarks. I primarily focused
> > > on two tests. The first is the will-it-scale/page_fault2 test, and
> > > the other is a modified version of will-it-scale/page_fault1 that
> > > was enabled to use THP. I did this as it allows for better
> > > visibility into different parts of the memory subsystem. The guest
> > > is running with 32G of RAM on one node of an E5-2630 v3. The host
> > > has had some features such as CPU turbo disabled in the BIOS.
> > > 
> > > Test                  page_fault1 (THP)      page_fault2
> > > Name            tasks  Process Iter  STDEV    Process Iter  STDEV
> > > Baseline            1   1012402.50  0.14%      361855.25   0.81%
> > >                    16   8827457.25  0.09%     3282347.00   0.34%
> > > 
> > > Patches Applied     1   1007897.00  0.23%      361887.00   0.26%
> > >                    16   8784741.75  0.39%     3240669.25   0.48%
> > > 
> > > Patches Enabled     1   1010227.50  0.39%      359749.25   0.56%
> > >                    16   8756219.00  0.24%     3226608.75   0.97%
> > > 
> > > Patches Enabled     1   1050982.00  4.26%      357966.25   0.14%
> > >  page shuffle      16   8672601.25  0.49%     3223177.75   0.40%
> > > 
> > > Patches enabled     1   1003238.00  0.22%      360211.00   0.22%
> > >  shuffle w/ RFC    16   8767010.50  0.32%     3199874.00   0.71%
> > > 
> > > The results above are for a baseline with a linux-next-20191219
> > > kernel, that kernel with this patch set applied but page reporting
> > > disabled in virtio-balloon, the patches applied and page reporting
> > > fully enabled, the patches enabled with page shuffling enabled, and
> > > the patches applied with page shuffling enabled and an RFC patch
> > > that makes use of MADV_FREE in QEMU. These results include the
> > > deviation seen between the average value reported here versus the
> > > high and/or low value. I observed that during the test memory usage
> > > for the first three tests never dropped, whereas with the patches
> > > fully enabled the VM would drop to using only a few GB of the host's
> > > memory when switching from memhog to the page fault tests.
> > > 
> > > Any of the overhead visible with this patch set enabled seems due to
> > > page faults caused by accessing the reported pages and the host
> > > zeroing the page before giving it back to the guest. This overhead
> > > is much more visible when using THP than with standard 4K pages. In
> > > addition page shuffling seemed to increase the amount of faults
> > > generated due to an increase in memory churn. The overhead is
> > > reduced when using MADV_FREE as we can avoid the extra zeroing of
> > > the pages when they are reintroduced to the host, as can be seen
> > > when the RFC is applied with shuffling enabled.
> > > 
> > > The overall guest size is kept fairly small to only a few GB while
> > > the test is running. If the host memory were oversubscribed this
> > > patch set should result in a performance improvement as swapping
> > > memory in the host can be avoided.
> > 
> > I really like the approach overall. Voluntarily propagating free
> > memory from a guest to the host has been a sore point ever since KVM
> > was around. This solution looks like a very elegant way to do so.
> > 
> > The big piece I'm missing is the page cache. Linux will by default
> > try to keep the free list as small as it can in favor of page cache,
> > so most of the benefit of this patch set will be void in real world
> > scenarios.
> 
> Agreed. This is the next piece of this I plan to work on once this is
> accepted. For now the quick and dirty approach is to essentially make
> use of the /proc/sys/vm/drop_caches interface in the guest, by either
> putting it in a cronjob somewhere or running it after memory intensive
> workloads.
> 
> > Traditionally, this was solved by creating pressure from the host
> > through virtio-balloon: exactly the piece that this patch set gets
> > away with. I never liked "ballooning", because the host has very
> > limited visibility into the actual memory utility of its guests. So
> > leaving the decision on how much memory is actually needed at a given
> > point in time should ideally stay with the guest.
> > 
> > What would keep us from applying the page hinting approach to
> > inactive, clean page cache pages? With writeback in place as well, we
> > would slowly propagate pages from
> > 
> >   dirty -> clean -> clean, inactive -> free -> host owned
> > 
> > which gives a guest a natural path to give up "not important" memory.
> 
> I considered something similar. Basically one thought I had was to
> essentially look at putting together some sort of epoch. When the host
> is under memory pressure it would need to somehow notify the guest, and
> then the guest would start moving the epoch forward so that we start
> evicting pages out of the page cache while the host is under memory
> pressure.
> 
> > The big problem I see is that what I really want from a user's point
> > of view is a tuneable that says "Automatically free clean page cache
> > pages that were not accessed in the last X minutes". Otherwise we run
> > the risk of evicting page cache pages that are sometimes in use.
> > 
> > I have a hard time grasping the mm code well enough to understand how
> > hard that would be to implement though :).
> > 
> > 
> > Alex
> 
> Yeah, I am not exactly an expert on this either as I have only been
> working in the MM tree for about a year now.
> 
> I have submitted this as a topic for the LSF/MM summit[1] and I am
> hoping to get some feedback on the best way to apply proactive memory
> pressure as one of the subtopics, if it is selected.

I've been working on a proactive reclaim project that shrinks workloads
to their smallest, still healthy, memory footprint. We (FB) have a
similar problem with containers: in order to know how many workloads can
be safely combined on a host, we first need to know how much memory a
given workload truly requires - as opposed to how many pages it would
gobble up for one-off cache and cold anon regions if it had the whole
machine to itself.

This userspace tool uses cgroups and psi to adjust the memory limits of
workloads in a pressure feedback loop. It targets a minimal rate of
refaults/swapping/reclaim activity to identify the point where all the
cold pages have been evicted and we're *just* about to start eating into
warmer memory.

With SSDs, control over pressure is fine-grained enough that we can run
it on even highly latency-sensitive things like our web servers without
impacting response time meaningfully.

It harnesses the VM's existing LRU/clock algorithm to identify the pages
which are most likely to be cold, so the approach scales to large memory
sizes (256G+) with only minor CPU overhead.

https://github.com/facebookincubator/senpai

The same concept could be applicable to shrinking guests proactively in
virtualized environments?
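
To make the feedback-loop idea above a bit more concrete, here is a
minimal illustrative sketch (not the actual senpai code; the cgroup
path, target pressure, step size and interval are made-up example
values). It just shows how the cgroup v2 psi and memory.high interfaces
fit together in such a loop:

#!/usr/bin/env python3
# Illustrative sketch only, NOT the actual senpai code. Assumes cgroup v2
# is mounted at /sys/fs/cgroup and the workload runs in CGROUP below;
# all paths and tunables here are made-up example values.
import re
import time

CGROUP = "/sys/fs/cgroup/workload.slice"  # hypothetical workload cgroup
TARGET_PRESSURE = 0.1   # desired "some" avg10 memory pressure, in percent
ADJUST_STEP = 16 << 20  # move memory.high by 16M per interval
MIN_SIZE = 64 << 20     # never squeeze the limit below 64M
INTERVAL = 6            # seconds between adjustments

def some_avg10(cg):
    # memory.pressure lines look like:
    #   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
    with open(cg + "/memory.pressure") as f:
        for line in f:
            if line.startswith("some"):
                return float(re.search(r"avg10=([0-9.]+)", line).group(1))
    return 0.0

def current_usage(cg):
    with open(cg + "/memory.current") as f:
        return int(f.read())

def set_high(cg, value):
    with open(cg + "/memory.high", "w") as f:
        f.write(str(value))

while True:
    pressure = some_avg10(CGROUP)
    usage = current_usage(CGROUP)
    if pressure < TARGET_PRESSURE:
        # Little or no reclaim activity: squeeze the limit a bit further
        # to push out cold pages.
        set_high(CGROUP, max(usage - ADJUST_STEP, MIN_SIZE))
    else:
        # Pressure at or above target: back off so we stop eating into
        # warmer memory.
        set_high(CGROUP, usage + ADJUST_STEP)
    time.sleep(INTERVAL)

The point is simply that psi gives userspace a direct signal for how
much reclaim pain a limit adjustment is causing, so the loop can keep
the workload right at the edge where only cold pages get evicted.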