On Wed, Jul 24, 2019 at 09:47:24PM +0200, David Hildenbrand wrote:
> On 24.07.19 21:31, Michael S. Tsirkin wrote:
> > On Wed, Jul 24, 2019 at 08:41:33PM +0200, David Hildenbrand wrote:
> >> On 24.07.19 20:40, Nitesh Narayan Lal wrote:
> >>>
> >>> On 7/24/19 12:54 PM, Alexander Duyck wrote:
> >>>> This series provides an asynchronous means of hinting to a hypervisor
> >>>> that a guest page is no longer in use and can have the data associated
> >>>> with it dropped. To do this I have implemented functionality that
> >>>> allows for what I am referring to as page hinting.
> >>>>
> >>>> The functionality for this is fairly simple. When enabled it will
> >>>> allocate statistics to track the number of hinted pages in a given
> >>>> free area. When the number of free pages exceeds this value plus a
> >>>> high water value, currently 32,
> >>> Shouldn't we configure this to a lower number such as 16?
> >>>> it will begin performing page hinting, which consists of pulling
> >>>> pages off of the free list and placing them into a scatterlist. The
> >>>> scatterlist is then given to the page hinting device and it will
> >>>> perform the required action to make the pages "hinted"; in the case
> >>>> of virtio-balloon this results in the pages being madvised as
> >>>> MADV_DONTNEED and as such they are forced out of the guest. After
> >>>> this they are placed back on the free list, and an additional bit is
> >>>> added if they are not merged, indicating that they are a hinted buddy
> >>>> page instead of a standard buddy page. The cycle then repeats with
> >>>> additional non-hinted pages being pulled until the free areas all
> >>>> consist of hinted pages.
> >>>>
> >>>> I am leaving a number of things hard-coded, such as limiting the
> >>>> lowest order processed to PAGEBLOCK_ORDER,
> >>> Have you considered making this option configurable at compile time?
> >>>> and have left it up to the guest to determine what the limit is on
> >>>> how many pages it wants to allocate to process the hints.
> >>> It might make sense to set the number of pages to be hinted at a time
> >>> from the hypervisor.
> >>>>
> >>>> My primary testing has just been to verify the memory is being freed
> >>>> after allocation by running memhog 79g on an 80g guest and watching
> >>>> the total free memory via /proc/meminfo on the host. With this I have
> >>>> verified most of the memory is freed after each iteration. As far as
> >>>> performance I have been mainly focusing on the
> >>>> will-it-scale/page_fault1 test running with 16 vcpus. With that I
> >>>> have seen at most a 2% difference between the base kernel without
> >>>> these patches and the patches with virtio-balloon disabled. With the
> >>>> patches and virtio-balloon enabled with hinting, the results largely
> >>>> depend on the host kernel. On a 3.10 RHEL kernel I saw up to a 2%
> >>>> drop in performance as I approached 16 threads,
> >>> I think this is acceptable.
> >>>> however on the latest linux-next kernel I saw roughly a 4% to 5%
> >>>> improvement in performance for all tests with 8 or more threads.
> >>> Do you mean that with your patches the will-it-scale/page_fault1
> >>> numbers were better by 4-5% over an unmodified kernel?
> >>>> I believe the difference seen is due to the overhead for faulting
> >>>> pages back into the guest and zeroing of memory.
> >>> It may also make sense to test these patches with netperf to observe
> >>> how much of a performance drop it is introducing.
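The trigger described above reduces to a simple per-free-area check. A minimal
sketch, assuming a per-free-area counter of already-hinted pages; the names are
illustrative and not taken from the actual patches:

#include <stdbool.h>

#define HINTING_HIGH_WATER	32	/* the "high water value, currently 32" */

struct free_area_stats {
	unsigned long nr_free;		/* total free pages in this free area */
	unsigned long nr_hinted;	/* pages already hinted to the host */
};

/* Start hinting once free pages exceed hinted pages plus the high water mark. */
static bool hinting_should_run(const struct free_area_stats *fa)
{
	return fa->nr_free > fa->nr_hinted + HINTING_HIGH_WATER;
}

Once this fires, the patches pull non-hinted pages into a scatterlist, hand it
to the hinting device, and return the pages to the free list marked as hinted.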
> >>>> Patch 4 is a bit on the large side at about 600 lines of change,
> >>>> however I really didn't see a good way to break it up since each
> >>>> piece feeds into the next. So I couldn't add the statistics by
> >>>> themselves as it didn't really make sense to add them without
> >>>> something that will either read or increment/decrement them, or add
> >>>> the Hinted state without something that would set/unset it. As such
> >>>> I just ended up adding the entire thing as one patch. It makes it a
> >>>> bit bigger but avoids the issues in the previous set where I was
> >>>> referencing things before they had been added.
> >>>>
> >>>> Changes from the RFC:
> >>>> https://lore.kernel.org/lkml/20190530215223.13974.22445.stgit@localhost.localdomain/
> >>>> Moved aeration requested flag out of aerator and into zone->flags.
> >>>> Moved boundary out of free_area and into local variables for aeration.
> >>>> Moved aeration cycle out of interrupt and into workqueue.
> >>>> Left nr_free as total pages instead of splitting it between raw and aerated.
> >>>> Combined size and physical address values in virtio ring into one 64b value.
> >>>>
> >>>> Changes from v1:
> >>>> https://lore.kernel.org/lkml/20190619222922.1231.27432.stgit@localhost.localdomain/
> >>>> Dropped "waste page treatment" in favor of "page hinting"
> >>> We may still have to try and find a better name for the virtio-balloon
> >>> side changes, as "FREE_PAGE_HINT" and "PAGE_HINTING" are still
> >>> confusing.
> >>
> >> We should have named that free page reporting, but that train has
> >> already left.
> >
> > I think VIRTIO_BALLOON_F_FREE_PAGE_HINT is different and arguably
> > actually does provide hints.
> 
> I guess it depends on the point of view (e.g., getting all free pages
> feels more like a report). But I could also live with using the term
> reporting in this context.
> 
> We could go ahead and name it all "page reporting"; that would also work
> for me.

So there are actually three differences between the mechanisms:

1. VIRTIO_BALLOON_F_FREE_PAGE_HINT sometimes reports pages which might no
   longer be on the free list (with subtle limitations which sometimes
   still allow the hypervisor to discard the pages)
2. VIRTIO_BALLOON_F_FREE_PAGE_HINT starts reporting when requested by the
   host
3. VIRTIO_BALLOON_F_FREE_PAGE_HINT is not incremental: each request by the
   host reports all free memory

By comparison, the proposed patches:
- always report only actually free pages
- report at a random time
- report incrementally

> --
> 
> Thanks,
> 
> David / dhildenb
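For reference, the host-side "discard" that both mechanisms ultimately rely on
amounts to an madvise(MADV_DONTNEED) on the host virtual range backing the
reported guest pages. A minimal sketch, assuming the VMM has already translated
the guest physical address to a host virtual address; the helper name is made
up for illustration and is not QEMU's actual code:

#include <sys/mman.h>
#include <stddef.h>

/* Illustrative only: drop the host memory backing one reported guest range.
 * For anonymous memory the guest sees zero-filled pages on its next access,
 * which is the refault/zeroing cost mentioned in the cover letter. */
static int discard_hinted_range(void *hva, size_t len)
{
	return madvise(hva, len, MADV_DONTNEED);
}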