The patch titled
     Subject: mm/page_alloc: add __free_pages() documentation
has been removed from the -mm tree.  Its filename was
     mm-page_alloc-add-__free_pages-documentation.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Subject: mm/page_alloc: add __free_pages() documentation

Provide some guidance towards when this might not be the right interface
to use.

Link: https://lkml.kernel.org/r/20201027025523.3235-1-willy@xxxxxxxxxxxxx
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Reviewed-by: William Kucharski <william.kucharski@xxxxxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
Reviewed-by: Mike Rapoport <rppt@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |   20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

--- a/mm/page_alloc.c~mm-page_alloc-add-__free_pages-documentation
+++ a/mm/page_alloc.c
@@ -5043,6 +5043,26 @@ static inline void free_the_page(struct
 		__free_pages_ok(page, order, FPI_NONE);
 }
 
+/**
+ * __free_pages - Free pages allocated with alloc_pages().
+ * @page: The page pointer returned from alloc_pages().
+ * @order: The order of the allocation.
+ *
+ * This function can free multi-page allocations that are not compound
+ * pages.  It does not check that the @order passed in matches that of
+ * the allocation, so it is easy to leak memory.  Freeing more memory
+ * than was allocated will probably emit a warning.
+ *
+ * If the last reference to this page is speculative, it will be released
+ * by put_page() which only frees the first page of a non-compound
+ * allocation.  To prevent the remaining pages from being leaked, we free
+ * the subsequent pages here.  If you want to use the page's reference
+ * count to decide when to free the allocation, you should allocate a
+ * compound page, and use put_page() instead of __free_pages().
+ *
+ * Context: May be called in interrupt context or while holding a normal
+ * spinlock, but not in NMI context or while holding a raw spinlock.
+ */
 void __free_pages(struct page *page, unsigned int order)
 {
 	if (put_page_testzero(page))
_

Patches currently in -mm which might be from willy@xxxxxxxxxxxxx are

mm-make-pagecache-tagged-lookups-return-only-head-pages.patch
mm-shmem-use-pagevec_lookup-in-shmem_unlock_mapping.patch
mm-swap-optimise-get_shadow_from_swap_cache.patch
mm-add-fgp_entry.patch
mm-filemap-rename-find_get_entry-to-mapping_get_entry.patch
mm-filemap-add-helper-for-finding-pages.patch
mm-filemap-add-helper-for-finding-pages-fix.patch
mm-filemap-add-mapping_seek_hole_data.patch
mm-filemap-add-mapping_seek_hole_data-fix.patch
iomap-use-mapping_seek_hole_data.patch
mm-add-and-use-find_lock_entries.patch
mm-add-and-use-find_lock_entries-fix.patch
mm-add-an-end-parameter-to-find_get_entries.patch
mm-add-an-end-parameter-to-pagevec_lookup_entries.patch
mm-remove-nr_entries-parameter-from-pagevec_lookup_entries.patch
mm-pass-pvec-directly-to-find_get_entries.patch
mm-remove-pagevec_lookup_entries.patch
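
To make the distinction drawn by the new kernel-doc concrete, here is a
minimal sketch that is not part of the patch itself; the demo_* function
names are invented for illustration.  It contrasts a non-compound
allocation, where the caller must remember @order and free with
__free_pages(), against a compound allocation, where the reference count
via put_page() decides when the whole allocation is freed:

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Non-compound allocation: nothing in the page records the order, so the
 * caller must pass the same @order back to __free_pages().
 */
static void demo_noncompound(unsigned int order)
{
	struct page *page = alloc_pages(GFP_KERNEL, order);

	if (!page)
		return;
	/* ... use page_address(page) ... */
	__free_pages(page, order);	/* a mismatched order leaks or warns */
}

/*
 * Compound allocation: __GFP_COMP records the order in the page, so the
 * final put_page() frees all 1 << order pages and the reference count can
 * safely decide when that happens.
 */
static void demo_compound(unsigned int order)
{
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_COMP, order);

	if (!page)
		return;
	/* ... other code may get_page()/put_page() this page ... */
	put_page(page);
}

The compound variant is what the comment steers refcount-based callers
towards: because the order is stored in the page itself, a speculative
final reference dropped by put_page() cannot leak the tail pages.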