From: John Hubbard <jhubbard@xxxxxxxxxx>

Introduces put_user_page(), which simply calls put_page(). This
provides a way to update all get_user_pages*() callers, so that they
call put_user_page(), instead of put_page().

Also introduces put_user_pages(), and a few dirty/locked variations, as
a replacement for release_pages(), for the same reasons. These may be
used for subsequent performance improvements, via batching of pages to
be released.

This prepares for eventually fixing the problem described in [1], and
is following a plan listed in [2], [3], [4].

[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"

[2] https://lkml.kernel.org/r/20180709080554.21931-1-jhubbard@xxxxxxxxxx
    Proposed steps for fixing get_user_pages() + DMA problems.

[3] https://lkml.kernel.org/r/20180710082100.mkdwngdv5kkrcz6n@xxxxxxxxxxxxxx
    Bounce buffers (otherwise [2] is not really viable).

[4] https://lkml.kernel.org/r/20181003162115.GG24030@xxxxxxxxxxxxxx
    Follow-up discussions.

CC: Matthew Wilcox <willy@xxxxxxxxxxxxx>
CC: Michal Hocko <mhocko@xxxxxxxxxx>
CC: Christopher Lameter <cl@xxxxxxxxx>
CC: Jason Gunthorpe <jgg@xxxxxxxx>
CC: Dan Williams <dan.j.williams@xxxxxxxxx>
CC: Jan Kara <jack@xxxxxxx>
CC: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
CC: Jerome Glisse <jglisse@xxxxxxxxxx>
CC: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Signed-off-by: John Hubbard <jhubbard@xxxxxxxxxx>
---
 include/linux/mm.h | 42 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a61ebe8ad4ca..1a9aae7c659f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -137,6 +137,8 @@ extern int overcommit_ratio_handler(struct ctl_table *, int, void __user *,
 				    size_t *, loff_t *);
 extern int overcommit_kbytes_handler(struct ctl_table *, int, void __user *,
 				    size_t *, loff_t *);
+int set_page_dirty(struct page *page);
+int set_page_dirty_lock(struct page *page);
 
 #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
 
@@ -943,6 +945,44 @@ static inline void put_page(struct page *page)
 		__put_page(page);
 }
 
+/* Placeholder version, until all get_user_pages*() callers are updated. */
+static inline void put_user_page(struct page *page)
+{
+	put_page(page);
+}
+
+/* For get_user_pages*()-pinned pages, use these variants instead of
+ * release_pages():
+ */
+static inline void put_user_pages_dirty(struct page **pages,
+					unsigned long npages)
+{
+	while (npages) {
+		--npages;
+		set_page_dirty(pages[npages]);
+		put_user_page(pages[npages]);
+	}
+}
+
+static inline void put_user_pages_dirty_lock(struct page **pages,
+					     unsigned long npages)
+{
+	while (npages) {
+		--npages;
+		set_page_dirty_lock(pages[npages]);
+		put_user_page(pages[npages]);
+	}
+}
+
+static inline void put_user_pages(struct page **pages,
+				  unsigned long npages)
+{
+	while (npages) {
+		--npages;
+		put_user_page(pages[npages]);
+	}
+}
+
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define SECTION_IN_PAGE_FLAGS
 #endif
@@ -1534,8 +1574,6 @@ int redirty_page_for_writepage(struct writeback_control *wbc,
 void account_page_dirtied(struct page *page, struct address_space *mapping);
 void account_page_cleaned(struct page *page, struct address_space *mapping,
 			  struct bdi_writeback *wb);
-int set_page_dirty(struct page *page);
-int set_page_dirty_lock(struct page *page);
 void __cancel_dirty_page(struct page *page);
 static inline void cancel_dirty_page(struct page *page)
 {
--
2.19.0
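
As an illustration of the intended caller conversion (a sketch only, not
part of the patch): a get_user_pages() caller that currently dirties and
releases its pinned pages one at a time could switch to the new wrappers
roughly as follows. The example_unpin_*() functions are invented for this
example; only put_user_page(), put_user_pages_dirty_lock(), and
put_user_pages() come from this patch.

#include <linux/mm.h>

/* Before: gup-pinned pages are dirtied and released via put_page(). */
static void example_unpin_old(struct page **pages, unsigned long npages)
{
	unsigned long i;

	for (i = 0; i < npages; i++) {
		set_page_dirty_lock(pages[i]);
		put_page(pages[i]);
	}
}

/* After: the same release path goes through the gup-aware wrappers, so
 * that releases of pinned pages can later be tracked and batched.
 */
static void example_unpin_new(struct page **pages, unsigned long npages)
{
	put_user_pages_dirty_lock(pages, npages);
}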