The patch titled
     Subject: mm: page_cache_add_speculative(): refactor out some code duplication
has been added to the -mm tree.  Its filename is
     mm-page_cache_add_speculative-refactor-out-some-code-duplication.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-page_cache_add_speculative-refactor-out-some-code-duplication.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-page_cache_add_speculative-refactor-out-some-code-duplication.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: john.hubbard@xxxxxxxxx
Subject: mm: page_cache_add_speculative(): refactor out some code duplication

This combines the common elements of these routines:

	page_cache_get_speculative()
	page_cache_add_speculative()

This was anticipated by the original author, as shown by the comment in
commit ce0ad7f095258 ("powerpc/mm: Lockless get_user_pages_fast() for
64-bit (v3)"):

    "Same as above, but add instead of inc (could just be merged)"

There is no intention to introduce any behavioral change, but there is a
small risk of that, due to slightly differing ways of expressing the
TINY_RCU and related configurations.

This also removes the VM_BUG_ON(in_interrupt()) that was in
page_cache_add_speculative(), but not in page_cache_get_speculative().
This provides slightly less detection of such bugs, but given that it was
only there on the "add" path anyway, we can likely do without it just
fine.

It also removes the VM_BUG_ON_PAGE(PageCompound(page) && page !=
compound_head(page), page) check that page_cache_add_speculative() had.

Link: http://lkml.kernel.org/r/20190206231016.22734-2-jhubbard@xxxxxxxxxx
Signed-off-by: John Hubbard <jhubbard@xxxxxxxxxx>
Reviewed-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
Cc: Dave Kleikamp <shaggy@xxxxxxxxxxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Jeff Layton <jlayton@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Nicholas Piggin <npiggin@xxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

--- a/include/linux/pagemap.h~mm-page_cache_add_speculative-refactor-out-some-code-duplication
+++ a/include/linux/pagemap.h
@@ -164,7 +164,7 @@ void release_pages(struct page **pages,
  * will find the page or it will not. Likewise, the old find_get_page could run
  * either before the insertion or afterwards, depending on timing.
  */
-static inline int page_cache_get_speculative(struct page *page)
+static inline int __page_cache_add_speculative(struct page *page, int count)
 {
 #ifdef CONFIG_TINY_RCU
 # ifdef CONFIG_PREEMPT_COUNT
@@ -180,10 +180,10 @@ static inline int page_cache_get_specula
 	 * SMP requires.
 	 */
 	VM_BUG_ON_PAGE(page_count(page) == 0, page);
-	page_ref_inc(page);
+	page_ref_add(page, count);
 
 #else
-	if (unlikely(!get_page_unless_zero(page))) {
+	if (unlikely(!page_ref_add_unless(page, count, 0))) {
 		/*
 		 * Either the page has been freed, or will be freed.
		 * In either case, retry here and the caller should
@@ -197,27 +197,14 @@ static inline int page_cache_get_specula
 	return 1;
 }
 
-/*
- * Same as above, but add instead of inc (could just be merged)
- */
-static inline int page_cache_add_speculative(struct page *page, int count)
+static inline int page_cache_get_speculative(struct page *page)
 {
-	VM_BUG_ON(in_interrupt());
-
-#if !defined(CONFIG_SMP) && defined(CONFIG_TREE_RCU)
-# ifdef CONFIG_PREEMPT_COUNT
-	VM_BUG_ON(!in_atomic() && !irqs_disabled());
-# endif
-	VM_BUG_ON_PAGE(page_count(page) == 0, page);
-	page_ref_add(page, count);
-
-#else
-	if (unlikely(!page_ref_add_unless(page, count, 0)))
-		return 0;
-#endif
-	VM_BUG_ON_PAGE(PageCompound(page) && page != compound_head(page), page);
+	return __page_cache_add_speculative(page, 1);
+}
 
-	return 1;
+static inline int page_cache_add_speculative(struct page *page, int count)
+{
+	return __page_cache_add_speculative(page, count);
 }
 
 #ifdef CONFIG_NUMA
_

Patches currently in -mm which might be from jhubbard@xxxxxxxxxx are

mm-page_cache_add_speculative-refactor-out-some-code-duplication.patch
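
For reference, below is a sketch of how the refactored functions read once
the hunks above are applied.  It is reconstructed from the diff, not copied
verbatim from pagemap.h: unchanged context that falls between the hunks
(the existing comment block in the TINY_RCU branch and the unchanged tail
of the function) is abbreviated with placeholder comments.

	static inline int __page_cache_add_speculative(struct page *page, int count)
	{
	#ifdef CONFIG_TINY_RCU
	# ifdef CONFIG_PREEMPT_COUNT
		VM_BUG_ON(!in_atomic() && !irqs_disabled());
	# endif
		/*
		 * (existing comment explaining why disabled preemption pins the
		 *  refcount on TINY_RCU is unchanged and elided here)
		 */
		VM_BUG_ON_PAGE(page_count(page) == 0, page);
		page_ref_add(page, count);	/* was page_ref_inc(page) */
	#else
		if (unlikely(!page_ref_add_unless(page, count, 0))) {
			/*
			 * was get_page_unless_zero(page); the page has been or
			 * will be freed, so the caller retries as before
			 */
			return 0;
		}
	#endif
		/* (unchanged tail of the old function elided) */
		return 1;
	}

	/* The old entry points become trivial wrappers: */
	static inline int page_cache_get_speculative(struct page *page)
	{
		return __page_cache_add_speculative(page, 1);
	}

	static inline int page_cache_add_speculative(struct page *page, int count)
	{
		return __page_cache_add_speculative(page, count);
	}

Note that the VM_BUG_ON(in_interrupt()) and the PageCompound() check from
the old "add" variant do not reappear anywhere, as described in the
changelog.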