On Thu, May 12, 2022 at 01:45:05PM -0700, Andrew Morton wrote:
>
> The patch titled
>      Subject: mm-fix-is_pinnable_page-against-on-cma-page-v5
> has been added to the -mm mm-unstable branch.  Its filename is
>      mm-fix-is_pinnable_page-against-on-cma-page-v5.patch
>
> This patch will shortly appear at
>      https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-fix-is_pinnable_page-against-on-cma-page-v5.patch
>
> This patch will later appear in the mm-unstable branch at
>      git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
>
> Before you just go and hit "reply", please:
>    a) Consider who else should be cc'ed
>    b) Prefer to cc a suitable mailing list as well
>    c) Ideally: find the original patch on the mailing list and do a
>       reply-to-all to that, adding suitable additional cc's
>
> *** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
>
> The -mm tree is included into linux-next via the mm-everything
> branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
> and is updated there every 2-3 working days
>
> ------------------------------------------------------
> From: Minchan Kim <minchan@xxxxxxxxxx>
> Subject: mm-fix-is_pinnable_page-against-on-cma-page-v5
>
> * clarification why we need READ_ONCE - Paul
> * Add a comment about READ_ONCE - John
>
> Link: https://lkml.kernel.org/r/20220512204143.3961150-1-minchan@xxxxxxxxxx
> Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
> Cc: "Paul E . McKenney" <paulmck@xxxxxxxxxx>
> Cc: John Hubbard <jhubbard@xxxxxxxxxx>
> Cc: David Hildenbrand <david@xxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
>
>  include/linux/mm.h |    9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> --- a/include/linux/mm.h~mm-fix-is_pinnable_page-against-on-cma-page-v5
> +++ a/include/linux/mm.h
> @@ -1627,13 +1627,14 @@ static inline bool is_pinnable_page(stru
>  {
>  #ifdef CONFIG_CMA
>  	/*
> -	 * use volatile to use local variable mt instead of
> -	 * refetching mt value.
> +	 * Defend against future compiler LTO features, or code refactoring
> +	 * that inlines the above function, by forcing a single read. Because,
> +	 * this routine races with set_pageblock_migratetype(), and we want to
> +	 * avoid reading zero, when actually one or the other flags was set.
>  	 */
> -	int __mt = get_pageblock_migratetype(page);

It causes a build failure. Could you pick this up instead?

https://lore.kernel.org/all/Yn10GkInyZNtqASa@xxxxxxxxxx/

Sorry for the confusion.
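
For reference, the single-read idea described in the quoted comment has roughly
the following shape. This is only an illustrative sketch, not the patch behind
the lore link above; the __mt/mt locals, the __READ_ONCE() on a local variable,
and the trailing is_zone_movable_page() check are assumptions made here for
illustration:

static inline bool is_pinnable_page(struct page *page)
{
#ifdef CONFIG_CMA
	/*
	 * Read the migratetype once and reuse that single value, so a
	 * racing set_pageblock_migratetype() cannot make two reads of
	 * the pageblock flags disagree (e.g. observe 0 while either
	 * MIGRATE_CMA or MIGRATE_ISOLATE was actually set).
	 */
	int __mt = get_pageblock_migratetype(page);
	int mt = __READ_ONCE(__mt);

	if (mt == MIGRATE_CMA || mt == MIGRATE_ISOLATE)
		return false;
#endif
	return !is_zone_movable_page(page);	/* assumed surrounding check */
}

The build failure mentioned above comes from the v5 delta removing the line
that defines the local variable while later code still uses it, which is why
the replacement patch in the lore link should be picked up instead.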