[RFC] mm: gup: add helper page_try_gup_pin(page)

A helper is added to mitigate the gup issue described at
https://lwn.net/Articles/784574/: it is unsafe to write out
a dirty page that is already gup-pinned for DMA.

In the current writeback context, dirty pages are written out
without detecting whether they have been gup-pinned, and without
marking them to keep gupers off. In the gup context, a file page
can be pinned regardless of other gupers and of writeback in
progress.

What makes the issue harder to tackle is that there is no room
in the current page struct, supposedly not even one bit, that
can be used for tracking gupers.

The approach here is to make gupers mutually exclusive, because
it makes no sense to allow a file page to have multiple gupers
at the same time. That singularity then makes it possible to
tell whether a guper exists by looking at the change in the page
refcount.

The result of that singularity is not yet 100% correct but is a
"best effort", as the effect of a random get_page() may also be
folded into the refcount. It is assumed that this best effort is
feasible/acceptable in practice, without the cost of growing the
page struct by one bit, given that something similar has been
applied in the page migration and reclaim contexts for a while.
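
As a rough illustration of the baseline the helper compares
against (a sketch only, not part of the patch;
page_expected_refs() is a hypothetical name):

/*
 * Sketch: the refcount an idle pagecache page is expected to have
 * when no guper and no transient get_page() user is around.
 */
static inline int page_expected_refs(struct page *page)
{
	return page_mapcount(page)	  /* one ref per pte mapping */
	       + 1			  /* the pagecache reference */
	       + page_has_private(page);  /* buffer heads, if any */
}

Anything above that baseline is taken as evidence of another
reference holder.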

With the helper in place, we skip writing out a dirty page if a
guper is detected; on gupping, we give up pinning a file page if
it is under writeback or if we lose the race to become the guper.

The end result is that no gup-pinned page will be put under
writeback.
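
Conceptually, the writeback side then follows the pairing below
(a simplified sketch under the patch's assumptions;
write_one_dirty_page() is a hypothetical name and the dirty and
mapping checks done by write_cache_pages() are omitted):

/*
 * Sketch of the pairing contract: every successful
 * page_try_gup_pin() is matched by a put_page() once the page has
 * been handed to ->writepage(), after which writeback keeps
 * gupers off.
 */
static int write_one_dirty_page(struct page *page,
				struct writeback_control *wbc)
{
	int error;

	if (!page_try_gup_pin(page))
		return 0;	/* a guper holds the page, skip it */

	lock_page(page);
	error = page->mapping->a_ops->writepage(page, wbc);
	put_page(page);		/* drop the exclusive pin */
	return error;
}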

It is based on next-20191031.

Signed-off-by: Hillf Danton <hdanton@xxxxxxxx>
---

--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1055,6 +1055,29 @@ static inline void put_page(struct page
 		__put_page(page);
 }
 
+/*
+ * @page must be pagecache page
+ */
+static inline bool page_try_gup_pin(struct page *page)
+{
+	int count;
+
+	page = compound_head(page);
+	count = page_ref_count(page);
+	smp_mb__after_atomic();
+
+	if (!count || count > page_mapcount(page) + 1 +
+				page_has_private(page))
+		return false;
+
+	if (page_ref_inc_return(page) == count + 1) {
+		smp_mb__after_atomic();
+		return true;
+	}
+	put_page(page);
+	return false;
+}
+
 /**
  * put_user_page() - release a gup-pinned page
  * @page:            pointer to page to be released
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -253,7 +253,11 @@ retry:
 	}
 
 	if (flags & FOLL_GET) {
-		if (unlikely(!try_get_page(page))) {
+		if (page_is_file_cache(page)) {
+			if (PageWriteback(page) || !page_try_gup_pin(page))
+				goto pin_fail;
+		} else if (unlikely(!try_get_page(page))) {
+pin_fail:
 			page = ERR_PTR(-ENOMEM);
 			goto out;
 		}
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2202,6 +2202,9 @@ int write_cache_pages(struct address_spa
 
 			done_index = page->index;
 
+			if (!page_try_gup_pin(page))
+				continue;
+
 			lock_page(page);
 
 			/*
@@ -2215,6 +2218,7 @@ int write_cache_pages(struct address_spa
 			if (unlikely(page->mapping != mapping)) {
 continue_unlock:
 				unlock_page(page);
+				put_page(page);
 				continue;
 			}
 
@@ -2236,6 +2240,11 @@ continue_unlock:
 
 			trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
 			error = (*writepage)(page, wbc, data);
+			/*
+			 * protection of gup pin is no longer needed after
+			 * putting page under writeback
+			 */
+			put_page(page);
 			if (unlikely(error)) {
 				/*
 				 * Handle errors according to the type of
--