Re: [PATCH v12 1/8] mm/gup: Introduce unpin_folio/unpin_folios helpers

On 02.04.24 15:52, David Hildenbrand wrote:
On 25.02.24 08:56, Vivek Kasireddy wrote:
These helpers are the folio versions of unpin_user_page/unpin_user_pages.
They are currently only useful for unpinning folios pinned by
memfd_pin_folios() or other associated routines. However, they could
find new uses in the future, as more folio-only helpers are added to
GUP.

Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Suggested-by: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@xxxxxxxxx>
---
 include/linux/mm.h |  2 ++
 mm/gup.c           | 81 ++++++++++++++++++++++++++++++++++++++++------
 2 files changed, 74 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6f4825d82965..36e4c2b22600 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1601,11 +1601,13 @@ static inline void put_page(struct page *page)
 #define GUP_PIN_COUNTING_BIAS (1U << 10)
 
 void unpin_user_page(struct page *page);
+void unpin_folio(struct folio *folio);
 void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 				 bool make_dirty);
 void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
 				      bool make_dirty);
 void unpin_user_pages(struct page **pages, unsigned long npages);
+void unpin_folios(struct folio **folios, unsigned long nfolios);
 
 static inline bool is_cow_mapping(vm_flags_t flags)
 {
diff --git a/mm/gup.c b/mm/gup.c
index df83182ec72d..0a45eda6aaeb 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -30,6 +30,23 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
+static inline void sanity_check_pinned_folios(struct folio **folios,
+					      unsigned long nfolios)
+{
+	if (!IS_ENABLED(CONFIG_DEBUG_VM))
+		return;
+
+	for (; nfolios; nfolios--, folios++) {
+		struct folio *folio = *folios;
+
+		if (is_zero_folio(folio) ||
+		    !folio_test_anon(folio))
+			continue;
+
+		VM_BUG_ON_FOLIO(!PageAnonExclusive(&folio->page), folio);

That change is wrong (and the split makes the check confusing).

With large folios, it could be that the first subpage is no longer
exclusive while the given subpage (the one sanity_check_pinned_pages()
actually checks) still is.
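
For reference, this is the anon-folio handling in the current per-page
check, sanity_check_pinned_pages() in mm/gup.c (page is the pinned
subpage, folio its folio):

	if (!folio_test_large(folio) || folio_test_hugetlb(folio))
		VM_BUG_ON_PAGE(!PageAnonExclusive(&folio->page), page);
	else
		/* Either a PTE-mapped or a PMD-mapped THP. */
		VM_BUG_ON_PAGE(!PageAnonExclusive(&folio->page) &&
			       !PageAnonExclusive(page), page);

A folio-only helper has no subpage to feed into the second
PageAnonExclusive(page) test, so checking only the head page can make
the VM_BUG_ON fire on folios that are actually fine.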

I suggest dropping that change, and instead, in
unpin_folio()/unpin_folios(), reject any anon folios for now.

So, replace the sanity_check_pinned_folios() call in unpin_folio()/
unpin_folios() with a VM_WARN_ON(folio_test_anon(folio));
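
Something like this (untested sketch; assuming unpin_folio() mirrors
unpin_user_page() and boils down to a gup_put_folio() call):

void unpin_folio(struct folio *folio)
{
	/* We cannot sanity-check anon folios here, so reject them. */
	VM_WARN_ON(folio_test_anon(folio));

	gup_put_folio(folio, 1, FOLL_PIN);
}

with the same VM_WARN_ON() on each folio in the unpin_folios() loop.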

After reading patch #2: drop both the sanity check and the VM_WARN_ON()
from unpin_folio()/unpin_folios(). Add a note to the patch description
that we cannot do this sanity checking without the subpage, and that we
can reintroduce it once we have a single per-folio AnonExclusive bit.
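
That is, the end result would be minimal; as a sketch (with the
reasoning shown as a comment here, although it belongs in the patch
description):

void unpin_folio(struct folio *folio)
{
	/*
	 * No sanity check: without the subpage we cannot replicate the
	 * PageAnonExclusive() test from sanity_check_pinned_pages().
	 * It can be reintroduced once we have a single per-folio
	 * AnonExclusive bit.
	 */
	gup_put_folio(folio, 1, FOLL_PIN);
}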

--
Cheers,

David / dhildenb



