Re: [PATCH v5 8/9] mm: shmem: split large entry if the swapin folio is not large

On 2024/8/26 06:31, Hugh Dickins wrote:
On Mon, 12 Aug 2024, Baolin Wang wrote:

Now the swap device can only swap in order 0 folios, even though a large
folio is swapped out. This requires us to split the large entry previously
saved in the shmem pagecache to support swapping in small folios.

Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
---
  mm/shmem.c | 100 +++++++++++++++++++++++++++++++++++++++++++++++++++++
  1 file changed, 100 insertions(+)

diff --git a/mm/shmem.c b/mm/shmem.c
index 345e25425e37..996062dc196b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1990,6 +1990,81 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
  	swap_free_nr(swap, nr_pages);
  }
+static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
+				   swp_entry_t swap, int new_order, gfp_t gfp)
+{
+	struct address_space *mapping = inode->i_mapping;
+	XA_STATE_ORDER(xas, &mapping->i_pages, index, new_order);
+	void *alloced_shadow = NULL;
+	int alloced_order = 0, i;

gfp needs to be adjusted: see fix patch below.

Ah, good catch. Thank you Hugh.
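
(For readers of the archive: the issue is that the gfp passed in carries
user-data allocation flags, while the node preallocation for the split
(not shown in the quoted hunk) allocates xarray nodes. A minimal sketch of
one plausible shape of the adjustment, assuming GFP_RECLAIM_MASK masking
at the top of the function; see Hugh's fix patch for the exact hunk:

	/* Convert user data gfp flags to xarray node gfp flags */
	gfp &= GFP_RECLAIM_MASK;
)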

+
+	for (;;) {
+		int order = -1, split_order = 0;
+		void *old = NULL;
+
+		xas_lock_irq(&xas);
+		old = xas_load(&xas);
+		if (!xa_is_value(old) || swp_to_radix_entry(swap) != old) {
+			xas_set_err(&xas, -EEXIST);
+			goto unlock;
+		}
+
+		order = xas_get_order(&xas);
+
+		/* Swap entry may have changed before we re-acquire the lock */
+		if (alloced_order &&
+		    (old != alloced_shadow || order != alloced_order)) {
+			xas_destroy(&xas);
+			alloced_order = 0;
+		}
+
+		/* Try to split large swap entry in pagecache */
+		if (order > 0 && order > new_order) {

I have not even attempted to understand all the manipulations of order and
new_order and alloced_order and split_order.  And further down it turns out
that this is only ever called with new_order 0.

You may be wanting to cater for more generality in future, but for now
please cut this down to the new_order 0 case, and omit that parameter.
It will be easier for us to think about the xa_get_order() races if
the possibilities are more limited.

Sure. I will drop the 'new_order' with following fix.


+			if (!alloced_order) {
+				split_order = order;
+				goto unlock;
+			}
+			xas_split(&xas, old, order);
+
+			/*
+			 * Re-set the swap entry after splitting, and the swap
+			 * offset of the original large entry must be continuous.
+			 */
+			for (i = 0; i < 1 << order; i += (1 << new_order)) {
+				pgoff_t aligned_index = round_down(index, 1 << order);
+				swp_entry_t tmp;
+
+				tmp = swp_entry(swp_type(swap), swp_offset(swap) + i);
+				__xa_store(&mapping->i_pages, aligned_index + i,
+					   swp_to_radix_entry(tmp), 0);
+			}

So that is done under xas lock: good. But is the intermediate state
visible to RCU readers, and could that be a problem?

In xas_split(), the multi-index entry is split into smaller entries, and each of these smaller entries is initialized with the old swap value. The subsequent __xa_store() loop then re-sets them to the new swap values. So although RCU readers might observe the old swap value in the interim, I have not seen any problems so far (maybe I missed something).

For concurrent shmem swap-in, there are checks in shmem_swapin_folio() (including the folio->swap.val check and the shmem_confirm_swap() validation) to ensure the correctness of the swap values.
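
(For reference, shmem_confirm_swap() is just an RCU-safe recheck that the
pagecache slot still holds the expected swap entry; roughly:

	static bool shmem_confirm_swap(struct address_space *mapping,
				       pgoff_t index, swp_entry_t swap)
	{
		return xa_load(&mapping->i_pages, index) == swp_to_radix_entry(swap);
	}

so a reader that raced with the split either sees the old large entry or the
re-stored small entries, and revalidates before committing.)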

For shmem_partial_swap_usage(), we may get racy swap usage counts, but that is not a problem, per its comments:
"This is safe to call without i_rwsem or the i_pages lock thanks to RCU,
 as long as the inode doesn't go away and racy results are not a problem."

For shmem truncation, when removing a racy swap entry from the shmem page cache, it uses xa_cmpxchg_irq() to synchronize on the correct swap state.
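
That removal is in the spirit of shmem_free_swap(): only clear the slot if
it still holds the value we expect, and treat a mismatch as a race. Roughly
(a sketch, not the exact truncation code):

	/* Only remove the entry if it still matches 'radswap'; 0 means we raced */
	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
	if (old != radswap)
		return 0;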


[PATCH] mm: shmem: split large entry if the swapin folio is not large fix 2

Now we only split the large swap entry to order 0, so drop the 'new_order'
parameter.

Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
---
 mm/shmem.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index d8038a66b110..f00b7b99ad09 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1998,10 +1998,10 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
 }

 static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
-                                  swp_entry_t swap, int new_order, gfp_t gfp)
+                                  swp_entry_t swap, gfp_t gfp)
 {
        struct address_space *mapping = inode->i_mapping;
-       XA_STATE_ORDER(xas, &mapping->i_pages, index, new_order);
+       XA_STATE_ORDER(xas, &mapping->i_pages, index, 0);
        void *alloced_shadow = NULL;
        int alloced_order = 0, i;

@@ -2026,7 +2026,7 @@ static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
                }

                /* Try to split large swap entry in pagecache */
-               if (order > 0 && order > new_order) {
+               if (order > 0) {
                        if (!alloced_order) {
                                split_order = order;
                                goto unlock;
@@ -2037,7 +2037,7 @@ static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
                          * Re-set the swap entry after splitting, and the swap
                          * offset of the original large entry must be continuous.
                          */
-                       for (i = 0; i < 1 << order; i += (1 << new_order)) {
+                       for (i = 0; i < 1 << order; i++) {
                                pgoff_t aligned_index = round_down(index, 1 << order);
                                swp_entry_t tmp;

@@ -2123,7 +2123,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
                  * should split the large swap entry stored in the pagecache
                  * if necessary.
                  */
-               split_order = shmem_split_large_entry(inode, index, swap, 0, gfp);
+               split_order = shmem_split_large_entry(inode, index, swap, gfp);
                if (split_order < 0) {
                        error = split_order;
                        goto failed;



