Re: [PATCH] mm/mremap_pages: Save a few cycles in 'get_dev_pagemap()'

On 03/11/2021 at 22:41, Matthew Wilcox wrote:
> On Wed, Nov 03, 2021 at 10:35:34PM +0100, Christophe JAILLET wrote:
>> Use 'percpu_ref_tryget_live_rcu()' instead of 'percpu_ref_tryget_live()' to
>> save a few cycles when it is known that the rcu lock is already
>> taken/released.
>
> If this is really important, we can add an __xa_load() which doesn't
> take the RCU read lock.

There are a few:
   rcu_read_lock();
   mem = xa_load(...);
   rcu_read_unlock();
patterns here and there.
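For illustration, Matthew's proposed helper might look roughly like the body of xa_load() with the locking dropped. `__xa_load()` did not exist in the kernel at the time of this thread, so the name and body below are only a sketch of the idea, for callers already inside an RCU read-side critical section:

```c
/* Hypothetical sketch, not actual kernel API: an xa_load() variant that
 * relies on the caller already holding rcu_read_lock(), so it skips
 * taking and releasing the RCU read lock itself. */
static inline void *__xa_load(struct xarray *xa, unsigned long index)
{
	XA_STATE(xas, xa, index);
	void *entry;

	/* Caller must be in an RCU read-side critical section. */
	do {
		entry = xas_load(&xas);
	} while (xas_retry(&xas, entry));

	return entry;
}
```

With such a helper, the repeated pattern above would keep a single rcu_read_lock()/rcu_read_unlock() pair around the lookup plus whatever follows it, instead of paying for a nested pair inside xa_load() as well.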

I don't have any numbers on whether saving some rcu_read_lock()/rcu_read_unlock() pairs would be useful in these cases.

The only numbers I have are in [1].

[1]: https://lore.kernel.org/linux-kernel/cover.1634822969.git.asml.silence@xxxxxxxxx/

CJ



> I honestly think that the xarray is the wrong data structure here,
> and we'd be better off with a simple array of (start, pointer)
> tuples.

>> Signed-off-by: Christophe JAILLET <christophe.jaillet@xxxxxxxxxx>
>> ---
>>   mm/memremap.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/memremap.c b/mm/memremap.c
>> index 84de22c14567..012e8d23d365 100644
>> --- a/mm/memremap.c
>> +++ b/mm/memremap.c
>> @@ -506,7 +506,7 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
>>   	/* fall back to slow path lookup */
>>   	rcu_read_lock();
>>   	pgmap = xa_load(&pgmap_array, PHYS_PFN(phys));
>> -	if (pgmap && !percpu_ref_tryget_live(pgmap->ref))
>> +	if (pgmap && !percpu_ref_tryget_live_rcu(pgmap->ref))
>>   		pgmap = NULL;
>>   	rcu_read_unlock();
>> --
>> 2.30.2