[PATCH 3/4] rcu/tree: use __rcu_alloc_page_lockless() func.

Use the newly introduced __rcu_alloc_page_lockless() function
directly in the k[v]free_rcu() path: a new pointer array can be
obtained on demand, which reduces the memory footprint, and it is
obtained immediately, without any delay.
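
For context, the helper itself is introduced earlier in this series
and its prototype is not repeated here; the contract that this patch
relies on can be sketched as below (the return type and wording are
an assumption, not a copy of the real patch):

	/*
	 * Assumed contract, for illustration only: hand out one page
	 * taken from the current CPU's "pcplist" cache, or return 0
	 * when that cache is empty.  It never sleeps and never takes
	 * zone->lock, so it is safe from any context in which
	 * kvfree_rcu() itself may be called.
	 */
	unsigned long __rcu_alloc_page_lockless(void);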

Please note that we still keep the worker approach introduced
earlier, because the lockless page allocation uses a per-cpu-list
cache that can become depleted, which is perfectly normal
behaviour.

When that happens, the worker we already have, by requesting a new
page, also triggers an internal process that prefetches a specified
number of elements from the buddy allocator, repopulating the
"pcplist" with fresh pages.
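
As a rough illustration of that refill step (the worker, field and
helper names below are hypothetical, the in-tree code differs in
detail), the idea is that a sleepable GFP_KERNEL request from process
context lets the buddy allocator batch-refill the "pcplist" as a side
effect:

	/* Hypothetical sketch of the refill worker, illustration only. */
	static void krc_refill_page_cache(struct work_struct *work)
	{
		struct kfree_rcu_cpu *krcp = container_of(work,
			struct kfree_rcu_cpu, page_cache_work);
		struct kvfree_rcu_bulk_data *bnode;

		/*
		 * One order-0 GFP_KERNEL allocation is enough: when the
		 * per-cpu list is empty, the buddy allocator refills it
		 * with a whole batch of pages, so later lockless
		 * attempts can succeed again.
		 */
		bnode = (struct kvfree_rcu_bulk_data *)
			__get_free_page(GFP_KERNEL | __GFP_NOWARN);
		if (bnode)
			put_cached_bnode(krcp, bnode);
	}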

The number of prefetched elements can be controlled via a sysctl
knob; please see /proc/sys/vm/percpu_pagelist_fraction.
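
For example (illustration only, the value is arbitrary), setting each
zone's per-cpu list high watermark to 1/8 of its pages can be done
from user space as follows:

	/* Equivalent to: echo 8 > /proc/sys/vm/percpu_pagelist_fraction */
	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/proc/sys/vm/percpu_pagelist_fraction", "w");

		if (!f) {
			perror("fopen");
			return 1;
		}
		fprintf(f, "8\n");
		return fclose(f) ? 1 : 0;
	}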

Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
---
 kernel/rcu/tree.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 4bfc46a1e9d1..d51209343029 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3401,6 +3401,10 @@ kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
 	if (!krcp->bkvhead[idx] ||
 			krcp->bkvhead[idx]->nr_records == KVFREE_BULK_MAX_ENTR) {
 		bnode = get_cached_bnode(krcp);
+		if (!bnode)
+			bnode = (struct kvfree_rcu_bulk_data *)
+				__rcu_alloc_page_lockless();
+
 		/* Switch to emergency path. */
 		if (!bnode)
 			return false;
-- 
2.20.1



