listRCU.rst gives an example using 'ipc_lock()', but that function was dropped
by commit 82061c57ce93 ("ipc: drop ipc_lock()").  Because the main logic of
'ipc_lock()' was merged into 'shm_lock()' by that commit, this commit updates
the document to use 'shm_lock()' instead.

Reviewed-by: Madhuparna Bhowmik <madhuparnabhowmik04@xxxxxxxxx>
Signed-off-by: SeongJae Park <sjpark@xxxxxxxxx>
---
 Documentation/RCU/listRCU.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/Documentation/RCU/listRCU.rst b/Documentation/RCU/listRCU.rst
index e768f56e8fa3..2a643e293fb4 100644
--- a/Documentation/RCU/listRCU.rst
+++ b/Documentation/RCU/listRCU.rst
@@ -286,11 +286,11 @@ time the external state changes before Linux becomes aware of the change,
 additional RCU-induced staleness is generally not a problem.

 However, there are many examples where stale data cannot be tolerated.
-One example in the Linux kernel is the System V IPC (see the ipc_lock()
-function in ipc/util.c). This code checks a *deleted* flag under a
+One example in the Linux kernel is the System V IPC (see the shm_lock()
+function in ipc/shm.c). This code checks a *deleted* flag under a
 per-entry spinlock, and, if the *deleted* flag is set, pretends that the
 entry does not exist. For this to be helpful, the search function must
-return holding the per-entry lock, as ipc_lock() does in fact do.
+return holding the per-entry spinlock, as shm_lock() does in fact do.

 .. _quick_quiz:
-- 
2.17.1
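
For readers unfamiliar with the pattern the updated paragraph describes, a
minimal sketch is below.  It is purely illustrative: the names struct foo,
foo_head, and foo_lookup() are hypothetical and do not appear in ipc/shm.c,
and the real shm_lock() goes through the IPC IDR machinery rather than a
plain RCU-protected list.  The point is only the shape of the technique:
the search runs under RCU, takes the per-entry spinlock, checks the
*deleted* flag, and on success returns with that spinlock still held so the
caller can never act on a stale entry.

#include <linux/types.h>
#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>

struct foo {
	struct list_head list;
	spinlock_t lock;	/* per-entry spinlock */
	bool deleted;		/* set under ->lock before list removal */
	int key;
};

static LIST_HEAD(foo_head);

/*
 * Look up an entry by key.  On success, returns with the entry's
 * per-entry spinlock held; returns NULL if the entry is absent or
 * already marked deleted.
 */
static struct foo *foo_lookup(int key)
{
	struct foo *p;

	rcu_read_lock();
	list_for_each_entry_rcu(p, &foo_head, list) {
		if (p->key != key)
			continue;
		spin_lock(&p->lock);
		if (p->deleted) {
			/* Updater has logically removed this entry:
			 * pretend it does not exist. */
			spin_unlock(&p->lock);
			break;
		}
		/* The per-entry lock now pins the entry, so the RCU
		 * read-side critical section may end here. */
		rcu_read_unlock();
		return p;	/* caller must spin_unlock(&p->lock) */
	}
	rcu_read_unlock();
	return NULL;
}

Because the *deleted* check and the hand-off to the caller both happen under
the per-entry lock, there is no window in which the caller could observe an
entry that an updater has already removed.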