Because the SLAB_TYPESAFE_BY_RCU code does not zero pages that are to be
broken up into slabs, the memory returned by kmem_cache_alloc() must be
fully initialized, including any spinlocks included in the newly
allocated structure.  This means that readers attempting to look up a
SLAB_TYPESAFE_BY_RCU object must use a reference-counting approach.
A spinlock may be acquired only after a reference is obtained, because
holding that reference is what prevents the object from being passed to
kmem_cache_free(), and it does so only while that reference continues
to be held.

Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
---
 Documentation/RCU/whatisRCU.rst | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/Documentation/RCU/whatisRCU.rst b/Documentation/RCU/whatisRCU.rst
index 6940e0fe8599b..97f2d0fa84dfa 100644
--- a/Documentation/RCU/whatisRCU.rst
+++ b/Documentation/RCU/whatisRCU.rst
@@ -915,13 +915,18 @@ which an RCU reference is held include:
 The understanding that RCU provides a reference that only prevents a
 change of type is particularly visible with objects allocated from a
 slab cache marked ``SLAB_TYPESAFE_BY_RCU``.  RCU operations may yield a
-reference to an object from such a cache that has been concurrently
-freed and the memory reallocated to a completely different object,
-though of the same type.  In this case RCU doesn't even protect the
-identity of the object from changing, only its type.  So the object
-found may not be the one expected, but it will be one where it is safe
-to take a reference or spinlock and then confirm that the identity
-matches the expectations.
+reference to an object from such a cache that has been concurrently freed
+and the memory reallocated to a completely different object, though of
+the same type.  In this case RCU doesn't even protect the identity of the
+object from changing, only its type.  So the object found may not be the
+one expected, but it will be one where it is safe to take a reference
+(and then potentially acquire a spinlock), allowing subsequent code
+to check whether the identity matches expectations.  It is tempting
+to simply acquire the spinlock without first taking the reference, but
+unfortunately any spinlock in a ``SLAB_TYPESAFE_BY_RCU`` object must be
+initialized after each and every call to kmem_cache_alloc(), which renders
+reference-free spinlock acquisition completely unsafe.  Therefore, when
+using ``SLAB_TYPESAFE_BY_RCU``, make proper use of a reference counter.
 
 With traditional reference counting -- such as that implemented by the
 kref library in Linux -- there is typically code that runs when the last
-- 
2.31.1.189.g2e36527f23
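
For illustration only, here is a minimal sketch of the lookup pattern the
updated text describes: obtain the reference under rcu_read_lock(), and
only then acquire the spinlock and recheck the object's identity.  The
struct foo layout and the foo_lookup_table() and foo_put() helpers are
hypothetical stand-ins and are not part of this patch or of any in-tree
API; the RCU, refcount, and spinlock calls are the usual kernel ones.

  #include <linux/rcupdate.h>
  #include <linux/refcount.h>
  #include <linux/slab.h>
  #include <linux/spinlock.h>

  struct foo {
          int key;
          refcount_t ref;
          spinlock_t lock;
  };

  struct foo *foo_lookup_table(int key);  /* hypothetical RCU-protected search */
  void foo_put(struct foo *fp);           /* hypothetical: drops the reference */

  static struct foo *foo_get(int key)
  {
          struct foo *fp;

          rcu_read_lock();
          fp = foo_lookup_table(key);
          /* The reference, not RCU, is what pins this object's memory. */
          if (fp && !refcount_inc_not_zero(&fp->ref))
                  fp = NULL;
          rcu_read_unlock();
          if (!fp)
                  return NULL;

          /*
           * Only now is ->lock safe to acquire: the reference keeps the
           * object from reaching kmem_cache_free(), so its spinlock cannot
           * be reinitialized by the next kmem_cache_alloc() user.
           */
          spin_lock(&fp->lock);
          if (fp->key != key) {
                  /* Memory was recycled into another object of the same type. */
                  spin_unlock(&fp->lock);
                  foo_put(fp);
                  return NULL;
          }
          spin_unlock(&fp->lock);
          return fp;      /* caller drops the reference with foo_put() */
  }

The caller must hold the reference across every use of the object,
including any later spinlock acquisitions, and drop it only afterward.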