On Mon, Mar 25, 2019 at 02:15:25PM +0000, Christopher Lameter wrote:
> On Fri, 22 Mar 2019, Matthew Wilcox wrote:
>
> > On Fri, Mar 22, 2019 at 07:39:31PM +0000, Christopher Lameter wrote:
> > > On Fri, 22 Mar 2019, Waiman Long wrote:
> > >
> > > > > > I am looking forward to it.
> > > > >
> > > > > There is also already rcu being used in these paths. kfree_rcu() would not
> > > > > be enough? It is an established mechanism that is mature and well
> > > > > understood.
> > > >
> > > > In this case, the memory objects are from kmem caches, so they can't be
> > > > freed using kfree_rcu().
> > >
> > > Oh they can. kfree() can free memory from any slab cache.
> >
> > Only for SLAB and SLUB. SLOB requires that you pass a pointer to the
> > slab cache; it has no way to look up the slab cache from the object.
>
> Well then we could either fix SLOB to conform to the others or add a
> kmem_cache_free_rcu() variant.

The problem with a kmem_cache_free_rcu() variant is that we now have
three pointers to store -- the object pointer, the slab cache pointer
and the rcu next pointer.

I spent some time looking at how SLOB might be fixed, and I didn't come
up with a good idea. Currently SLOB stores the size of the object in
the four bytes before the object, unless the object was allocated from
a slab cache, in which case the size is taken from the cache pointer
instead. So calling kfree() on a pointer allocated using
kmem_cache_alloc() will cause corruption as kfree() attempts to
determine the length of the object.

Options:

1. Dispense with this optimisation and always store the size of the
   object before the object.

2. Add a kmem_cache flag that says whether objects in this cache may be
   freed with kfree(). Only dispense with this optimisation for slabs
   with this flag set.

3. Change SLOB to segregate objects by size.
   If someone has gone to the trouble of creating a kmem_cache, this is
   a pretty good hint that there will be a lot of objects of this size
   allocated, so this could help SLOB fight fragmentation.

Any other bright ideas?
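For reference, option 1 amounts to something like the following userspace sketch. This is not the kernel's SLOB code; the names (slob_alloc_sketch and friends) are made up for illustration, and malloc() stands in for the real page-based allocation. It just shows the bookkeeping: every object, cache-backed or not, carries its size in the four bytes immediately preceding it, so a generic free path can always recover the length without knowing the cache.

```c
#include <stdlib.h>

/*
 * Hedged sketch of the "always store the size before the object"
 * scheme (option 1).  All names here are hypothetical.
 */

static void *slob_alloc_sketch(size_t size)
{
	/* Reserve four extra bytes for the size header. */
	unsigned int *p = malloc(sizeof(unsigned int) + size);

	if (!p)
		return NULL;
	*p = (unsigned int)size;	/* size stored before the object */
	return p + 1;			/* caller sees only the object */
}

static size_t slob_size_sketch(const void *obj)
{
	/* Read the size back from the four bytes before the object. */
	return ((const unsigned int *)obj)[-1];
}

static void slob_free_sketch(void *obj)
{
	if (obj)
		free((unsigned int *)obj - 1);	/* step back to the header */
}
```

The cost is four bytes per cache-backed object that SLOB currently avoids, which is exactly the optimisation option 2 tries to preserve for caches that never see kfree().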