On Sat, Jan 14, 2017 at 10:19:21AM -0500, Tejun Heo wrote:
> Hello, Vladimir.
>
> On Sat, Jan 14, 2017 at 04:19:39PM +0300, Vladimir Davydov wrote:
> > On Sat, Jan 14, 2017 at 12:54:42AM -0500, Tejun Heo wrote:
> > > This patch updates the cache release path so that it simply uses
> > > call_rcu() instead of the synchronous rcu_barrier() + custom
> > > batching. This doesn't cost more while being logically simpler
> > > and way more scalable.
> >
> > The point of rcu_barrier() is to wait until all rcu calls freeing
> > slabs from the cache being destroyed are over (rcu_free_slab,
> > kmem_rcu_free). I'm not sure if call_rcu() guarantees that for all
> > rcu implementations too. If it did, why would we need rcu_barrier()
> > at all?
>
> Yeah, I had a similar question and scanned its users briefly. Looks
> like it's used in combination with ctors so that its users can
> opportunistically dereference objects and e.g. check ids / state /
> whatever without worrying about the objects' lifetimes.

Hello, Tejun. Long time no see! :)

IIUC, rcu_barrier() here prevents the kmem_cache from being destroyed
until all the slab pages in it have been freed. Those slab pages are
freed through call_rcu(). Your patch changes the cache release to
another call_rcu(), and I think that would work if RCU callbacks were
guaranteed to execute in the same order they were added. However, I'm
not sure the RCU API guarantees that ordering. Am I missing something?

Thanks.
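
P.S. To make the comparison concrete, here is a rough sketch of the two
shutdown schemes being discussed. It is illustrative only, not the
actual mm/slab_common.c code: do_release_cache() and the rcu_head
embedded in struct kmem_cache are assumptions of the sketch.

/*
 * Illustrative sketch only -- not the real cache release path.
 * do_release_cache() is a made-up stand-in for the final teardown,
 * and the rcu_head member of struct kmem_cache is assumed here
 * purely for the sake of the example.
 */
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

static void do_release_cache(struct kmem_cache *s)
{
	/* hypothetical final teardown of @s */
}

/*
 * Current scheme: rcu_barrier() waits until every callback already
 * queued with call_rcu() -- including rcu_free_slab()/kmem_rcu_free()
 * for this cache's slab pages -- has finished running.
 */
static void destroy_cache_with_barrier(struct kmem_cache *s)
{
	rcu_barrier();
	do_release_cache(s);	/* no slab page of @s still in flight */
}

/*
 * Proposed scheme: queue the release itself as an RCU callback.  This
 * waits for a grace period, but whether it also runs after the slab
 * freeing callbacks queued earlier depends on callback ordering --
 * the point in question above.
 */
static void cache_release_rcu(struct rcu_head *head)
{
	struct kmem_cache *s = container_of(head, struct kmem_cache,
					    rcu_head);

	do_release_cache(s);
}

static void destroy_cache_with_call_rcu(struct kmem_cache *s)
{
	call_rcu(&s->rcu_head, cache_release_rcu);
}

rcu_barrier() by definition waits for all previously queued RCU
callbacks to complete, while call_rcu() only guarantees that its own
callback runs after a grace period; whether that is enough here is
exactly the ordering question above.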