Re: [PATCH RFC 3/3] slub: reparent memcg caches' slabs on memcg offline

On Wed, May 21, 2014 at 09:41:03AM -0500, Christoph Lameter wrote:
> On Mon, 19 May 2014, Vladimir Davydov wrote:
> 
> > 3) Per cpu partial slabs. We can disable this feature for dead caches by
> > adding an appropriate check to kmem_cache_has_cpu_partial.
> 
> There is already a s->cpu_partial number in kmem_cache. If that is zero
> then no partial cpu slabs should be kept.
> 
> > So far, everything looks very simple - it seems we don't have to modify
> > __slab_free at all if we follow the instructions above.
> >
> > However, there is one thing regarding preemptible kernels. The problem
> > is that after we forbid the cache to store free slabs on per-cpu/node
> > partial lists by setting min_partial=0 and
> > kmem_cache_has_cpu_partial=false (i.e. marking the cache as dead), we
> > have to make sure that all frees that saw the cache as alive are over;
> > otherwise they could still add a free slab to a per-cpu/node partial
> > list *after* the cache was marked dead. For instance,
> 
> Ok, then let's switch off preempt there? Preemption is not enabled by
> most distributions and so this will have the least impact.

Do I understand you correctly that the following change looks OK to you?

diff --git a/mm/slub.c b/mm/slub.c
index fdf0fe4da9a9..dc3582c2b5bb 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2676,31 +2676,31 @@ static __always_inline void slab_free(struct kmem_cache *s,
 redo:
 	/*
 	 * Determine the current cpu's per cpu slab.
 	 * The cpu may change afterward. However that does not matter since
 	 * data is retrieved via this pointer. If we are on the same cpu
 	 * during the cmpxchg then the free will succeed.
 	 */
 	preempt_disable();
 	c = this_cpu_ptr(s->cpu_slab);
 
 	tid = c->tid;
-	preempt_enable();
 
 	if (likely(page == c->page)) {
 		set_freepointer(s, object, c->freelist);
 
 		if (unlikely(!this_cpu_cmpxchg_double(
 				s->cpu_slab->freelist, s->cpu_slab->tid,
 				c->freelist, tid,
 				object, next_tid(tid)))) {
 
 			note_cmpxchg_failure("slab_free", s, tid);
 			goto redo;
 		}
 		stat(s, FREE_FASTPATH);
 	} else
 		__slab_free(s, page, x, addr);
 
+	preempt_enable();
 }
 
 void kmem_cache_free(struct kmem_cache *s, void *x)




