On Mon, 6 Jan 2014, Dave Hansen wrote:

> There used to be only one path out of __slab_alloc(), and
> ALLOC_SLOWPATH got bumped in that exit path.  Now there are two,
> and a bunch of gotos.  ALLOC_SLOWPATH can now get set more than once
> during a single call to __slab_alloc() which is pretty bogus.
> Here's the sequence:
>
> 1. Enter __slab_alloc(), fall through all the way to the
>    stat(s, ALLOC_SLOWPATH);
> 2. Hit 'if (!freelist)', and bump DEACTIVATE_BYPASS, jump to
>    new_slab (goto #1)
> 3. Hit 'if (c->partial)', bump CPU_PARTIAL_ALLOC, goto redo
>    (goto #2)
> 4. Fall through in the same path we did before all the way to
>    stat(s, ALLOC_SLOWPATH)
> 5. Bump ALLOC_REFILL stat, then return
>
> Doing this is obviously bogus.  It keeps us from being able to
> accurately compare ALLOC_SLOWPATH vs. ALLOC_FASTPATH.  It also
> means that the total number of allocs always exceeds the total
> number of frees.
>
> This patch moves stat(s, ALLOC_SLOWPATH) to be called from the
> same place that __slab_alloc() is.  This makes it much less
> likely that ALLOC_SLOWPATH will get botched again in the
> spaghetti-code inside __slab_alloc().
>
> Signed-off-by: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>

Acked-by: David Rientjes <rientjes@xxxxxxxxxx>
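For readers less familiar with the SLUB internals, here is a small standalone
sketch of the double-counting problem the patch fixes.  This is not the kernel
code and not the actual patch; all names below (slow_alloc, alloc_slowpath,
bump_inside) are made up for illustration.  It only shows why bumping a
slow-path counter inside a goto-heavy function counts one call twice, while
bumping it once in the caller, as the patch does for stat(s, ALLOC_SLOWPATH),
counts each slow-path entry exactly once:

#include <stdio.h>

static unsigned long alloc_slowpath;	/* stands in for the ALLOC_SLOWPATH stat */

/*
 * Toy model of __slab_alloc(): the slow path can loop back to its
 * entry via gotos (e.g. refilling from a partial slab), so a stat
 * bumped on that path is hit once per pass, not once per call.
 */
static void slow_alloc(int bump_inside)
{
	int retried = 0;

redo:
	if (bump_inside)
		alloc_slowpath++;	/* bumped on every pass through "redo" */
	if (!retried) {
		retried = 1;
		goto redo;		/* one internal retry, like goto #2 above */
	}
}

int main(void)
{
	alloc_slowpath = 0;
	slow_alloc(1);			/* counter bumped inside the slow path */
	printf("counted inside:    %lu\n", alloc_slowpath);	/* prints 2 */

	alloc_slowpath = 0;
	alloc_slowpath++;		/* caller bumps it once per slow-path entry */
	slow_alloc(0);
	printf("counted by caller: %lu\n", alloc_slowpath);	/* prints 1 */
	return 0;
}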