On Thu, 9 Apr 2015, Andrew Morton wrote:

> > This is going to increase as we add more capabilities. I have a second
> > patch here that extends the fast allocation to the per cpu partial pages.
>
> Yes, but what is the expected success rate of the initial bulk
> allocation attempt? If it's 1% then perhaps there's no point in doing
> it.

After we have extracted objects from all the structures around, we can
also go directly to the page allocator if we wanted and bypass lots of
the metadata processing. So we will ultimately end up with a 100%
success rate.

> > > This kmem_cache_cpu.tid logic is a bit opaque. The low-level
> > > operations seem reasonably well documented but I couldn't find anywhere
> > > which tells me how it all actually works - what is "disambiguation
> > > during cmpxchg" and how do we achieve it?
> >
> > This is used to force a retry in slab_alloc_node() if preemption occurs
> > there. We are modifying the per cpu state, thus a retry must be forced.
>
> No, I'm not referring to this patch. I'm referring to the overall
> design concept behind kmem_cache_cpu.tid. This patch made me go and
> look, and it's a bit of a head-scratcher. It's unobvious and doesn't
> appear to be documented in any central place. Perhaps it's in a
> changelog, but who has time for that?

The tid logic is documented somewhat in mm/slub.c, line 1749 and
following.

> Keeping them in -next is not a problem - I was wondering about when to
> start moving the code into mainline.

When Mr. Brouer has confirmed that the stuff actually does some good for
his issue.
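To illustrate the tid idea discussed above, here is a minimal userspace
sketch (explicitly not the kernel code): the per-cpu transaction id is
advanced on every update, and the fastpath does a single
compare-and-exchange that covers both the freelist head and the tid, so
any intervening modification (or preemption and migration in the kernel
case) makes the exchange fail and forces a redo. All names below
(struct cpu_slab, alloc_fastpath, TID_STEP) are made up for
illustration; the real logic is in slab_alloc_node() and the comment
block in mm/slub.c.

/*
 * Simplified model: freelist head index and tid packed into one 64-bit
 * word so a single CAS can check both. A changed tid makes the CAS fail
 * even if the freelist head happens to look the same (ABA).
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define TID_STEP 1u

struct cpu_slab {
	/* low 32 bits: freelist head index, high 32 bits: tid */
	_Atomic uint64_t word;
};

static inline uint64_t pack(uint32_t head, uint32_t tid)
{
	return ((uint64_t)tid << 32) | head;
}

/* Pop one object index off the freelist, retrying if the state moved. */
static uint32_t alloc_fastpath(struct cpu_slab *c, const uint32_t *next)
{
	uint64_t old, new;
	uint32_t head, tid;

	do {
		old = atomic_load(&c->word);
		head = (uint32_t)old;
		tid = (uint32_t)(old >> 32);
		/* next[head] is the following free object in the chain */
		new = pack(next[head], tid + TID_STEP);
		/*
		 * If anything touched the freelist or bumped the tid since
		 * we read 'old', the CAS fails and we redo the whole read.
		 */
	} while (!atomic_compare_exchange_weak(&c->word, &old, new));

	return head;
}

int main(void)
{
	uint32_t next[4] = { 1, 2, 3, 3 };	/* free chain 0 -> 1 -> 2 -> 3 */
	struct cpu_slab c = { .word = pack(0, 0) };

	printf("got %u\n", alloc_fastpath(&c, next));
	printf("got %u\n", alloc_fastpath(&c, next));
	return 0;
}

The real fastpath uses this_cpu_cmpxchg_double() on (freelist, tid)
instead of a packed word, but the retry-on-mismatch structure is the
same.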