On Wednesday, 20 July 2011 at 11:17 -0500, Christoph Lameter wrote:
> On Wed, 20 Jul 2011, Eric Dumazet wrote:
>
> > Note that adding ____cacheline_aligned_in_smp on nodelists[] actually
> > helps performance, as all following fields are readonly after kmem_cache
> > setup.
>
> Well but that is not addressing the same issue. Could you separate that
> out?

I would like this patch not to be a performance regression. I know some
people really want a fast SLAB/SLUB ;)

> The other question that follows from this is then: Does that
> alignment compensate for the loss of performance due to the additional
> lookup in hot code paths and the additional cacheline reference required?

In fact, the resulting code is smaller, because most fields now sit within
a 127-byte offset (x86 can encode such accesses with shorter instructions,
using an 8-bit displacement).

Before patch :

# size mm/slab.o
   text	   data	    bss	    dec	    hex	filename
  22605	 361665	     32	 384302	  5dd2e	mm/slab.o

After patch :

# size mm/slab.o
   text	   data	    bss	    dec	    hex	filename
  22347	 328929	  32800	 384076	  5dc4c	mm/slab.o

> The per node pointers are lower priority in terms of performance than the
> per cpu pointers. I'd rather have the per node pointers requiring an
> additional lookup. Less impact on hot code paths.

Sure. I'll post a V2 with the cpu array placed before the node array.