On Wed, 10 Dec 2014 10:30:17 -0600 Christoph Lameter <cl@xxxxxxxxx> wrote:

[...]
>
> Slab Benchmarks on a kernel with CONFIG_PREEMPT show an improvement of
> 20%-50% of fastpath latency:
>
> Before:
>
> Single thread testing
[...]
> 2. Kmalloc: alloc/free test
[...]
> 10000 times kmalloc(256)/kfree -> 116 cycles
[...]
>
> After:
>
> Single thread testing
[...]
> 2. Kmalloc: alloc/free test
[...]
> 10000 times kmalloc(256)/kfree -> 60 cycles
[...]

That looks like an impressive saving, 116 -> 60 cycles. I just don't
see the same kind of improvement with my similar tests[1][2].

My test[1] is simply a fast-path loop over kmem_cache_alloc+free on
256-byte objects. (Results are after explicitly inlining the new
function is_pointer_to_page().)

 baseline: 47 cycles(tsc) 19.032 ns
 patchset: 45 cycles(tsc) 18.135 ns

I do see an improvement, but it is not as large as I would have
expected. (CPU E5-2695)

[1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/time_bench_kmem_cache1.c
[2] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/qmempool_bench.c

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
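
[Editor's illustration: the following is a minimal sketch of the kind of
fast-path alloc+free micro-benchmark described above, in the spirit of the
time_bench_kmem_cache1.c test linked at [1]. It is not Jesper's actual
harness: the module name, cache name, iteration count, and the use of
get_cycles() for TSC sampling are assumptions made for illustration.]

/*
 * Sketch of a slab fast-path micro-benchmark module (NOT the actual
 * test from [1]; names and parameters are illustrative assumptions).
 * Loops kmem_cache_alloc+kmem_cache_free on 256-byte objects and
 * reports the average cost per iteration in TSC cycles.
 */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/timex.h>	/* get_cycles() */

#define OBJ_SIZE	256	/* matches the 256-byte objects tested */
#define LOOPS		100000	/* assumed iteration count */

static int __init slab_bench_init(void)
{
	struct kmem_cache *cache;
	cycles_t start, stop;
	void *obj;
	int i;

	/* Dedicated cache, so results are not perturbed by other
	 * users of the general kmalloc-256 cache. */
	cache = kmem_cache_create("slab_bench_256", OBJ_SIZE, 0, 0, NULL);
	if (!cache)
		return -ENOMEM;

	start = get_cycles();
	for (i = 0; i < LOOPS; i++) {
		/* Immediate free keeps the object on the per-CPU
		 * freelist, so alloc+free stays on the lockless
		 * fast path being measured. */
		obj = kmem_cache_alloc(cache, GFP_KERNEL);
		if (!obj)
			break;
		kmem_cache_free(cache, obj);
	}
	stop = get_cycles();

	pr_info("kmem_cache_alloc+free: %llu cycles per iteration\n",
		(unsigned long long)(stop - start) / LOOPS);

	kmem_cache_destroy(cache);
	/* Fail load on purpose so the module can be re-inserted for
	 * repeated runs without an explicit rmmod. */
	return -EAGAIN;
}

module_init(slab_bench_init);
MODULE_LICENSE("GPL");

[Note: a tight alloc/free loop like this measures only fast-path latency;
results are sensitive to CONFIG_PREEMPT, inlining decisions (cf. the
is_pointer_to_page() remark above), and CPU frequency scaling.]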