Christoph Lameter wrote:
> On Thu, 27 Nov 2008, Eric Dumazet wrote:
> > The last point is about SLUB being hit hard, unless we
> > use slub_min_order=3 at boot, or we use Christoph Lameter's
> > patch (struct file RCU optimizations):
> > http://thread.gmane.org/gmane.linux.kernel/418615
> > If we boot the machine with slub_min_order=3, the SLUB overhead disappears.
> I'd rather not be that drastic. Did you try increasing slub_min_objects
> instead? Try 40-100. If we find the right number, then we should update
> the tuning to make sure that it picks the right slab page sizes.
4096 / 192 = 21, i.e. an order-0 (4KB) page holds only 21 filp objects.
with slub_min_objects=22:
# cat /sys/kernel/slab/filp/order
1
# time ./socket8
real 0m1.725s
user 0m0.685s
sys 0m12.955s
with slub_min_objects=45:
# cat /sys/kernel/slab/filp/order
2
# time ./socket8
real 0m1.652s
user 0m0.694s
sys 0m12.367s
with slub_min_objects=80:
# cat /sys/kernel/slab/filp/order
3
# time ./socket8
real 0m1.642s
user 0m0.719s
sys 0m12.315s
I would say slub_min_objects=45 is the optimal value on 32-bit arches to
get acceptable performance on this workload (order=2 for the filp kmem_cache).
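
To illustrate why these slub_min_objects values map to these orders, here is a
minimal user-space sketch of the simplified rule "pick the smallest order whose
slab holds at least min_objects objects", using the ~192-byte object size
implied by the 4096/192 arithmetic above. It is not SLUB's real
calculate_order() (which also weighs wasted space and slub_max_order, which is
presumably why min_objects=80 ends up at order 3 rather than 2), and booting
with slub_min_order=3 simply forces the starting order to 3:

#include <stdio.h>

#define PAGE_SIZE 4096
#define MAX_ORDER 5

/* smallest order whose slab can hold at least min_objects objects;
 * the real heuristic in mm/slub.c also minimizes leftover space */
static int simple_order(unsigned int obj_size, unsigned int min_objects)
{
	int order;

	for (order = 0; order <= MAX_ORDER; order++) {
		unsigned int slab_size = PAGE_SIZE << order;

		if (slab_size / obj_size >= min_objects)
			return order;
	}
	return MAX_ORDER;
}

int main(void)
{
	unsigned int min_objs[] = { 22, 45, 80 };
	int i;

	/* assume struct file is ~192 bytes on this 32-bit box (4096/192 = 21) */
	for (i = 0; i < 3; i++)
		printf("slub_min_objects=%u -> order %d\n",
		       min_objs[i], simple_order(192, min_objs[i]));
	return 0;
}

Built with gcc, this prints order 1 for 22 and order 2 for 45, matching the
/sys/kernel/slab/filp/order values above.
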
Note: SLAB here is disastrous, but you already knew that :)
real 0m8.128s
user 0m0.748s
sys 1m3.467s
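
(The socket8 source is not included in this thread; assuming it is the usual
filp-cache stress test, i.e. 8 processes each looping over socket()/close(), a
minimal sketch could look like the following. The process and loop counts are
guesses, not the real benchmark parameters:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

#define NPROC 8
#define LOOPS 1000000

int main(void)
{
	int i;

	for (i = 0; i < NPROC; i++) {
		if (fork() == 0) {
			int j;

			for (j = 0; j < LOOPS; j++) {
				int fd = socket(AF_INET, SOCK_STREAM, 0);

				if (fd < 0) {
					perror("socket");
					exit(1);
				}
				close(fd);	/* frees the struct file (filp cache) */
			}
			exit(0);
		}
	}
	for (i = 0; i < NPROC; i++)
		wait(NULL);
	return 0;
}

Each socket()/close() pair allocates and frees a struct file, so the run times
above are dominated by the filp kmem_cache.)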