[PATCH v3 0/4] mm/slub: Fix count_partial() problem

count_partial() can hold the n->list_lock spinlock for quite a long
time, which causes real trouble for the system. This series eliminates
the problem.

v1->v2:
- Improved changelog and variable naming for PATCH 1~2.
- PATCH3 adds per-cpu counter to avoid performance regression
  in concurrent __slab_free().

v2->v3:
- Changed "page->inuse" to the safe "new.inuse", etc.
- Used CONFIG_SLUB_DEBUG and CONFIG_SYSFS condition for new counters.
- atomic_long_t -> unsigned long

[Testing]
According to my tests, there may be a slight performance impact under
extreme __slab_free() concurrency.

On my 32-cpu 2-socket physical machine:
Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz

1) perf stat --null --repeat 10 -- hackbench 20 thread 20000

== original, unpatched
Performance counter stats for 'hackbench 20 thread 20000' (10 runs):

      24.536050899 seconds time elapsed                                          ( +-  0.24% )


Performance counter stats for 'hackbench 20 thread 20000' (10 runs):

      24.588049142 seconds time elapsed                                          ( +-  0.35% )


== patched with patches 1~4
Performance counter stats for 'hackbench 20 thread 20000' (10 runs):

      24.670892273 seconds time elapsed                                          ( +-  0.29% )


Performance counter stats for 'hackbench 20 thread 20000' (10 runs):

      24.746755689 seconds time elapsed                                          ( +-  0.21% )


2) perf stat --null --repeat 10 -- hackbench 32 thread 20000

== original, unpatched
 Performance counter stats for 'hackbench 32 thread 20000' (10 runs):

      39.784911855 seconds time elapsed                                          ( +-  0.14% )

 Performance counter stats for 'hackbench 32 thread 20000' (10 runs):

      39.868687608 seconds time elapsed                                          ( +-  0.19% )

== patched with patches 1~4
 Performance counter stats for 'hackbench 32 thread 20000' (10 runs):

      39.681273015 seconds time elapsed                                          ( +-  0.21% )

 Performance counter stats for 'hackbench 32 thread 20000' (10 runs):

      39.681238459 seconds time elapsed                                          ( +-  0.09% )


Xunlei Pang (4):
  mm/slub: Introduce two counters for partial objects
  mm/slub: Get rid of count_partial()
  percpu: Export per_cpu_sum()
  mm/slub: Use percpu partial free counter

 include/linux/percpu-defs.h   |  10 ++++
 kernel/locking/percpu-rwsem.c |  10 ----
 mm/slab.h                     |   4 ++
 mm/slub.c                     | 120 +++++++++++++++++++++++++++++-------------
 4 files changed, 97 insertions(+), 47 deletions(-)

-- 
1.8.3.1
