On Sun, Dec 29, 2013 at 01:49:48PM +0200, Pekka Enberg wrote:
>On Sat, Dec 28, 2013 at 3:50 AM, Li Zefan <lizefan@xxxxxxxxxx> wrote:
>> On 2013/12/27 17:46, Wanpeng Li wrote:
>>> The SLUB per-cpu partial cache is a list of slabs used to accelerate
>>> object allocation. However, the current code only accumulates the
>>> object count of the first slab on the per-cpu partial list instead
>>> of traversing the whole list.
>>>
>>> Signed-off-by: Wanpeng Li <liwanp@xxxxxxxxxxxxxxxxxx>
>>> ---
>>>  mm/slub.c | 32 +++++++++++++++++++++++---------
>>>  1 files changed, 23 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index 545a170..799bfdc 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -4280,7 +4280,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
>>>  			struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
>>>  			int node;
>>> -			struct page *page;
>>> +			struct page *page, *p;
>>>
>>>  			page = ACCESS_ONCE(c->page);
>>>  			if (!page)
>>> @@ -4298,8 +4298,9 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
>>>  				nodes[node] += x;
>>>
>>>  			page = ACCESS_ONCE(c->partial);
>>> -			if (page) {
>>> -				x = page->pobjects;
>>> +			while ((p = page)) {
>>> +				page = p->next;
>>> +				x = p->pobjects;
>>>  				total += x;
>>>  				nodes[node] += x;
>>>  			}
>>
>> Can we apply this patch first? It was sent a month ago, but Pekka was not responsive.
>
>Applied. Wanpeng, care to resend your patch?

Zefan's patch is good enough; mine is no longer needed.

Regards,
Wanpeng Li