Re: [PATCH 01/10] slub: cleanup code for kmem cgroup support to kmem_cache_free_bulk

On Thu, Jan 07, 2016 at 03:03:38PM +0100, Jesper Dangaard Brouer wrote:
> This change is primarily an attempt to make it easier to realize the
> optimizations the compiler performs in case CONFIG_MEMCG_KMEM is not
> enabled.
> 
> Performance wise, even when CONFIG_MEMCG_KMEM is compiled in, the
> overhead is zero. This is because, as long as no process has
> enabled kmem cgroup accounting, the assignment is replaced by
> asm-NOP operations.  This is possible because memcg_kmem_enabled()
> uses a static_key_false() construct.
> 
> It also helps readability as it avoids accessing the p[] array like
> p[size - 1], which "exposes" that the array is processed backwards
> inside the helper function build_detached_freelist().
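
For reference, the zero-overhead claim above rests on the static-key
pattern behind memcg_kmem_enabled().  Roughly the shape of it -- a
simplified sketch, not the exact mm/memcontrol.h / mm/slab.h code (the
real cache_from_obj() also consults slab debug flags):

extern struct static_key memcg_kmem_enabled_key;

static inline bool memcg_kmem_enabled(void)
{
        /*
         * static_key_false() compiles to a no-op jump while the key is
         * disabled; it is patched to a taken branch only once some cgroup
         * actually enables kmem accounting.
         */
        return static_key_false(&memcg_kmem_enabled_key);
}

static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
{
        /* Accounting off: the lookup below sits behind the NOP branch. */
        if (!memcg_kmem_enabled())
                return s;

        /*
         * Accounting on: the object may belong to a per-memcg clone of s,
         * so recover the owning cache from the backing page.
         */
        return virt_to_head_page(x)->slab_cache;
}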

That part is cleaned up, but the overall code doesn't look readable to me.
How about the change below?

Thanks.

---------------------->8------------------
 struct detached_freelist {
+       struct kmem_cache *s;
        struct page *page;
        void *tail;
        void *freelist;
@@ -2852,8 +2853,11 @@ static int build_detached_freelist(struct kmem_cache *s, size_t size,
        if (!object)
                return 0;
 
+       /* Support for memcg */
+       df->s = cache_from_obj(s, object);
+
        /* Start new detached freelist */
-       set_freepointer(s, object, NULL);
+       set_freepointer(df->s, object, NULL);
        df->page = virt_to_head_page(object);
        df->tail = object;
        df->freelist = object;
@@ -2868,7 +2872,7 @@ static int build_detached_freelist(struct kmem_cache *s, size_t size,
                /* df->page is always set at this point */
                if (df->page == virt_to_head_page(object)) {
                        /* Opportunity build freelist */
-                       set_freepointer(s, object, df->freelist);
+                       set_freepointer(df->s, object, df->freelist);
                        df->freelist = object;
                        df->cnt++;
                        p[size] = NULL; /* mark object processed */
@@ -2889,23 +2893,19 @@ static int build_detached_freelist(struct kmem_cache *s, size_t size,
 
 
 /* Note that interrupts must be enabled when calling this function. */
-void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
+void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
 {
        if (WARN_ON(!size))
                return;
 
        do {
                struct detached_freelist df;
-               struct kmem_cache *s;
-
-               /* Support for memcg */
-               s = cache_from_obj(orig_s, p[size - 1]);
 
                size = build_detached_freelist(s, size, p, &df);
                if (unlikely(!df.page))
                        continue;
 
-               slab_free(s, df.page, df.freelist, df.tail, df.cnt, _RET_IP_);
+               slab_free(df.s, df.page, df.freelist, df.tail, df.cnt, _RET_IP_);
        } while (likely(size));
 }
 EXPORT_SYMBOL(kmem_cache_free_bulk);
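
A hedged usage sketch, not part of the patch: with the cache lookup moved
into build_detached_freelist(), the caller-visible bulk API stays exactly
as before.  my_cache and init_my_object() below are illustrative names,
not anything from the patch:

static int use_bulk_api(struct kmem_cache *my_cache)
{
        void *objs[16];
        int i, n;

        n = kmem_cache_alloc_bulk(my_cache, GFP_KERNEL, ARRAY_SIZE(objs), objs);
        if (!n)
                return -ENOMEM;

        for (i = 0; i < n; i++)
                init_my_object(objs[i]);        /* hypothetical helper */

        /*
         * Even if the objects were charged to a per-memcg clone of my_cache,
         * build_detached_freelist() now resolves the real cache from the
         * objects themselves before slab_free() runs.
         */
        kmem_cache_free_bulk(my_cache, n, objs);
        return 0;
}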

