Re: Common10 [06/20] Extract a common function for kmem_cache_destroy

> Index: linux-2.6/mm/slab_common.c
> ===================================================================
> --- linux-2.6.orig/mm/slab_common.c     2012-08-02 14:21:12.797779926 -0500
> +++ linux-2.6/mm/slab_common.c  2012-08-02 14:21:17.301860675 -0500
> @@ -130,6 +130,31 @@
>  }
>  EXPORT_SYMBOL(kmem_cache_create);
>
> +void kmem_cache_destroy(struct kmem_cache *s)
> +{
> +       get_online_cpus();
> +       mutex_lock(&slab_mutex);
> +       s->refcount--;
> +       if (!s->refcount) {
> +               list_del(&s->list);
> +
> +               if (!__kmem_cache_shutdown(s)) {
> +                       if (s->flags & SLAB_DESTROY_BY_RCU)
> +                               rcu_barrier();
> +
> +                       __kmem_cache_destroy(s);
> +               } else {
> +                       list_add(&s->list, &slab_caches);
> +                       printk(KERN_ERR "kmem_cache_destroy %s: Slab cache still has objects\n",
> +                               s->name);
> +                       dump_stack();
> +               }
> +       }
> +       mutex_unlock(&slab_mutex);
> +       put_online_cpus();
> +}
> +EXPORT_SYMBOL(kmem_cache_destroy);

This common code changes SLUB's behavior when objects remain in the cache.
Before the patch, the kmem_cache was always destroyed, regardless of the
number of remaining objects. After the patch, when objects remain, the
kmem_cache survives as well. This is problematic because
kmem_cache_close() has already freed the per-cpu structure; if we then
reuse this kmem_cache, we may encounter a NULL pointer dereference.

I suggest the following modification. I think it is sufficient to
prevent the case described above.

diff --git a/mm/slub.c b/mm/slub.c
index cfe4abb..7f26b39 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3184,7 +3184,6 @@ static inline int kmem_cache_close(struct kmem_cache *s)
        int node;

        flush_all(s);
-       free_percpu(s->cpu_slab);
        /* Attempt to free all objects */
        for_each_node_state(node, N_NORMAL_MEMORY) {
                struct kmem_cache_node *n = get_node(s, node);
@@ -3193,6 +3192,7 @@ static inline int kmem_cache_close(struct kmem_cache *s)
                if (n->nr_partial || slabs_node(s, node))
                        return 1;
        }
+       free_percpu(s->cpu_slab);
        free_kmem_cache_nodes(s);
        return 0;
 }
