Re: [PATCH 1/3] mm/slub: fix the race between validate_slab and slab_free

On 7/19/22 10:21 PM, Vlastimil Babka wrote:
On 7/19/22 16:15, Rongwei Wang wrote:

...
+
+    slab_unlock(slab, &flags2);
+    spin_unlock_irqrestore(&n->list_lock, flags);
+    if (!ret)
+        slab_fix(s, "Object at 0x%p not freed", object);
+    if (slab_to_discard) {
+        stat(s, FREE_SLAB);
+        discard_slab(s, slab);
+    }
+
+    return ret;
+}
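A side note on the ordering in the hunk above: the slab to discard is only recorded while slab_lock and n->list_lock are held, and discard_slab() runs after both are dropped, which keeps the lock hold time short and avoids calling into the page allocator with the locks held. A minimal userspace sketch of that defer-then-free pattern, with hypothetical names rather than the real SLUB API:

#include <pthread.h>
#include <stdlib.h>

/* Hypothetical list node, standing in for a slab on a partial list. */
struct node {
        struct node *next;
        int inuse;
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *partial_head;

static void put_object(struct node *n)
{
        struct node *to_discard = NULL;
        struct node **pp;

        pthread_mutex_lock(&list_lock);
        if (--n->inuse == 0) {
                /* Unlink under the lock, but only record the victim... */
                for (pp = &partial_head; *pp; pp = &(*pp)->next) {
                        if (*pp == n) {
                                *pp = n->next;
                                to_discard = n;
                                break;
                        }
                }
        }
        pthread_mutex_unlock(&list_lock);

        /*
         * ...and free it only after the lock is dropped, mirroring how
         * discard_slab() runs after spin_unlock_irqrestore() above.
         */
        free(to_discard);
}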
I have tested this patch, and it indeed deals with the bug that I described.

Thanks.

Though I have also prepared this part of the code, your code is OK to me.

Aha, feel free to post your version, maybe it's simpler? We can compare.
My code only covers the part corresponding to your free_debug_processing(); its structure looks like this:

slab_free() {
    if (kmem_cache_debug(s))
        __slab_free_debug();
    else
        __do_slab_free();
}

The __slab_free_debug() here is similar to your free_debug_processing().

+/*
+ * Slow path handling for debugging.
+ */
+static void __slab_free_debug(struct kmem_cache *s, struct slab *slab,
+                       void *head, void *tail, int cnt,
+                       unsigned long addr)
+{
+       void *prior;
+       int was_frozen;
+       struct slab new;
+       unsigned long counters;
+       struct kmem_cache_node *n = NULL;
+       unsigned long flags;
+       int ret;
+
+       stat(s, FREE_SLOWPATH);
+
+       if (kfence_free(head))
+               return;
+
+       n = get_node(s, slab_nid(slab));
+
+       spin_lock_irqsave(&n->list_lock, flags);
+       ret = free_debug_processing(s, slab, head, tail, cnt, addr);
+       if (!ret) {
+               spin_unlock_irqrestore(&n->list_lock, flags);
+               return;
+       }
+
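+       /*
+        * Splice the freed objects onto the slab's freelist and adjust
+        * inuse; retry if another CPU changed the slab meanwhile.
+        */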
+       do {
+               prior = slab->freelist;
+               counters = slab->counters;
+               set_freepointer(s, tail, prior);
+               new.counters = counters;
+               was_frozen = new.frozen;
+               new.inuse -= cnt;
+       } while (!cmpxchg_double_slab(s, slab,
+               prior, counters,
+               head, new.counters,
+               "__slab_free"));
+
+       if ((new.inuse && prior) || was_frozen) {
+               spin_unlock_irqrestore(&n->list_lock, flags);
+               if (likely(was_frozen)) {
+                       stat(s, FREE_FROZEN);
+               }
+
+               return;
+       }
+
+       if (!new.inuse && n->nr_partial >= s->min_partial) {
+               /* No objects in use in this slab, so discard it. */
+               if (prior) {
+                       /* Slab on the partial list. */
+                       remove_partial(n, slab);
+                       stat(s, FREE_REMOVE_PARTIAL);
+               } else {
+                       /* Slab must be on the full list */
+                       remove_full(s, n, slab);
+               }
+
+               spin_unlock_irqrestore(&n->list_lock, flags);
+               stat(s, FREE_SLAB);
+               discard_slab(s, slab);
+               return;
+       }
+
+       /*
+        * Objects left in the slab. If it was not on the partial list
+        * before then add it.
+        */
+       if (!prior) {
+               remove_full(s, n, slab);
+               add_partial(n, slab, DEACTIVATE_TO_TAIL);
+               stat(s, FREE_ADD_PARTIAL);
+       }
+       spin_unlock_irqrestore(&n->list_lock, flags);
+
+       return;
+}
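For what it's worth, the do/while above is the usual optimistic update: snapshot freelist and counters, build the new state locally, and let cmpxchg_double_slab() retry the whole thing if another CPU changed the slab in between. The same pattern in a compact standalone form, using C11 atomics on a single hypothetical packed word (whereas cmpxchg_double_slab() compares freelist and counters as a pair):

#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical packed word standing in for slab->counters. */
static _Atomic uint64_t counters;

static void dec_inuse(uint64_t cnt)
{
        uint64_t old, new;

        old = atomic_load(&counters);
        do {
                /* Build the new state from the snapshot ("new.inuse -= cnt"). */
                new = old - cnt;
        } while (!atomic_compare_exchange_weak(&counters, &old, new));
        /*
         * On failure, 'old' is refreshed with the current value, so the
         * loop recomputes 'new' from a fresh snapshot, like the retry
         * in __slab_free_debug() above.
         */
}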