Re: [PATCH 2/2] stackdepot: make fast paths lock-less again

Hello, stackdepot developers.

I suspect that commit 4434a56ec209 ("stackdepot: make fast paths
lock-less again") is not safe, because
https://syzkaller.appspot.com/x/error.txt?x=1409f29a180000 reports a
use-after-free at the list_del() below, reached from stack_depot_save_flags().

----------
+       /*
+        * We maintain the invariant that the elements in front are least
+        * recently used, and are therefore more likely to be associated with an
+        * RCU grace period in the past. Consequently it is sufficient to only
+        * check the first entry.
+        */
+       stack = list_first_entry(&free_stacks, struct stack_record, free_list);
+       if (stack->size && !poll_state_synchronize_rcu(stack->rcu_state))
+               return NULL;
+
+       list_del(&stack->free_list);
+       counters[DEPOT_COUNTER_FREELIST_SIZE]--;
----------

Commit 4434a56ec209 says that the race is handled by refcount_inc_not_zero(),
but refcount_inc_not_zero() is called only when STACK_DEPOT_FLAG_GET is specified.

----------
+       list_for_each_entry_rcu(stack, bucket, hash_list) {
+               if (stack->hash != hash || stack->size != size)
+                       continue;

-       lockdep_assert_held(&pool_rwlock);
+               /*
+                * This may race with depot_free_stack() accessing the freelist
+                * management state unioned with @entries. The refcount is zero
+                * in that case and the below refcount_inc_not_zero() will fail.
+                */
+               if (data_race(stackdepot_memcmp(entries, stack->entries, size)))
+                       continue;

-       list_for_each(pos, bucket) {
-               found = list_entry(pos, struct stack_record, list);
-               if (found->hash == hash &&
-                   found->size == size &&
-                   !stackdepot_memcmp(entries, found->entries, size))
-                       return found;
+               /*
+                * Try to increment refcount. If this succeeds, the stack record
+                * is valid and has not yet been freed.
+                *
+                * If STACK_DEPOT_FLAG_GET is not used, it is undefined behavior
+                * to then call stack_depot_put() later, and we can assume that
+                * a stack record is never placed back on the freelist.
+                */
+               if ((flags & STACK_DEPOT_FLAG_GET) && !refcount_inc_not_zero(&stack->count))
+                       continue;
+
+               ret = stack;
+               break;
        }
----------
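
For what it is worth, the plain stack_depot_save() entry point never passes
STACK_DEPOT_FLAG_GET; as far as I can see it is just a thin wrapper (quoted
roughly from lib/stackdepot.c, so please double-check against the actual source):

----------
depot_stack_handle_t stack_depot_save(unsigned long *entries,
				      unsigned int nr_entries,
				      gfp_t alloc_flags)
{
	return stack_depot_save_flags(entries, nr_entries, alloc_flags,
				      STACK_DEPOT_FLAG_CAN_ALLOC);
}
----------

So every stack_depot_save() user goes through the lock-less lookup without
ever taking a reference on the record it finds.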

I worried that, if we race when STACK_DEPOT_FLAG_GET is not specified,
depot_alloc_stack() erroneously overwrites stack->free_list via
memcpy(stack->entries, ...), and an invalid memory access happens when
stack->free_list.next is later read. Therefore, I tried
https://syzkaller.appspot.com/text?tag=Patch&x=17a12a30180000 , but it did not
help ( https://syzkaller.appspot.com/x/error.txt?x=1423a4ac180000 ).
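
To convince myself that such an overwrite really clobbers the list pointers,
I wrote a small user-space model of the layout. The struct below only
approximates struct stack_record (fewer fields, made-up name); it is not the
real definition:

----------
/* User-space approximation; the real struct stack_record has more fields. */
#include <stdio.h>
#include <string.h>

struct list_head {
	struct list_head *next, *prev;
};

struct fake_stack_record {
	unsigned int hash;
	unsigned int size;
	union {
		unsigned long entries[16];		/* saved frames */
		struct {
			struct list_head free_list;	/* freelist linkage */
			unsigned long rcu_state;	/* RCU cookie */
		};
	};
};

int main(void)
{
	struct fake_stack_record rec = { 0 };
	unsigned long frames[4] = { 0xdead0001, 0xdead0002, 0xdead0003, 0xdead0004 };

	/* Pretend the record is still linked into free_stacks. */
	rec.free_list.next = (struct list_head *)0x1111;
	rec.free_list.prev = (struct list_head *)0x2222;

	/* Roughly what depot_alloc_stack() does when it reuses the record. */
	memcpy(rec.entries, frames, sizeof(frames));

	/* The freelist pointers are now stack frame values. */
	printf("next=%p prev=%p\n",
	       (void *)rec.free_list.next, (void *)rec.free_list.prev);
	return 0;
}
----------

On a 64-bit build this prints the first two frame values in place of the list
pointers, i.e. exactly the kind of garbage a later list_del() would dereference.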

This made me wonder how stack_depot_save() (which does not set
STACK_DEPOT_FLAG_GET) can be safe at all. Don't all callers need to set
STACK_DEPOT_FLAG_GET when calling stack_depot_save_flags(), and then call
stack_depot_put() when they are done?
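
If the answer is yes, I would expect every caller to end up with something like
the sketch below (just an illustration of the GET/put pairing I have in mind;
save_with_ref()/put_ref() are made-up helper names):

----------
#include <linux/stackdepot.h>
#include <linux/stacktrace.h>

/* Hypothetical helper: save a trace and keep a reference on the record. */
static depot_stack_handle_t save_with_ref(gfp_t gfp)
{
	unsigned long entries[64];
	unsigned int nr_entries;

	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
	return stack_depot_save_flags(entries, nr_entries, gfp,
				      STACK_DEPOT_FLAG_CAN_ALLOC |
				      STACK_DEPOT_FLAG_GET);
}

/* Hypothetical helper: drop the reference taken above. */
static void put_ref(depot_stack_handle_t handle)
{
	if (handle)
		stack_depot_put(handle);
}
----------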




