Re: [PATCH] kasan: fix races in quarantine_remove_cache()

On Thu, Mar 9, 2017 at 11:29 AM, Andrey Ryabinin
<aryabinin@xxxxxxxxxxxxx> wrote:
> On 03/09/2017 12:37 PM, Dmitry Vyukov wrote:
>>>>  void quarantine_reduce(void)
>>>>  {
>>>>       size_t total_size, new_quarantine_size, percpu_quarantines;
>>>>       unsigned long flags;
>>>> +     int srcu_idx;
>>>>       struct qlist_head to_free = QLIST_INIT;
>>>>
>>>>       if (likely(READ_ONCE(quarantine_size) <=
>>>>                  READ_ONCE(quarantine_max_size)))
>>>>               return;
>>>>
>>>> +     /*
>>>> +      * srcu critical section ensures that quarantine_remove_cache()
>>>> +      * will not miss objects belonging to the cache while they are in our
>>>> +      * local to_free list. srcu is chosen because (1) it gives us private
>>>> +      * grace period domain that does not interfere with anything else,
>>>> +      * and (2) it allows synchronize_srcu() to return without waiting
>>>> +      * if there are no pending read critical sections (which is the
>>>> +      * expected case).
>>>> +      */
>>>> +     srcu_idx = srcu_read_lock(&remove_cache_srcu);
>>>
>>> I'm puzzled: why SRCU, why not RCU? Given that we take a spin_lock on the
>>> next line, we certainly don't need the ability to sleep in the read-side
>>> critical section.
>>
>> I've explained it in the comment above.
>
> I've read it. It doesn't explain to me why SRCU is better than RCU here.
>  a) We can't sleep in the read-side critical section. Given that RCU is
>     almost always faster than SRCU, RCU seems preferable.
>  b) synchronize_rcu() indeed might take longer to complete. But does it
>     matter? We do synchronize_[s]rcu() only on cache destruction, which is
>     a relatively rare operation and not a hotpath. Performance of
>     quarantine_reduce() is more important.


As far as I understand, container destruction causes destruction of a bunch
of caches. synchronize_sched() caused serious problems on these paths in
the past, see commit 86d9f48534e800e4d62cdc1b5aaf539f4c1d47d6.
srcu_read_lock/unlock are not too expensive: a few atomic operations on
per-cpu variables, so cheaper than the existing spinlock. And this is
already not the fast-fast-path (which is kmalloc/free). But hundreds of
synchronize_rcu() calls in a row can cause hangs and panics; the fact that
it's a rare operation won't help. Also, if we free a substantial batch of
objects under the RCU read lock, it will affect the latency of all RCU
callbacks in the kernel, which can have undesired effects.
I am trying to make this more predictable and tolerant of unexpected
workloads, rather than sacrificing everything in the name of fast-path
performance.
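
To make this concrete, the two sides pair up roughly like this (a sketch
following the patch above, not the final code; qlist_free_all() and the
elided lock-protected bodies stand in for the real quarantine internals):

#include <linux/spinlock.h>
#include <linux/srcu.h>

DEFINE_STATIC_SRCU(remove_cache_srcu);
static DEFINE_SPINLOCK(quarantine_lock);

void quarantine_reduce(void)
{
        struct qlist_head to_free = QLIST_INIT;
        unsigned long flags;
        int srcu_idx;

        /*
         * Reader side: a few per-cpu atomics. Objects moved onto the
         * private to_free list stay covered by the srcu section until
         * they are actually freed.
         */
        srcu_idx = srcu_read_lock(&remove_cache_srcu);
        spin_lock_irqsave(&quarantine_lock, flags);
        /* ... move a batch of quarantined objects onto to_free ... */
        spin_unlock_irqrestore(&quarantine_lock, flags);

        qlist_free_all(&to_free, NULL);
        srcu_read_unlock(&remove_cache_srcu, srcu_idx);
}

void quarantine_remove_cache(struct kmem_cache *cache)
{
        unsigned long flags;

        spin_lock_irqsave(&quarantine_lock, flags);
        /* ... drop objects belonging to 'cache' from the global lists ... */
        spin_unlock_irqrestore(&quarantine_lock, flags);

        /*
         * Writer side: wait for any in-flight quarantine_reduce() that
         * may still hold objects of this cache on its to_free list.
         * synchronize_srcu() returns without waiting when no read-side
         * sections are pending, which is the expected case.
         */
        synchronize_srcu(&remove_cache_srcu);
}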
