Thank you for the review!

> > +
> > +/* acquire per-object lock for access to KASAN metadata. */
>
> I believe there's a strong reason not to use standard spin_lock() or
> similar. I think this is the proper place to explain it.

Will do.

> > +void kasan_meta_lock(struct kasan_alloc_meta *alloc_info)
> > +{
> > +	union kasan_alloc_data old, new;
> > +
> > +	preempt_disable();
>
> It's better to disable and enable preemption inside the loop
> on each iteration, to decrease contention.

OK, makes sense; will do.

> > +	for (;;) {
> > +		old.packed = READ_ONCE(alloc_info->data);
> > +		if (unlikely(old.lock)) {
> > +			cpu_relax();
> > +			continue;
> > +		}
> > +		new.packed = old.packed;
> > +		new.lock = 1;
> > +		if (likely(cmpxchg(&alloc_info->data, old.packed, new.packed)
> > +				== old.packed))
> > +			break;
> > +	}
> > +}
> > +
> > +/* release lock after a kasan_meta_lock(). */
> > +void kasan_meta_unlock(struct kasan_alloc_meta *alloc_info)
> > +{
> > +	union kasan_alloc_data alloc_data;
> > +
> > +	alloc_data.packed = READ_ONCE(alloc_info->data);
> > +	alloc_data.lock = 0;
> > +	if (unlikely(xchg(&alloc_info->data, alloc_data.packed) !=
> > +			(alloc_data.packed | 0x1U)))
> > +		WARN_ONCE(1, "%s: lock not held!\n", __func__);
>
> Nitpick: it never happens in the normal case, correct? Why not place it
> under some developer config, or even leave it in a dev branch? The function
> will be half as long without it.

OK, will remove/shorten.

> > +	alloc_data.packed = alloc_info->data;
> > +	if (alloc_data.state == KASAN_STATE_ALLOC) {
> > +		free_info = get_free_info(cache, object);
> > +		quarantine_put(free_info, cache);
>
> I just pulled master and didn't find this function. If your patchset is
> based on another branch, please note it.

Sorry; the patchset is based on linux-next 'next-20160506', which has
Alexander Potapenko's patches for KASAN SLAB support with the memory
quarantine and stackdepot features.

> > +		set_track(&free_info->track, GFP_NOWAIT);
>
> It may fail for many reasons.
> Is it OK to ignore it? If OK, I think it should be explained.

It's OK to ignore: a subsequent bug report on the object would simply be
missing an alloc/dealloc stack trace.

> > +		kasan_poison_slab_free(cache, object);
> > +		alloc_data.state = KASAN_STATE_QUARANTINE;
> > +		alloc_info->data = alloc_data.packed;
> > +		kasan_meta_unlock(alloc_info);
> > +		return true;
> > 	}
> > +	switch (alloc_data.state) {
> > +	case KASAN_STATE_QUARANTINE:
> > +	case KASAN_STATE_FREE:
> > +		kasan_report((unsigned long)object, 0, false,
> > +			(unsigned long)__builtin_return_address(1));
>
> __builtin_return_address() is unsafe if the argument is non-zero. Use
> return_address() instead.

Hmm, I/cscope can't seem to find an x86 implementation of return_address().
Will dig further; thanks.

> > +	local_irq_save(flags);
> > +	kasan_meta_lock(alloc_info);
> > +	alloc_data.packed = alloc_info->data;
> > +	alloc_data.state = KASAN_STATE_ALLOC;
> > +	alloc_data.size_delta = cache->object_size - size;
> > +	alloc_info->data = alloc_data.packed;
> > 	set_track(&alloc_info->track, flags);
>
> Same as above.

As above.

Kuthonuzo

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
majordomo@xxxxxxxxx. For more info on Linux MM, see: http://www.linux-mm.org/ .