On Tue, Apr 11, 2017 at 7:19 AM, Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> I would do something like...
> ---
> diff --git a/mm/slab.c b/mm/slab.c
> index bd63450a9b16..87c99a5e9e18 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -393,10 +393,15 @@ static inline void set_store_user_dirty(struct kmem_cache *cachep) {}
>  static int slab_max_order = SLAB_MAX_ORDER_LO;
>  static bool slab_max_order_set __initdata;
>
> +static inline struct kmem_cache *page_to_cache(struct page *page)
> +{
> +	return page->slab_cache;
> +}
> +
>  static inline struct kmem_cache *virt_to_cache(const void *obj)
>  {
>  	struct page *page = virt_to_head_page(obj);
> -	return page->slab_cache;
> +	return page_to_cache(page);
>  }
>
>  static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
> @@ -3813,14 +3818,18 @@ void kfree(const void *objp)
>  {
>  	struct kmem_cache *c;
>  	unsigned long flags;
> +	struct page *page;
>
>  	trace_kfree(_RET_IP_, objp);
>
>  	if (unlikely(ZERO_OR_NULL_PTR(objp)))
>  		return;
> +	page = virt_to_head_page(objp);
> +	if (CHECK_DATA_CORRUPTION(!PageSlab(page)))
> +		return;
>  	local_irq_save(flags);
>  	kfree_debugcheck(objp);
> -	c = virt_to_cache(objp);
> +	c = page_to_cache(page);
>  	debug_check_no_locks_freed(objp, c->object_size);
>
>  	debug_check_no_obj_freed(objp, c->object_size);

Sorry for the delay; I've finally had time to look at this again.

So, this only handles the kfree() case, not the kmem_cache_free() or
kmem_cache_free_bulk() cases, so it misses all the non-kmalloc
allocations (and kfree() ultimately calls down to kmem_cache_free()).
Similarly, my proposed patch missed the kfree() path. :P

As I work on a replacement, is the goal to avoid the checks while under
local_irq_save()? (i.e. I can't just put the check in virt_to_cache(),
etc.)

-Kees

--
Kees Cook
Pixel Security

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx. For more info on Linux MM,
see: http://www.linux-mm.org/ .