On 1/7/19 9:06 PM, Qian Cai wrote:
>
>
> On 1/7/19 5:43 AM, Catalin Marinas wrote:
>> On Thu, Jan 03, 2019 at 06:07:35PM +0100, Michal Hocko wrote:
>>>>> On Wed 02-01-19 13:06:19, Qian Cai wrote:
>>>>> [...]
>>>>>> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
>>>>>> index f9d9dc250428..9e1aa3b7df75 100644
>>>>>> --- a/mm/kmemleak.c
>>>>>> +++ b/mm/kmemleak.c
>>>>>> @@ -576,6 +576,16 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
>>>>>>  	struct rb_node **link, *rb_parent;
>>>>>>  
>>>>>>  	object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
>>>>>> +#ifdef CONFIG_PREEMPT_COUNT
>>>>>> +	if (!object) {
>>>>>> +		/* last-ditch effort in a low-memory situation */
>>>>>> +		if (irqs_disabled() || is_idle_task(current) || in_atomic())
>>>>>> +			gfp = GFP_ATOMIC;
>>>>>> +		else
>>>>>> +			gfp = gfp_kmemleak_mask(gfp) | __GFP_DIRECT_RECLAIM;
>>>>>> +		object = kmem_cache_alloc(object_cache, gfp);
>>>>>> +	}
>>>>>> +#endif
>> [...]
>>> I will not object to this workaround but I strongly believe that
>>> kmemleak should rethink the metadata allocation strategy to be really
>>> robust.
>>
>> This would be nice indeed and it was discussed last year. I just haven't
>> got around to trying anything yet:
>>
>> https://marc.info/?l=linux-mm&m=152812489819532
>>
>
> It could be helpful to apply this 10-line patch first if it has no fundamental
> issue, as it survives roughly 50 runs of the LTP oom* workloads without a
> single kmemleak allocation failure.
>
> Of course, if someone is going to embed the kmemleak metadata into slab
> objects themselves soon, this workaround is not needed.
>

Well, it is hard to tell whether a redesign that embeds the kmemleak metadata
into slab objects would, on its own, survive the LTP oom* workloads, because
kmemleak would still need separately allocated metadata for non-slab objects,
where the allocation could fail just as it does today and disable kmemleak
again.
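
For reference, the current failure path in create_object() simply warns and
shuts kmemleak down when the metadata allocation fails; roughly (paraphrased
from mm/kmemleak.c, not a verbatim quote):

	object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
	if (!object) {
		/* metadata allocation failed: warn and turn kmemleak off */
		pr_warn("Cannot allocate a kmemleak_object structure\n");
		kmemleak_disable();
		return NULL;
	}

So any tracked allocation that still needs a separate kmemleak_object (for
example non-slab objects reported via kmemleak_alloc()) can hit this path
under memory pressure and disable kmemleak for good, whether or not slab
objects embed their own metadata.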