On Fri, 30 Mar 2018 16:38:52 -0700
Joel Fernandes <joelaf@xxxxxxxxxx> wrote:

> > --- a/kernel/trace/ring_buffer.c
> > +++ b/kernel/trace/ring_buffer.c
> > @@ -1164,6 +1164,11 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
> >  	struct buffer_page *bpage, *tmp;
> >  	long i;
> >
> > +	/* Check if the available memory is there first */
> > +	i = si_mem_available();
> > +	if (i < nr_pages)
>
> Does it make sense to add a small margin here so that after ftrace
> finishes allocating, we still have some memory left for the system?
> But then we have to define a magic number :-|

I don't think so. The memory is allocated by user-defined numbers. They
can run "free" to see what is available. The original patch from
Zhaoyang was due to a script that would just try a very large number
and cause issues. If the memory is available, I just say let them have
it. This is a borderline user space issue, not a kernel one.

> > +
>
> I tested in Qemu with 1GB of memory, and I was always able to get the
> allocation to fail even without this patch, without causing an OOM.
> Maybe I am not running enough allocations in parallel or something :)

Try just echoing "1000000" into buffer_size_kb and see what happens.

> The patch you shared using si_mem_available() is working, since I am
> able to allocate up to the limit without a page allocation failure:
>
> bash-4.3# echo 237800 > /d/tracing/buffer_size_kb
> bash: echo: write error: Cannot allocate memory
> bash-4.3# echo 237700 > /d/tracing/buffer_size_kb
> bash-4.3# free -m
>              total       used       free     shared    buffers
> Mem:           985        977          7         10          0
> -/+ buffers:              977          7
> Swap:            0          0          0
> bash-4.3#
>
> I think this patch is still good to have, since IMO we should not get
> a page allocation failure (even if it is a non-OOM) and the subsequent
> stack dump from mm's allocator if we can avoid it.
>
> Tested-by: Joel Fernandes <joelaf@xxxxxxxxxx>

Great, thanks! I'll make it into a formal patch.

-- Steve
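
[Note: the hunk quoted above cuts off right after the si_mem_available()
test, so the failure branch is not visible in this message. Below is a
minimal sketch of how such a pre-allocation check can be wired into
__rb_allocate_pages(), assuming the insufficient-memory branch simply
returns -ENOMEM; that return value and the comment wording are
assumptions for illustration, not confirmed by the quote.]

	static int __rb_allocate_pages(long nr_pages, struct list_head *pages,
				       int cpu)
	{
		struct buffer_page *bpage, *tmp;
		long i;

		/*
		 * si_mem_available() only gives a rough estimate of how
		 * many pages the system can still hand out. Refusing the
		 * request up front when it is clearly larger than that
		 * avoids the page allocation failure, and the mm
		 * allocator's stack dump, that a doomed oversized request
		 * would otherwise trigger.
		 */
		i = si_mem_available();
		if (i < nr_pages)
			return -ENOMEM;	/* assumed error path; elided above */

		/* ... original page allocation loop continues here ... */
	}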