[PATCH] makedumpfile: change the wrong code to calculate bufsize_cyclic for elf dump

On Fri, 18 Apr 2014 22:29:12 +0800
"bhe at redhat.com" <bhe at redhat.com> wrote:

> 
> > >> It will definitely cause OOM. On my test machine, which has 100G of
> > >> memory, the old code computes needed_size as 3200K*2 == 6.4M. If only
> > >> 15M of memory is currently free, free_size will be 15M*0.4 == 6M, so
> > >> info->bufsize_cyclic is assigned 6M and only 3M is left for other
> > >> uses, e.g. page cache and dynamic allocation. OOM will happen.
> > >>
> > >
> > >BTW, in our case there was about 30M of free memory when we started
> > >saving the dump. The difference is probably due to my coarse estimate
> > >above.
> > 
> > Thanks for your description, I understand that situation and
> > the nature of the problem.
> > 
> > That is, the assumption that 20% of free memory is enough for
> > makedumpfile can be broken if free memory is too small.
> > If your machine has 200GB of memory, OOM will happen even after
> > fixing the over-allocation bug.
> 
> Well, we have done some experiments to determine the amount of memory
> kdump really needs. The final reservation is then calculated
> automatically as (base_value + linear growth with total memory).
> If a machine has 200GB of memory, its reservation grows accordingly,
> since, apart from the bitmap cost, the other memory costs are almost
> fixed.
> 
> Under this scheme things should go well; if memory always runs close
> to the edge of OOM, base_value needs adjusting. So a constant value,
> as you suggested, may not be needed.
> 
> Instead, I am wondering where the 80% comes from, and why 20% of free
> memory must be safe.
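For reference, the over-allocation described earlier in the thread can be sketched like this (an illustrative Python sketch of the arithmetic as I read it, not the actual makedumpfile C source; all names are mine):

```python
PAGE_SIZE = 4096
KiB, MiB, GiB = 1024, 1024 ** 2, 1024 ** 3

def bufsize_cyclic_old(total_ram, free_mem):
    """Per-bitmap cyclic buffer size as the old (buggy) code picks it.

    The bug as described: needed_size counts BOTH bitmaps, but the
    value it is compared against -- and the value finally assigned --
    is the size of ONE bitmap, so up to 80% of free memory ends up
    consumed by the two bitmaps together.
    """
    bitmap_size = total_ram // PAGE_SIZE // 8   # one bit per 4K page
    needed_size = bitmap_size * 2               # both bitmaps
    free_size = int(free_mem * 0.4)             # 40% of free, per bitmap
    return free_size if needed_size > free_size else bitmap_size

# The scenario from the thread: 100G of RAM, 15M currently free.
size = bufsize_cyclic_old(100 * GiB, 15 * MiB)  # 6M per bitmap
left = 15 * MiB - 2 * size                      # only 3M remains
```

With these numbers, one bitmap needs only 3200K, yet 6M per bitmap is allocated, leaving 3M of the 15M free for everything else.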

I believe the 80% comes from the default value of vm.dirty_ratio,
which is 20%. In other words, the kernel won't block further writes
until 20% of available RAM is used up by dirty cache. But if you
fill up all free memory with dirty pages and then touch another
(though already allocated) page, the kernel goes into direct reclaim,
and if nothing can be written out at that moment, it invokes the OOM
killer.
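To make that correspondence concrete, here is a back-of-the-envelope sketch (my own illustration of the reasoning, not anything from makedumpfile): the 20% that an 80% cap leaves free is exactly the share of memory the default dirty-ratio lets writers dirty before they are throttled, so dirty page cache alone can consume every remaining byte.

```python
def headroom_vs_dirty_limit(free_mem, buffer_share=0.8, dirty_ratio=20):
    """Compare what is left after the dump tool takes buffer_share of
    free memory with the dirty-page threshold over that same memory
    (default vm.dirty_ratio = 20)."""
    remaining = free_mem - int(free_mem * buffer_share)
    dirty_limit = free_mem * dirty_ratio // 100
    return remaining, dirty_limit

# 15M free, as in the report: leftover equals the dirty limit.
remaining, dirty_limit = headroom_vs_dirty_limit(15 * 1024 ** 2)
```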

I found that the actual requirements also depend on the target
device driver, the filesystem (e.g. NFS is considerably more
memory-hungry than ext3) and, not least, the number of online CPUs.

The calculation is quite complex, and that's probably why nobody has
done it properly yet.

Petr T


