On Tue, May 27, 2014 at 05:34:05AM +0000, Atsushi Kumagai wrote:

[..]

> >So to me the bottom line is that once the write-out starts, the kernel needs
> >memory for holding dirty and writeback pages in the cache too. So we are
> >probably being too aggressive in allocating 80% of free memory for bitmaps.
> >Maybe we should drop it down to 50-60% of free memory for bitmaps.
>
> I don't disagree with changing the 80% limit, but I would prefer to remove
> such a percentage threshold entirely, because it is dependent on the
> environment. Actually, I think it makes this problem more complex.
>
> Now, thanks to page_is_buddy(), the performance degradation caused by
> multi-cycle processing looks very small according to the benchmark on
> 2TB memory:
>
> https://lkml.org/lkml/2013/3/26/914
>
> This result means we don't need to make an effort to allocate the bitmap
> buffer as large as possible. So how about just setting a small fixed value
> like 5MB as a safety limit?
> It may be safer, and it will be easier to estimate the total memory usage
> of makedumpfile, so I think it's a better way if most users, especially
> large machine users, accept it.

Hi Atsushi,

If increasing the buffer size does not significantly increase dump time,
then it is reasonable to have a fixed buffer size for bitmaps (instead of
trying to maximize the bitmap size). We can probably go for 4MB as the
bitmap size (instead of 5MB).

Also, can we modify the logic a bit so that we automatically shrink the
size of the bitmap if sufficient memory is not available? Say we assume
that 60% of available memory can be used for bitmaps. If that is less than
4MB, then we drop the buffer size, and that hopefully still lets
makedumpfile succeed (instead of being OOM-killed).

Thanks
Vivek