> > Well, we have done some experiments to try to get the statistical
> > memory range which kdump really needs. Then a final reservation is
> > calculated automatically as (base_value + linear growth with total
> > memory). If one machine has 200GB of memory, its reservation will
> > grow too, since apart from the bitmap cost the other memory costs
> > are almost fixed.
> >
> > Under this scheme things should go well; if memory still goes to
> > the edge of OOM, an adjustment of base_value is needed. So a
> > constant value as you said may not be needed.
>
> That logic is old and we should probably get rid of it at some point.
> We don't want makedumpfile's memory usage to go up just because the
> system has more physical RAM. That's why cyclic mode was introduced.
>
> >
> > Instead, I am wondering where the 80% comes from, and why keeping
> > 20% of free memory is necessarily safe.
>
> I came up with this 80% number randomly. So you think that's the
> problem?

As Atsushi said, when the amount of memory goes up, the bitmap grows
too. Cyclic mode should solve this problem, but it seems it doesn't. In
the bug we found, the whole bitmap size always ends up as the cyclic
buffer size, and that is because of the 80% number. We are trying to
add debug code to see what is going on while makedumpfile runs.

> I am still scratching my head about why 30MB is not sufficient for
> makedumpfile.
>
> Thanks
> Vivek
>
> _______________________________________________
> kexec mailing list
> kexec at lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec
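
For concreteness, here is a rough standalone model of the cyclic buffer
sizing being discussed, using the figures from this thread (200GB of
RAM, about 30MB free in the kdump kernel). It is only a sketch based on
the discussion, not the actual makedumpfile source; the names
(max_mapnr, bufsize_cyclic, BITPERBYTE) follow makedumpfile's
conventions, but the real calculation may differ in detail.

#include <stdio.h>

#define BITPERBYTE 8ULL

int main(void)
{
    /* Figures taken from the thread: 200GB of RAM in the crashed
     * system, roughly 30MB of free memory in the kdump kernel. */
    unsigned long long total_ram   = 200ULL << 30;  /* bytes */
    unsigned long long free_memory = 30ULL << 20;   /* bytes */
    unsigned long long page_size   = 4096;
    unsigned long long max_mapnr   = total_ram / page_size;

    /* Assumed here: two bitmaps (1st and 2nd), one bit per page each,
     * so the bitmap cost grows linearly with total RAM. */
    unsigned long long bitmap_size = max_mapnr * 2 / BITPERBYTE;

    /* The "80% number": cap the cyclic buffer at 80% of free memory. */
    unsigned long long limit_size = free_memory * 80 / 100;

    /* Whichever is smaller becomes the cyclic buffer size. When the
     * cap is larger than the whole bitmap, the full bitmap is
     * allocated in one go and cyclic mode never actually cycles. */
    unsigned long long bufsize_cyclic =
        bitmap_size < limit_size ? bitmap_size : limit_size;

    printf("bitmap_size    = %llu KiB\n", bitmap_size >> 10);
    printf("limit_size     = %llu KiB (80%% of free)\n", limit_size >> 10);
    printf("bufsize_cyclic = %llu KiB\n", bufsize_cyclic >> 10);

    return 0;
}

With these numbers the 80% cap comes to about 24MB while the two
bitmaps for 200GB come to about 12.5MB, so the whole bitmap is taken as
the cyclic buffer in a single cycle, leaving only the remaining free
memory for everything else makedumpfile allocates, which matches the
behaviour described above.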