Hi Ken'ichi-san,

On Thu, Mar 29, 2012 at 05:09:18PM +0900, Ken'ichi Ohmichi wrote:
>
> Hi Don-san,
>
> On Wed, 28 Mar 2012 17:22:04 -0400
> Don Zickus <dzickus at redhat.com> wrote:
> >
> > I was talking to Vivek about kdump memory requirements, and he mentioned
> > that they vary based on how much system memory the machine has.
> >
> > I was interested in knowing why that was, and again he mentioned that
> > makedumpfile needs a lot of memory when it runs on a large machine
> > (for example, one with 1TB of system memory).
> >
> > Looking through the makedumpfile README and using what Vivek remembered
> > of makedumpfile, we gathered that as the number of pages grows,
> > makedumpfile has to temporarily store more information in memory.  The
> > possible reason was to calculate the size of the file before it was
> > copied to its final destination?
>
> On RHEL, makedumpfile uses the 2nd kernel's system memory for a bitmap.
> The bitmap represents whether each page of the 1st kernel is excluded
> or not, so the bitmap size depends on the 1st kernel's system memory.
>
> makedumpfile creates a file /tmp/kdump_bitmapXXXXXX as the bitmap, and
> on RHEL that file lives in the 2nd kernel's memory, because RHEL does
> not mount a root filesystem while the 2nd kernel is running.

Ok.

>
> > I was curious if that was true and, if it was, whether it would be
> > possible to process memory in chunks instead of all at once.
> >
> > The idea is that a machine with 4 gigs of memory should consume the
> > same amount of kdump runtime memory as a 1TB memory system.
> >
> > I am just trying to research ways to keep the memory requirements
> > consistent across all memory ranges.
>
> I think the above purpose is good, but I don't have any idea for reducing
> the bitmap size.  Also, I am no longer involved in makedumpfile
> development.  Kumagai-san is the makedumpfile maintainer now, and he will
> help you.

Thanks for the feedback, I'll wait for Kumagai-san's response then.

Cheers,
Don
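
For a rough sense of scale (this calculation is not from the thread itself):
with 4 KiB pages and one bit per page of 1st-kernel memory, a bitmap works
out to roughly 128 KiB for a 4 GiB machine and roughly 32 MiB for a 1 TB
machine.  makedumpfile actually keeps two such bitmaps and has other
overhead, so treat the sketch below only as an order-of-magnitude
illustration:

    /* Back-of-envelope bitmap sizing: assumes 4 KiB pages and one bit
     * per page.  The RAM sizes are just sample points; makedumpfile's
     * real memory use (two bitmaps, headers, caches) is larger. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long ram_bytes[] = { 4ULL << 30, 1024ULL << 30 };
        unsigned long long page_size = 4096;

        for (int i = 0; i < 2; i++) {
            unsigned long long pages  = ram_bytes[i] / page_size;
            unsigned long long bitmap = pages / 8;  /* 1 bit per page */
            printf("%8llu MiB RAM -> %10llu pages -> %6llu KiB per bitmap\n",
                   ram_bytes[i] >> 20, pages, bitmap >> 10);
        }
        return 0;
    }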