makedumpfile memory usage grows with system memory size

Hello Ken'ichi-san,

I was talking to Vivek about kdump memory requirements and he mentioned
that they vary based on how much system memory the machine has.

I was interested in knowing why that was, and again he mentioned that
makedumpfile needs a lot of memory if it is running on a large machine
(for example, one with 1 TB of system memory).

Looking through the makedumpfile README and going on what Vivek remembered,
we gathered that as the number of pages grows, makedumpfile has to
temporarily store more information about them in memory.  The possible
reason was to calculate the size of the dump file before it is copied to
its final destination?
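
To put a rough number on it (my own back-of-the-envelope figures, assuming
4 KB pages and a single bit of state kept per page frame, not anything
stated in the README):

    1 TB / 4 KB per page               = 268,435,456 page frames
    268,435,456 bits / 8 bits per byte = ~32 MB per bitmap

so anything that keeps even one bit per page frame grows linearly with
system memory.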

I was curious whether that was true and, if so, whether it would be
possible to process memory in chunks instead of all at once.
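
For what it is worth, here is a rough sketch in plain C of what I mean by
chunked processing.  This is entirely my own illustration, not
makedumpfile's actual code; the page-selection policy and the write-out
step are stubs, and the chunk size is an arbitrary number I picked.  The
point is that the scratch bitmap covers only one window of page frames at
a time and is reused, so the working set is the same whether the machine
has 4 GB or 1 TB.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE    4096ULL
#define CHUNK_PAGES  (1ULL << 20)        /* 1M page frames per window   */
#define BITMAP_BYTES (CHUNK_PAGES / 8)   /* => 128 KB of scratch memory */

/* Stub: decide whether a page frame belongs in the dump. */
static int page_is_dumpable(uint64_t pfn)
{
    return pfn % 2 == 0;                 /* placeholder policy */
}

int main(void)
{
    uint64_t total_mem = 1ULL << 40;     /* pretend the box has 1 TB */
    uint64_t max_pfn   = total_mem / PAGE_SIZE;
    unsigned char *bitmap = malloc(BITMAP_BYTES);

    if (!bitmap)
        return 1;

    for (uint64_t start = 0; start < max_pfn; start += CHUNK_PAGES) {
        uint64_t end = start + CHUNK_PAGES;
        if (end > max_pfn)
            end = max_pfn;

        memset(bitmap, 0, BITMAP_BYTES); /* reuse the same small buffer */

        for (uint64_t pfn = start; pfn < end; pfn++) {
            uint64_t off = pfn - start;
            if (page_is_dumpable(pfn))
                bitmap[off / 8] |= (unsigned char)(1u << (off % 8));
        }

        /* A real tool would now write this window's pages out and move
         * on; memory use never exceeds one window's worth of state.   */
    }

    free(bitmap);
    printf("scanned %llu page frames with a %llu-byte scratch bitmap\n",
           (unsigned long long)max_pfn, (unsigned long long)BITMAP_BYTES);
    return 0;
}

With a 1M-page window the scratch bitmap stays at 128 KB no matter how
much RAM the system has; only the number of passes over memory changes.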

The idea is that a machine with 4 GB of memory should consume the same
amount of kdump runtime memory as a 1 TB system.

Just trying to research ways to keep the memory requirements consistent
across all system memory sizes.

Thanks,
Don



