Reducing the size of the dump file/speeding up collection

Hello,

I've been using makedumpfile as the crash collector with the -d 31
parameter. The machines this is run on usually have 128-256 GB of RAM,
and the resulting crash dumps are in the range of 14-20 GB, which is
very big for the type of analysis I usually perform on a crashed
machine. I was wondering whether there is a way to further reduce the
size of the dump and the time it takes to collect it (currently around
25 minutes). I've seen reports of people with terabytes of RAM taking
that long, which suggests a machine with 256 GB should be even faster.
I've been running this configuration on kernels 3.12.28 and 4.1, where
mmap on the vmcore file is supported.
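For reference, the current invocation looks roughly like the first
command below; the second line is just a sketch of adding LZO
compression via -l to shrink the file (paths are illustrative, and
whether LZO is available depends on how makedumpfile was built):

    # current collector invocation: dump level 31, no compression
    makedumpfile -d 31 /proc/vmcore /var/crash/vmcore

    # sketch: same dump level plus LZO compression (-l),
    # assuming this makedumpfile build has LZO support compiled in
    makedumpfile -l -d 31 /proc/vmcore /var/crash/vmcore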

Please advise

Regards,
Nikolay


