Reducing the size of the dump file/speeding up collection

On 09/16/2015 04:30 PM, Nikolay Borisov wrote:
> Hello,
>
> I've been using makedumpfile as the crash collector with the -d31
> parameter. The machines this is run on usually have 128-256 GB of RAM,
> and the resulting crash dumps are in the range of 14-20 GB, which is
> very big for the type of analysis I usually perform on a crashed
> machine. I was wondering whether there is a way to further reduce the
> size of the dump and the time it takes to collect one (currently around
> 25 minutes). I've seen reports of people with terabytes of RAM taking
> that long, so for a machine with 256 GB it should be even faster. I've
> been running this configuration on kernels 3.12.28 and 4.1, where mmap
> on the vmcore file is supported.
>
> Please advise

Hi Nikolay,

Yes, this is an issue we are quite concerned about. For the current
situation, try --split; writing the dump to multiple files in parallel
will save time.
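
A minimal sketch of how that could look (the vmcore path and output
file names are only placeholders; --split takes two or more DUMPFILE
arguments and writes them in parallel):

    # write the filtered, compressed dump to three files in parallel
    makedumpfile -d 31 -c --split /proc/vmcore \
        /var/crash/dump1 /var/crash/dump2 /var/crash/dump3

If your analysis tool wants a single file, the pieces can be merged
back afterwards with makedumpfile --reassemble.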


Also consider lzo or snappy instead of zlib; these two compression
formats are faster but produce larger dump files. Or, if you still want
zlib (to save space), try compressing with multiple threads; the patch
set in the following thread will help you (a rough example follows the
link):

https://lists.fedoraproject.org/pipermail/kexec/2015-September/002322.html
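
As a rough example (assuming -c/-l/-p select zlib/lzo/snappy as in the
makedumpfile man page, and that --num-threads is the option added by
the patch set above; the thread count and paths are just placeholders):

    # faster compression, somewhat larger file: lzo (-l) or snappy (-p)
    makedumpfile -d 31 -l /proc/vmcore /var/crash/dump.lzo

    # keep zlib (-c) for the smallest file, but compress with 4 threads
    makedumpfile -d 31 -c --num-threads 4 /proc/vmcore /var/crash/dump.zlib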


-- 
Regards
Qiao Nuohan

>
> Regards,
> Nikolay
>




