Hello Kumagai-san,

Thanks for the evaluation. I'll re-post the patch soon with the RFC prefix
removed from the header. But one fix is still remaining: checking the
command-line parameters related to the addition of the 'l' option.

What about the patch adding the compression/I/O time report? I intended it
only for the presentation of this RFC.

I'll post the corresponding patch on the crash side after crash 6.0.2 is
released, waiting for the new configuration editing feature as Dave has
explained.

Thanks.
HATAYAMA, Daisuke

From: Atsushi Kumagai <kumagai-atsushi@xxxxxxxxxxxxxxxxx>
Subject: Re: [RFC] makedumpfile, crash: LZO compression support
Date: Mon, 5 Dec 2011 17:50:55 +0900

>> > Hello Hatayama-san,
>> >
>> > Thank you for your work.
>> >
>> >> Performance Comparison:
>> >>
>> >> Sample Data
>> >>
>> >> Ideally, I should have measured the performance for enough vmcores
>> >> generated from machines that were actually running, but since I
>> >> don't have enough sample vmcores now, I couldn't do so. So this
>> >> comparison doesn't answer the question of I/O time improvement.
>> >> This is a TODO for now.
>> >
>> > I'll measure the performance of makedumpfile for actual vmcores.
>> > Please wait for a while.
>
> I measured the performance of makedumpfile for some vmcores.
> Please see below.
>
>
> Sample Data
>
> To simulate a working server, I captured VMCOREs while almost all pages
> were allocated and filled with random data. (See attached file "fill_random.c")
>
> I captured VMCOREs of 5GB, 7.5GB and 10GB under the same conditions.
>
> How to measure
>
> I measured the total execution time and the size of the output file.
>
> $ time makedumpfile --message-level 16 [-c|-l| ] vmcore dumpfile
>
> Result
>
> See attached file "result.txt".
>
>
> This time, LZO compression was the quickest, and LZO's compression ratio
> is almost the same as zlib's (only a bit worse).
> It seems good, and I will merge the patch set into makedumpfile.
>
> What is your opinion, Dave?
>
>
> Thanks.
> KUMAGAI, Atsushi
>
>>
>> That's very helpful. Thanks in advance.
>>
>> But of course I'm also still looking for an alternative way.
>>
>> Thanks.
>> HATAYAMA, Daisuke
>>
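
For reference, the two per-page compression calls being compared above are
lzo1x_1_compress() from liblzo2 (the new 'l' option) and compress2() from
zlib (the existing 'c' option). Below is a minimal, self-contained sketch of
those calls on a single page; it is not the posted patch, and the 4 KiB page
size, buffer sizes, test data, and zlib compression level are illustrative
assumptions.

/*
 * Minimal sketch (not the actual makedumpfile patch): compress one
 * dump page with LZO (lzo1x_1_compress) and with zlib (compress2).
 * Build with:  gcc page_compress_demo.c -llzo2 -lz
 */
#include <stdio.h>
#include <zlib.h>
#include <lzo/lzo1x.h>

#define PAGE_SIZE 4096	/* illustrative page size */

int main(void)
{
	static unsigned char page[PAGE_SIZE];
	/* LZO worst-case output: input + input/16 + 64 + 3 bytes. */
	static unsigned char out_lzo[PAGE_SIZE + PAGE_SIZE / 16 + 64 + 3];
	static unsigned char wrkmem[LZO1X_1_MEM_COMPRESS];
	static unsigned char out_zlib[PAGE_SIZE * 2];
	lzo_uint lzo_len = sizeof(out_lzo);
	uLongf zlib_len = sizeof(out_zlib);
	int i;

	/* Fill the page with some mildly compressible test data. */
	for (i = 0; i < PAGE_SIZE; i++)
		page[i] = i % 251;

	if (lzo_init() != LZO_E_OK) {
		fprintf(stderr, "lzo_init failed\n");
		return 1;
	}

	/* LZO1X-1: the fast compressor behind the new 'l' option. */
	if (lzo1x_1_compress(page, PAGE_SIZE, out_lzo, &lzo_len, wrkmem)
	    != LZO_E_OK) {
		fprintf(stderr, "lzo1x_1_compress failed\n");
		return 1;
	}

	/* zlib deflate, roughly what the existing 'c' option does
	 * (the compression level here is an assumption). */
	if (compress2(out_zlib, &zlib_len, page, PAGE_SIZE,
		      Z_BEST_SPEED) != Z_OK) {
		fprintf(stderr, "compress2 failed\n");
		return 1;
	}

	printf("page: %d bytes, lzo: %lu bytes, zlib: %lu bytes\n",
	       PAGE_SIZE, (unsigned long)lzo_len, (unsigned long)zlib_len);
	return 0;
}

LZO1X-1 trades a somewhat larger output for much lower CPU cost per page,
which matches the result above: LZO was the quickest while its compression
ratio was only a bit worse than zlib's.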