makedumpfile utility optimization

>Atsushi,
>
>Thanks a lot for a quick reply.
>
>These results are on the following processor: E5504  @ 2.00GHz
>We are running with SMP disabled (to be on the conservative side) so only
>one CPU.
>The dumpable pages were ~50000 out of 0x3c0000 (16G of RAM).
>After dumping about 50000, it took more than 30 minutes to evaluate the
>rest of the pages so our watchdog fired.
>I put a print statement to print progress after processing every 10,000
>pfns and I noticed that it was taking approximately 5 seconds to process
>10,000 pfns (there must be something else going on that I'll need to look
>into).

Thanks for your report, that sounds like a good improvement.

>Anyway, thanks for the confirmation that it is safe to use my patch.
>Do you want me to commit my patch to the source of makedumpfile?

Yeah, I'd be happy to take your patch.
For that, I would like you to rebase it on the devel branch:

  https://sourceforge.net/p/makedumpfile/code/ci/devel/tree/
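
For example, something like this should work (the clone URL is my
assumption based on SourceForge's usual git layout, so adjust it if
yours differs):

  $ git clone https://git.code.sf.net/p/makedumpfile/code makedumpfile
  $ cd makedumpfile
  $ git checkout devel
  $ git am /path/to/your.patch   # re-apply your change on top of devel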

Recent makedumpfile has a multithread feature that calls write_kdump_pages_parallel_cyclic()
instead of write_kdump_pages_cyclic(), so you will need to make the same
change there too.
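
As an untested sketch, the hunk there could mirror your change. I'm
assuming the parallel path uses the same num_dumped/num_dumpable
counters and an out label, and that the check runs on the thread which
updates num_dumped, so please verify against the devel source:

+               /*
+                * As in write_kdump_pages_cyclic(), stop scanning once
+                * every dumpable page has been written.
+                */
+               if (num_dumped == info->num_dumpable) {
+                       ret = TRUE;
+                       goto out;
+               }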


Thanks,
Atsushi Kumagai

>-Hemanth
>
>
>On 4/14/16, 10:24 PM, "Atsushi Kumagai" <ats-kumagai at wm.jp.nec.com> wrote:
>
>>(CC'ing kexec-ML)
>>
>>Hello Hemanth,
>>
>>>Hi 'makedumpfile' utility developers,
>>>
>>>I'm using version 1.5.6 and I see that we can optimize the utility using
>>>this patch:
>>>
>>>--- makedumpfile-1.5.6/makedumpfile.c   2014-04-20 18:59:18.000000000 -0700
>>>+++ makedumpfile-1.5.6-changed/makedumpfile.c   2016-04-11 18:47:50.019563738 -0700
>>>@@ -6475,6 +6475,15 @@
>>>
>>>        for (pfn = start_pfn; pfn < end_pfn; pfn++) {
>>>
>>>+               /*
>>>+                * There's no point in checking other pages if we've
>>>+                * already dumped all the pages that are dumpable.
>>>+                */
>>>+               if (num_dumped == info->num_dumpable) {
>>>+                       ret = TRUE;
>>>+                       goto out;
>>>+               }
>>>+
>>>                if ((num_dumped % per) == 0)
>>>                        print_progress(PROGRESS_COPY, num_dumped, info->num_dumpable);
>>>
>>>Why do we keep looping even after all the dumpable pages have been
>>>written?
>>>I'm concerned that I might be missing something with this patch.
>>
>>You are right, it's better to break out of the loop after the last
>>dumpable page is written. I neglected that because the remainder of the
>>loop just checks the bitmap and calls continue, so I assumed the wasted
>>processing cost would be small. Roughly, the tail of the loop looks like
>>this (simplified, with approximate names):
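>>
>>	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
>>		if (!is_dumpable(info->bitmap2, pfn))
>>			continue;	/* one bitmap test per skipped pfn */
>>		/* ... write out the dumpable page ... */
>>	}
>>
>>Each remaining pfn costs only a bitmap lookup, but as you saw, with
>>millions of pfns even that adds up.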
>>I'm curious to know how much this patch improves performance.
>>
>>
>>Thanks,
>>Atsushi Kumagai

