From: HATAYAMA Daisuke <d.hatayama@xxxxxxxxxxxxxx>
Subject: Re: makedumpfile 1.5.0 takes much more time to dump
Date: Fri, 21 Sep 2012 09:23:57 +0900

> From: Vivek Goyal <vgoyal at redhat.com>
> Subject: makedumpfile 1.5.0 takes much more time to dump
> Date: Thu, 20 Sep 2012 16:06:34 -0400
>
>> Hi Atsushi san,
>>
>> We tried makedumpfile 1.5.0 on a 1TB machine and it seems to regress
>> badly. We reserved 192MB of memory and the following are the test
>> results.
>>
>> #1. makedumpfile-1.4.2 -E --message-level 1 -d 31
>>     real  3m47.520s
>>     user  0m56.543s
>>     sys   2m41.631s
>>
>> #2. makedumpfile-1.5.0 -E --message-level 1 -d 31
>>     real  52m25.262s
>>     user  32m51.310s
>>     sys   18m53.265s
>>
>> #3. makedumpfile-1.4.2 -c --message-level 1 -d 31
>>     real  8m49.107s
>>     user  4m34.180s
>>     sys   4m8.691s
>>
>> #4. makedumpfile-1.5.0 -c --message-level 1 -d 31
>>     real  46m48.985s
>>     user  29m35.203s
>>     sys   16m43.149s
>
> Hello Vivek,
>
> On v1.5.0 we cannot filter free pages in constant space because that
> path has not been tested yet. Instead, the existing method is used
> here, which repeats the walk over all page frames once per cycle,
> i.e. as many times as there are cycles.
>
> As Kumagai-san explains, the number of cycles can be calculated by
> the following expression:
>
>   N = physical memory size / (page size * bits per byte (8) * BUFSIZE_CYCLIC)
>
> So:
>
>   N = 2TB / (4KB * 8 * 1MB) = 64 cycles
>
> My guess is that on this environment it took about 50 seconds to
> filter free pages in one cycle.

I noticed a careless mistake: 1TB is correct in your case.

  N = 1TB / (4KB * 8 * 1MB) = 32 cycles

So, about 95 seconds for one cycle?

Thanks.
HATAYAMA, Daisuke
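
P.S. For reference, a minimal sketch of the cycle arithmetic above, in C.
This is not makedumpfile's actual code: the constants (4 KB pages, a 1 MB
BUFSIZE_CYCLIC bitmap buffer) and the 52m25.262s runtime are taken from
this thread, and attributing the whole runtime to free-page filtering is
an assumption, which is why the result comes out a little above the ~95 s
estimate.

#include <stdio.h>

#define PAGE_SIZE      4096ULL        /* 4 KB pages (assumed, from thread) */
#define BITS_PER_BYTE  8ULL
#define BUFSIZE_CYCLIC (1ULL << 20)   /* 1 MB cyclic bitmap buffer */

int main(void)
{
    unsigned long long phys_mem = 1ULL << 40;   /* 1 TB physical memory */

    /* Each bitmap bit tracks one page, so one cyclic pass covers
     * page size * 8 * buffer size bytes of physical memory. */
    unsigned long long covered = PAGE_SIZE * BITS_PER_BYTE * BUFSIZE_CYCLIC;

    /* Round up so a partial last cycle is still counted. */
    unsigned long long cycles = (phys_mem + covered - 1) / covered;

    /* Assumption: charge the whole 1.5.0 -E runtime to filtering. */
    double total_sec = 52 * 60 + 25.262;        /* real 52m25.262s */

    printf("memory covered per cycle: %llu GB\n", covered >> 30);
    printf("cycles for 1 TB:          %llu\n", cycles);
    printf("approx. seconds/cycle:    %.1f\n", total_sec / cycles);
    return 0;
}

Running it prints 32 GB covered per cycle, 32 cycles for 1 TB, and about
98 seconds per cycle, matching the figures in the mail.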