[PATCH v8 9/9] vmcore: support mmap() on /proc/vmcore

On Mon, 1 Jul 2013 18:34:43 +0400 Maxim Uvarov <muvarov at gmail.com> wrote:

> 2013/7/1 HATAYAMA Daisuke <d.hatayama at jp.fujitsu.com>
> 
> > (2013/06/29 1:40), Maxim Uvarov wrote:
> >
> >> I ran a test on a 1 TB machine. The total vmcore capture and save took
> >> 143 minutes, while the vmcore size increased from 9 GB to 59 GB.
> >>
> >> Will do some debug for that.
> >>
> >> Maxim.
> >>
> >
> > Please show me your kdump configuration file and tell me what you did in
> > the test and how you confirmed the result.
> >
> >
> Hello Hatayama,
> 
> I re-ran the tests in my dev environment. I took your latest kernel
> patchset for vmcore from patchwork, plus the devel branch of makedumpfile
> with a fix to open and write to /dev/null. I ran this test on a 1 TB
> memory machine, with some of the memory in use by user-space processes,
> and crashkernel=384M.
> 
> Please see my results for the makedumpfile runs:
> [gzip compression]
> -c -d31 /dev/null
> real 37.8 m
> user 29.51 m
> sys 7.12 m
> 
> [no compression]
> -d31 /dev/null
> real 27 m
> user 23 m
> sys   4 m
> 
> [no compression, disable cyclic mode]
> -d31 --non-cyclic /dev/null
> real 26.25 m
> user 23 m
> sys 3.13 m
> 
> [gzip compression]
> -c -d31 /dev/null
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  54.75   38.840351         110    352717           mmap
>  44.55   31.607620          90    352716         1 munmap
>   0.70    0.497668           0  25497667           brk
>   0.00    0.000356           0    111920           write
>   0.00    0.000280           0    111904           lseek
>   0.00    0.000025           4         7           open
>   0.00    0.000000           0       473           read
>   0.00    0.000000           0         7           close
>   0.00    0.000000           0         3           fstat
>   0.00    0.000000           0         1           getpid
>   0.00    0.000000           0         1           execve
>   0.00    0.000000           0         1           uname
>   0.00    0.000000           0         2           unlink
>   0.00    0.000000           0         1           arch_prctl
> ------ ----------- ----------- --------- --------- ----------------
> 100.00   70.946300              26427420         1 total
> 

I have no point of comparison here.  Is this performance good, or is
the mmap-based approach still a lot more expensive?
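One way to put the strace summary above into perspective is to relate the per-syscall totals to the wall-clock time of the same run. A rough sketch of that arithmetic, using only the figures quoted above (mmap/munmap seconds and the 37.8-minute real time of the gzip run):

```python
# Figures copied from the strace -c summary above (gzip compression run).
mmap_s = 38.840351          # total seconds spent in mmap
munmap_s = 31.607620        # total seconds spent in munmap
total_syscall_s = 70.946300 # total seconds across all syscalls

# mmap + munmap dominate the syscall time (~99%).
mmap_share = (mmap_s + munmap_s) / total_syscall_s

# Wall-clock time of the same run: 37.8 minutes.
real_s = 37.8 * 60

# But syscalls as a whole are only a few percent of wall-clock time,
# so most of the run is spent elsewhere (e.g. compression, copying).
syscall_share = total_syscall_s / real_s

print(f"mmap+munmap: {mmap_share:.1%} of syscall time")
print(f"syscalls:    {syscall_share:.1%} of wall-clock time")
```

This is just a back-of-the-envelope reading of the numbers already posted, not a profile of makedumpfile itself.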




