Re: [PATCH] memcg: add pgfault latency histograms

On Thu, 26 May 2011 21:45:28 -0700
Ying Han <yinghan@xxxxxxxxxx> wrote:

> On Thu, May 26, 2011 at 7:11 PM, KAMEZAWA Hiroyuki
> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> > On Thu, 26 May 2011 18:40:44 -0700
> > Ying Han <yinghan@xxxxxxxxxx> wrote:
> >
> >> On Thu, May 26, 2011 at 5:31 PM, KAMEZAWA Hiroyuki
> >> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> >> > On Thu, 26 May 2011 17:23:20 -0700
> >> > Ying Han <yinghan@xxxxxxxxxx> wrote:
> >> >
> >> >> On Thu, May 26, 2011 at 5:05 PM, KAMEZAWA Hiroyuki <
> >> >> kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> >> >>
> >> >> > On Thu, 26 May 2011 14:07:49 -0700
> >> >> > Ying Han <yinghan@xxxxxxxxxx> wrote:
> >> >> >
> >> >> > > This adds a histogram to capture page fault latencies on a per-memcg
> >> >> > > basis. I used this patch on the memcg background reclaim test, and
> >> >> > > figured there could be more use cases to monitor/debug application
> >> >> > > performance.
> >> >> > >
> >> >> > > The histogram is composed of 8 buckets in ns units. The last one is
> >> >> > > infinite (inf), which covers everything beyond the previous bucket.
> >> >> > > To be more flexible, the buckets can be reset and each bucket boundary
> >> >> > > is configurable at runtime.
> >> >> > >
> >> >> > > memory.pgfault_histogram: exports the histogram on a per-memcg basis
> >> >> > > and can also be reset by echoing "reset". Meanwhile, all the bucket
> >> >> > > boundaries are writable by echoing the ranges into the API. See the
> >> >> > > example below.
> >> >> > >
> >> >> > > /proc/sys/vm/pgfault_histogram: the global sysctl tunable can be used
> >> >> > > to turn recording of the histogram on/off.
> >> >> > >
> >> >> > > Functional Test:
> >> >> > > Create a memcg with a 10g hard_limit, run dd, and allocate 8g of anon pages.
> >> >> > > Measure the anon page allocation latency.
> >> >> > >
> >> >> > > $ mkdir /dev/cgroup/memory/B
> >> >> > > $ echo 10g >/dev/cgroup/memory/B/memory.limit_in_bytes
> >> >> > > $ echo $$ >/dev/cgroup/memory/B/tasks
> >> >> > > $ dd if=/dev/zero of=/export/hdc3/dd/tf0 bs=1024 count=20971520 &
> >> >> > > $ allocate 8g anon pages
> >> >> > >
> >> >> > > $ echo 1 >/proc/sys/vm/pgfault_histogram
> >> >> > >
> >> >> > > $ cat /dev/cgroup/memory/B/memory.pgfault_histogram
> >> >> > > pgfault latency histogram (ns):
> >> >> > > < 600            2051273
> >> >> > > < 1200           40859
> >> >> > > < 2400           4004
> >> >> > > < 4800           1605
> >> >> > > < 9600           170
> >> >> > > < 19200          82
> >> >> > > < 38400          6
> >> >> > > < inf            0
> >> >> > >
> >> >> > > $ echo reset >/dev/cgroup/memory/B/memory.pgfault_histogram
> >> >> > > $ cat /dev/cgroup/memory/B/memory.pgfault_histogram
> >> >> > > pgfault latency histogram (ns):
> >> >> > > < 600            0
> >> >> > > < 1200           0
> >> >> > > < 2400           0
> >> >> > > < 4800           0
> >> >> > > < 9600           0
> >> >> > > < 19200          0
> >> >> > > < 38400          0
> >> >> > > < inf            0
> >> >> > >
> >> >> > > $ echo 500 520 540 580 600 1000 5000 >/dev/cgroup/memory/B/memory.pgfault_histogram
> >> >> > > $ cat /dev/cgroup/memory/B/memory.pgfault_histogram
> >> >> > > pgfault latency histogram (ns):
> >> >> > > < 500            50
> >> >> > > < 520            151
> >> >> > > < 540            3715
> >> >> > > < 580            1859812
> >> >> > > < 600            202241
> >> >> > > < 1000           25394
> >> >> > > < 5000           5875
> >> >> > > < inf            186
> >> >> > >
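(Purely as an illustration of the interface semantics above, not the code
from the patch itself: a small userspace C sketch of a histogram with the
default 600ns..38400ns buckets plus "inf", a "reset" that clears the
counters, and reconfigurable bucket boundaries. All names below are
hypothetical.)

  #include <stdio.h>
  #include <string.h>

  #define PGFAULT_BUCKETS 8

  struct pgfault_hist {
  	/* Upper bounds in ns; the last slot stands for "inf". */
  	unsigned long long upper_ns[PGFAULT_BUCKETS];
  	unsigned long long count[PGFAULT_BUCKETS];
  };

  /* Default boundaries, matching the example output above. */
  static void pgfault_hist_init(struct pgfault_hist *h)
  {
  	static const unsigned long long defaults[PGFAULT_BUCKETS - 1] = {
  		600, 1200, 2400, 4800, 9600, 19200, 38400
  	};

  	memset(h, 0, sizeof(*h));
  	memcpy(h->upper_ns, defaults, sizeof(defaults));
  	h->upper_ns[PGFAULT_BUCKETS - 1] = ~0ULL;	/* "inf" */
  }

  /* What "echo reset" would do: clear counters, keep boundaries. */
  static void pgfault_hist_reset(struct pgfault_hist *h)
  {
  	memset(h->count, 0, sizeof(h->count));
  }

  /* One fault latency lands in the first bucket whose bound exceeds it. */
  static void pgfault_hist_record(struct pgfault_hist *h, unsigned long long ns)
  {
  	int i;

  	for (i = 0; i < PGFAULT_BUCKETS - 1; i++)
  		if (ns < h->upper_ns[i])
  			break;
  	h->count[i]++;
  }

  static void pgfault_hist_show(const struct pgfault_hist *h)
  {
  	int i;

  	printf("pgfault latency histogram (ns):\n");
  	for (i = 0; i < PGFAULT_BUCKETS - 1; i++)
  		printf("< %-15llu %llu\n", h->upper_ns[i], h->count[i]);
  	printf("< inf             %llu\n", h->count[PGFAULT_BUCKETS - 1]);
  }

  int main(void)
  {
  	struct pgfault_hist h;

  	pgfault_hist_init(&h);
  	pgfault_hist_record(&h, 550);		/* lands in "< 600" */
  	pgfault_hist_record(&h, 700);		/* lands in "< 1200" */
  	pgfault_hist_record(&h, 100000);	/* lands in "< inf" */
  	pgfault_hist_show(&h);
  	pgfault_hist_reset(&h);
  	return 0;
  }
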
> >> >> > > Performance Test:
> >> >> > > I ran the PageFaultTest (pft) benchmark to measure the overhead of
> >> >> > > recording the histogram. No overhead was observed on either "flt/cpu/s"
> >> >> > > or "fault/wsec".
> >> >> > >
> >> >> > > $ mkdir /dev/cgroup/memory/A
> >> >> > > $ echo 16g >/dev/cgroup/memory/A/memory.limit_in_bytes
> >> >> > > $ echo $$ >/dev/cgroup/memory/A/tasks
> >> >> > > $ ./pft -m 15g -t 8 -T a
> >> >> > >
> >> >> > > Result:
> >> >> > > "fault/wsec"
> >> >> > >
> >> >> > > $ ./ministat no_histogram histogram
> >> >> > > x no_histogram
> >> >> > > + histogram
> >> >> > >
> >> >> > > +--------------------------------------------------------------------------+
> >> >> > >     N          Min          Max       Median          Avg       Stddev
> >> >> > > x   5    813404.51    824574.98     821661.3    820470.83    4202.0758
> >> >> > > +   5    821228.91    825894.66    822874.65    823374.15    1787.9355
> >> >> > >
> >> >> > > "flt/cpu/s"
> >> >> > >
> >> >> > > $ ./ministat no_histogram histogram
> >> >> > > x no_histogram
> >> >> > > + histogram
> >> >> > >
> >> >> > > +--------------------------------------------------------------------------+
> >> >> > >     N          Min          Max       Median          Avg       Stddev
> >> >> > > x   5    104951.93    106173.13    105142.73     105349.2    513.78158
> >> >> > > +   5    104697.67     105416.1    104943.52    104973.77    269.24781
> >> >> > > No difference proven at 95.0% confidence
> >> >> > >
> >> >> > > Signed-off-by: Ying Han <yinghan@xxxxxxxxxx>
> >> >> >
> >> >> > Hmm, interesting... but isn't it a very, very complicated interface?
> >> >> > Could you make this work with 'perf'? Then everyone (including people
> >> >> > who don't use memcg) will be happy.
> >> >> >
> >> >>
> >> >> Thank you for looking at it.
> >> >>
> >> >> There is only one per-memcg API added, which basically exports the
> >> >> histogram. The "reset" and bucket-reconfiguration features are not a
> >> >> "must", but they make it more flexible. Also, the sysctl API can be
> >> >> removed if necessary, since there is no overhead observed from always
> >> >> keeping recording on anyway.
> >> >>
> >> >> I am not familiar with perf; any suggestions on how it is supposed to
> >> >> look?
> >> >>
> >> >> Thanks
> >> >>
> >> >
> >> > IIUC, you can record "all" latency information with perf record. Then, the
> >> > latency information can be dumped out to a file.
> >> >
> >> > You could add a Python script for perf, as in:
> >> >
> >> >   # perf report memory-reclaim-latency-histogram -f perf.data
> >> >                 -o 500,1000,1500,2000.....
> >> >   ...show the histogram in text, or report the histogram graphically.
> >> >
> >> > The good points are:
> >> >  - you can reuse perf.data and show the histogram from another point of view.
> >> >
> >> >  - you can show another cut of the data; for example, I think you can easily
> >> >    write a parser to show "changes in the histogram over time".
> >> >    You may be able to generate a movie ;)
> >> >
> >> >  - Now that perf cgroup is supported:
> >> >    - you can see a per-task histogram
> >> >    - you can see a per-cgroup histogram
> >> >    - you can see a system-wide histogram
> >> >      (if you record the latency of the usual kswapd/alloc_pages paths)
> >> >
> >> >  - If you record latency within shrink_zone(), you can show a per-zone
> >> >    reclaim latency histogram. Record parsers can gather them and
> >> >    show the histogram. This will be beneficial to cpuset users.
> >> >
> >> >
> >> > I'm sorry if I missed something.
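(A rough sketch only of the offline re-binning idea above; this is not
perf's actual interface. It assumes the recorded latencies have already
been dumped as one nanosecond value per line, e.g. by a perf script, and
takes the bucket boundaries on the command line in the spirit of the
"-o 500,1000,1500,2000" option suggested.)

  /* Usage: ./rebin 500 1000 1500 2000 < latencies.txt */
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
  	int nbounds = argc - 1;
  	unsigned long long *bounds, *count, ns;
  	int i;

  	if (nbounds < 1) {
  		fprintf(stderr, "usage: %s bound_ns...\n", argv[0]);
  		return 1;
  	}

  	bounds = calloc(nbounds, sizeof(*bounds));
  	count = calloc(nbounds + 1, sizeof(*count));	/* +1 for "inf" */
  	if (!bounds || !count)
  		return 1;
  	for (i = 0; i < nbounds; i++)
  		bounds[i] = strtoull(argv[i + 1], NULL, 10);

  	/* Bin each sample into the first bucket whose bound exceeds it. */
  	while (scanf("%llu", &ns) == 1) {
  		for (i = 0; i < nbounds && ns >= bounds[i]; i++)
  			;
  		count[i]++;
  	}

  	printf("latency histogram (ns):\n");
  	for (i = 0; i < nbounds; i++)
  		printf("< %-10llu %llu\n", bounds[i], count[i]);
  	printf("< inf        %llu\n", count[nbounds]);

  	free(bounds);
  	free(count);
  	return 0;
  }
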
> >>
> >> After studying perf a bit, it is not feasible in this case. The CPU &
> >> memory overhead of perf would be overwhelming: each page fault would
> >> generate a record in the buffer, and there are limits to how much data
> >> we can record in the buffer and how much can be processed later. Most
> >> of the data recorded by the general perf framework is not needed here.
> >>
> >
> > I disagree. "Each page fault" is not correct; "every LRU scan" is
> > correct. Then, recording to the buffer will happen at most
> > memory.failcnt times.
> 
> Hmm. Sorry, I might be missing something here... :(
> 
> The page fault histogram is recorded per page fault, not only for the
> ones that trigger reclaim. The background reclaim testing is just one
> use case of it, and we need this information for more general usage, to
> monitor application performance. So I recorded the latency for each
> single page fault.
> 

BTW, why page faults only? For some applications, the file cache is more
important. I think the usual cost of an ordinary page fault is not of
interest, and you can get PGPGIN statistics from other sources.

Anyway, I think it's better to record reclaim latency.


Thanks,
-Kame



