Re: [PATCH V3] memcg: add reclaim pgfault latency histograms

On Mon, Jun 20, 2011 at 5:02 PM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> On Sun, 19 Jun 2011 23:08:52 -0700
> Ying Han <yinghan@xxxxxxxxxx> wrote:
>
>> On Sunday, June 19, 2011, KAMEZAWA Hiroyuki
>> <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
>> > On Fri, 17 Jun 2011 16:53:48 -0700
>> > Ying Han <yinghan@xxxxxxxxxx> wrote:
>> >
>> >> This adds a histogram to capture page fault latencies on a per-memcg basis. I
>> >> used this patch while testing memcg background reclaim, and figured there could
>> >> be more use cases for monitoring/debugging application performance.
>> >>
>> >> The histogram is composed of 8 buckets in microseconds (us). The last bucket,
>> >> "rest", counts everything beyond the last boundary. For flexibility, the counts
>> >> can be reset and the bucket boundaries are configurable at runtime.
>> >>
>> >> memory.pgfault_histogram: exports the histogram on a per-memcg basis; it can be
>> >> reset by echoing "-1". The bucket boundaries are also writable by echoing the
>> >> new boundaries into the file. See the examples below.
>> >>
>> >> changes from v2 to v3:
>> >> no change except rebasing the patch to 3.0-rc3 and retesting.
>> >>
>> >> changes from v1 to v2:
>> >> 1. record only page faults that involve reclaim, and change the unit to us.
>> >> 2. rename the "inf" bucket to "rest".
>> >> 3. remove the global tunable that turned the recording on/off. This is ok since
>> >> no overhead was measured from collecting the data.
>> >> 4. change the reset of the histogram to echoing "-1".
>> >>
>> >> Functional Test:
>> >> $ cat /dev/cgroup/memory/D/memory.pgfault_histogram
>> >> page reclaim latency histogram (us):
>> >> < 150            22
>> >> < 200            17434
>> >> < 250            69135
>> >> < 300            17182
>> >> < 350            4180
>> >> < 400            3179
>> >> < 450            2644
>> >> < rest           29840
>> >>
>> >> $ echo -1 >/dev/cgroup/memory/D/memory.pgfault_histogram
>> >> $ cat /dev/cgroup/memory/D/memory.pgfault_histogram
>> >> page reclaim latency histogram (us):
>> >> < 150            0
>> >> < 200            0
>> >> < 250            0
>> >> < 300            0
>> >> < 350            0
>> >> < 400            0
>> >> < 450            0
>> >> < rest           0
>> >>
>> >> $ echo 500 520 540 580 600 1000 5000 >/dev/cgroup/memory/D/memory.pgfault_histogram
>> >> $ cat /dev/cgroup/memory/D/memory.pgfault_histogram
>> >> page reclaim latency histogram (us):
>> >> < 500            0
>> >> < 520            0
>> >> < 540            0
>> >> < 580            0
>> >> < 600            0
>> >> < 1000           0
>> >> < 5000           0
>> >> < rest           0
>> >>
>> >> Performance Test:
>> >> I ran the PageFaultTest (pft) benchmark to measure the overhead of recording
>> >> the histogram. No overhead was observed on either "flt/cpu/s" or "fault/wsec".
>> >>
>> >> $ mkdir /dev/cgroup/memory/A
>> >> $ echo 16g >/dev/cgroup/memory/A/memory.limit_in_bytes
>> >> $ echo $$ >/dev/cgroup/memory/A/tasks
>> >> $ ./pft -m 15g -t 8 -T a
>> >>
>> >> Result:
>> >> $ ./ministat no_histogram histogram
>> >>
>> >> "fault/wsec"
>> >> x fault_wsec/no_histogram
>> >> + fault_wsec/histogram
>> >> +-------------------------------------------------------------------------+
>> >>     N           Min           Max        Median           Avg        Stddev
>> >> x   5     864432.44     880840.81     879707.95     874606.51     7687.9841
>> >> +   5     861986.57     877867.25      870823.9     870901.38     6413.8821
>> >> No difference proven at 95.0% confidence
>> >>
>> >> "flt/cpu/s"
>> >> x flt_cpu_s/no_histogram
>> >> + flt_cpu_s/histogram
>> >> +-------------------------------------------------------------------------+
>> >
>> > I'll never ack this.
>>
>> The patch was created as part of the effort to test the per-memcg bg reclaim
>> patch. I don't have a strong opinion that we need to merge it, but I found it
>> to be a useful testing and monitoring tool.
>>
>> Meanwhile, can you help clarify your concern, in case I missed something here?
>>
>
> I want to see the numbers via 'perf' because of its flexibility.
> For this kind of thing, I like dumping the "raw" data and parsing it
> with tools, because we can change our view of a single data set without
> taking multiple data sets from multiple experiments.
>
> I like your idea of a histogram. So, I'd like to try to write the perf
> support when my memory.vmscan_stat is merged (I think it makes a good
> trace point) and see what we can get.

Thank you for the clarification. I have no strong objection to doing
it in perf, except that it might take some space and CPU time to
collect the information when, in the end, we just need to increment a
counter :)
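
For reference, here is roughly the kind of post-processing I had in mind
when comparing the two approaches. It is only an untested sketch: it
assumes the existing vmscan:mm_vmscan_memcg_reclaim_begin/end tracepoints
as a stand-in for the fault path, and the awk field positions depend on
the exact "perf script" output format on a given kernel:

$ perf record -a \
    -e vmscan:mm_vmscan_memcg_reclaim_begin \
    -e vmscan:mm_vmscan_memcg_reclaim_end -- sleep 10
$ perf script | awk '
    # $2 = pid, $4 = timestamp (seconds); pair begin/end events per pid
    /memcg_reclaim_begin/ { start[$2] = $4 + 0 }
    /memcg_reclaim_end/ && ($2 in start) {
        us = ($4 + 0 - start[$2]) * 1000000    # latency in us
        if      (us < 150) bucket["< 150"]++
        else if (us < 200) bucket["< 200"]++
        else if (us < 250) bucket["< 250"]++
        else               bucket["< rest"]++
        delete start[$2]
    }
    END { for (k in bucket) print k, bucket[k] }'

The same histogram falls out either way; the difference is whether the
kernel buckets it at fault time or userspace buckets it afterwards.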

Thanks

--Ying

>
> Thanks,
> -Kame
>
>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .

