Re: [PATCH v14 09/15] mm/damon: Add tracepoints

On Tue, 2 Jun 2020 15:12:49 +0200
SeongJae Park <sjpark@xxxxxxxxxx> wrote:

> From: SeongJae Park <sjpark@xxxxxxxxx>
> 
> This commit adds a tracepoint for DAMON.  It traces the monitoring
> results of each region for each aggregation interval.  Using this, DAMON
> can be easily integrated with any tracepoint-supporting tools such as
> perf.
> 
> Signed-off-by: SeongJae Park <sjpark@xxxxxxxxx>
> Reviewed-by: Leonard Foerster <foersleo@xxxxxxxxx>
> ---
>  include/trace/events/damon.h | 43 ++++++++++++++++++++++++++++++++++++
>  mm/damon.c                   |  5 +++++
>  2 files changed, 48 insertions(+)
>  create mode 100644 include/trace/events/damon.h
> 
> diff --git a/include/trace/events/damon.h b/include/trace/events/damon.h
> new file mode 100644
> index 000000000000..22236642d366
> --- /dev/null
> +++ b/include/trace/events/damon.h
> @@ -0,0 +1,43 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#undef TRACE_SYSTEM
> +#define TRACE_SYSTEM damon
> +
> +#if !defined(_TRACE_DAMON_H) || defined(TRACE_HEADER_MULTI_READ)
> +#define _TRACE_DAMON_H
> +
> +#include <linux/types.h>
> +#include <linux/tracepoint.h>
> +
> +TRACE_EVENT(damon_aggregated,
> +
> +	TP_PROTO(int pid, unsigned int nr_regions,
> +		unsigned long vm_start, unsigned long vm_end,
> +		unsigned int nr_accesses),
> +
> +	TP_ARGS(pid, nr_regions, vm_start, vm_end, nr_accesses),
> +
> +	TP_STRUCT__entry(
> +		__field(int, pid)
> +		__field(unsigned int, nr_regions)
> +		__field(unsigned long, vm_start)
> +		__field(unsigned long, vm_end)
> +		__field(unsigned int, nr_accesses)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->pid = pid;
> +		__entry->nr_regions = nr_regions;
> +		__entry->vm_start = vm_start;
> +		__entry->vm_end = vm_end;
> +		__entry->nr_accesses = nr_accesses;
> +	),
> +
> +	TP_printk("pid=%d nr_regions=%u %lu-%lu: %u", __entry->pid,
> +			__entry->nr_regions, __entry->vm_start,
> +			__entry->vm_end, __entry->nr_accesses)
> +);
> +
> +#endif /* _TRACE_DAMON_H */
> +
> +/* This part must be outside protection */
> +#include <trace/define_trace.h>
> diff --git a/mm/damon.c b/mm/damon.c
> index 6b0b8f21a6c6..af6f395fe06c 100644
> --- a/mm/damon.c
> +++ b/mm/damon.c
> @@ -9,6 +9,8 @@
>  
>  #define pr_fmt(fmt) "damon: " fmt
>  
> +#define CREATE_TRACE_POINTS
> +
>  #include <linux/damon.h>
>  #include <linux/debugfs.h>
>  #include <linux/delay.h>
> @@ -20,6 +22,7 @@
>  #include <linux/sched/mm.h>
>  #include <linux/sched/task.h>
>  #include <linux/slab.h>
> +#include <trace/events/damon.h>
>  
>  /* Minimal region size.  Every damon_region is aligned by this. */
>  #define MIN_REGION PAGE_SIZE
> @@ -650,6 +653,8 @@ static void kdamond_reset_aggregated(struct damon_ctx *c)
>  			damon_write_rbuf(c, &r->vm_end, sizeof(r->vm_end));
>  			damon_write_rbuf(c, &r->nr_accesses,
>  					sizeof(r->nr_accesses));
> +			trace_damon_aggregated(t->pid, nr,
> +					r->vm_start, r->vm_end, r->nr_accesses);

For a little better code, what about passing in t and r directly, and then
having the TP_fast_assign just do the dereferencing there?

	__entry->pid = t->pid;
	__entry->vm_start = r->vm_start;
	__entry->vm_end = r->vm_end;
	__entry->nr_accesses = r->nr_accesses;

It will produce better code at the tracepoint call (which is the important
part) and make the trace event a bit more flexible in the future, without
having to modify the call site.
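
Something like the below, if that helps -- untested, and it assumes the
structs are named damon_task and damon_region as elsewhere in the series
(their definitions are already visible when the event is expanded, since
damon.c includes <linux/damon.h> before <trace/events/damon.h>):

TRACE_EVENT(damon_aggregated,

	TP_PROTO(struct damon_task *t, struct damon_region *r,
		unsigned int nr_regions),

	TP_ARGS(t, r, nr_regions),

	/* recorded fields stay exactly as in the patch */
	TP_STRUCT__entry(
		__field(int, pid)
		__field(unsigned int, nr_regions)
		__field(unsigned long, vm_start)
		__field(unsigned long, vm_end)
		__field(unsigned int, nr_accesses)
	),

	/* do the dereferencing here instead of at the call site */
	TP_fast_assign(
		__entry->pid = t->pid;
		__entry->nr_regions = nr_regions;
		__entry->vm_start = r->vm_start;
		__entry->vm_end = r->vm_end;
		__entry->nr_accesses = r->nr_accesses;
	),

	TP_printk("pid=%d nr_regions=%u %lu-%lu: %u", __entry->pid,
			__entry->nr_regions, __entry->vm_start,
			__entry->vm_end, __entry->nr_accesses)
);

The call in kdamond_reset_aggregated() then shrinks to

	trace_damon_aggregated(t, r, nr);

and later fields can be added by touching only the event definition.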

-- Steve


>  			r->nr_accesses = 0;
>  		}
>  	}




