Re: [PATCH 3/3] mm/sched: memdelay: memory health interface for systems and workloads

On Mon, 2017-07-31 at 14:41 -0400, Johannes Weiner wrote:
> 
> Adding an rq counter for tasks inside memdelay sections should be
> straight-forward as well (except for maybe the migration cost of that
> state between CPUs in ttwu that Mike pointed out).

What I pointed out should be easily eliminated (zero use case).
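
For illustration, a minimal user-space sketch of the rq counter idea
quoted above (the names and the fixed four-CPU array are made up here;
this is not the patch code): each runqueue keeps a count of its tasks
currently inside a memdelay section, and waking a delayed task on
another CPU has to move that count along.

#include <stdio.h>

struct rq_sketch {
        int nr_memdelayed;              /* tasks on this CPU inside a memdelay section */
};

static struct rq_sketch rqs[4];         /* pretend four CPUs */

static void memdelay_enter(int cpu)
{
        rqs[cpu].nr_memdelayed++;       /* task starts stalling on memory */
}

static void memdelay_leave(int cpu)
{
        rqs[cpu].nr_memdelayed--;       /* stall is over */
}

/* the wakeup-time migration cost in question: move the state between CPUs */
static void memdelay_migrate(int src, int dst)
{
        rqs[src].nr_memdelayed--;
        rqs[dst].nr_memdelayed++;
}

int main(void)
{
        memdelay_enter(0);
        memdelay_migrate(0, 2);         /* delayed task woken on another CPU */
        memdelay_leave(2);
        printf("cpu2: %d delayed\n", rqs[2].nr_memdelayed);
        return 0;
}

The memdelay_migrate() step above stands in for the ttwu migration
cost, the zero-use-case part that can simply be dropped, leaving only
the cheap per-CPU increments and decrements.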
 
> That leaves the question of how to track these numbers per cgroup at
> an acceptable cost. The idea for a tree of cgroups is that walltime
> impact of delays at each level is reported for all tasks at or below
> that level. E.g. a leaf group aggregates the state of its own tasks,
> the root/system aggregates the state of all tasks in the system; hence
> the propagation of the task state counters up the hierarchy.

The crux of the biscuit is where exactly the investment return lies.
Gathering of these numbers ain't gonna be free, no matter how hard you
try, and you're plugging into paths where every cycle added is made of
userspace hide.
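
To make that concrete, a toy user-space sketch of the propagation
described in the quote (types and names are invented; this is not the
patch code): a task state change is walked from its leaf group up to
the root, so every level of the hierarchy adds one more update to
paths that are already hot.

#include <stdio.h>
#include <stddef.h>

struct group_sketch {
        const char *name;
        struct group_sketch *parent;    /* NULL at the root */
        int nr_delayed;                 /* delayed tasks in this group or below */
};

/* one counter update per ancestor: deeper hierarchies cost more cycles */
static void propagate_delta(struct group_sketch *g, int delta)
{
        for (; g; g = g->parent)
                g->nr_delayed += delta;
}

int main(void)
{
        struct group_sketch root = { "root", NULL, 0 };
        struct group_sketch leaf = { "leaf", &root, 0 };

        propagate_delta(&leaf, +1);     /* a task in "leaf" enters a memdelay section */
        printf("%s=%d %s=%d\n", leaf.name, leaf.nr_delayed,
               root.name, root.nr_delayed);
        propagate_delta(&leaf, -1);     /* ...and leaves it again */
        return 0;
}

That per-ancestor walk is where the added cycles land.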

	-Mike



