Hello.

My primary concern is still the measuring of per-NUMA node execution time.

First, I think exposing only the aggregated data in the numa_stat file is a
loss of information. The data are collected per-CPU and then summed over
NUMA nodes -- this summation could easily be done by the userspace consumer
of the data, keeping the per-CPU data available (a short userspace sketch is
appended below the sign-off).

Second, comparing with the cpuacct implementation, yours has only jiffy
granularity (I may have overlooked something or be missing some context, in
which case this is a non-concern). IOW, to me it sounds like duplicating
cpuacct's job, and if that is deemed useful for cgroup v2, I think it should
be done (only once) and in the proper place, i.e. wherever cputime is
measured in the default hierarchy.

The previous two are design/theoretical remarks; however, your patch also
misses measuring tasks of scheduling classes other than fair_sched_class.
Is that intentional?

My last two comments concern the locality measurement, but they are based on
no experience or specific knowledge.

The seven percentile groups seem quite arbitrary to me; I find it strange
that the ratio of the cache-line size to the size of a u64 leaks into, and
is fixed in, the generally visible file. Wouldn't such a form be better
hidden under a _DEBUG config option?

On Thu, Nov 28, 2019 at 10:09:13AM +0800, 王贇 <yun.wang@xxxxxxxxxxxxxxxxx> wrote:
> Consider it as load_1/5/15 which not accurate but tell the trend of system

I understand your patchset provides cumulative data over time, i.e. if a
user wants to see an immediate trend, they have to calculate differences.
Have I overlooked some back-off or regular zeroing?

Michal
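
Appendix, just to illustrate the per-CPU point above: a minimal userspace
sketch showing that per-node totals can be derived from per-CPU counters by
the consumer. The file name and its "cpu<N> <ticks>" line format are
hypothetical, invented only for this example; the CPU-to-node mapping uses
libnuma's numa_node_of_cpu().

    /* build with: gcc sum_per_node.c -lnuma */
    #include <stdio.h>
    #include <stdlib.h>
    #include <numa.h>   /* numa_available(), numa_max_node(), numa_node_of_cpu() */

    int main(void)
    {
            if (numa_available() < 0)
                    return 1;

            int nr_nodes = numa_max_node() + 1;
            unsigned long long *node_time = calloc(nr_nodes, sizeof(*node_time));

            /* hypothetical per-cgroup file exposing per-CPU execution time */
            FILE *f = fopen("/sys/fs/cgroup/mygroup/cpu.exectime_percpu", "r");
            if (!f || !node_time)
                    return 1;

            int cpu;
            unsigned long long t;
            while (fscanf(f, " cpu%d %llu", &cpu, &t) == 2) {
                    int node = numa_node_of_cpu(cpu);
                    if (node >= 0 && node < nr_nodes)
                            node_time[node] += t;   /* fold per-CPU ticks into per-node totals */
            }
            fclose(f);

            for (int n = 0; n < nr_nodes; n++)
                    printf("node%d %llu\n", n, node_time[n]);

            free(node_time);
            return 0;
    }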