On Thu, Jun 8, 2023 at 5:14 AM Jan Kara <jack@xxxxxxx> wrote:
>
> On Mon 24-10-22 05:28:41, Shakeel Butt wrote:
> > Currently mm_struct maintains rss_stats which are updated on page fault
> > and the unmapping codepaths. For the page fault codepath the updates are
> > cached per thread with a batch of TASK_RSS_EVENTS_THRESH, which is 64.
> > The reason for caching is performance for multithreaded applications;
> > otherwise the rss_stats updates may become a hotspot for such
> > applications.
> >
> > However this optimization comes with the cost of an error margin in the
> > rss stats. The rss_stats for applications with a large number of threads
> > can be very skewed. At worst the error margin is (nr_threads * 64) and we
> > have a lot of applications with 100s of threads, so the error margin can
> > be very high. Internally we had to reduce TASK_RSS_EVENTS_THRESH to 32.
> >
> > Recently we started seeing unbounded errors in rss_stats for specific
> > applications which use TCP rx0cp. It seems like the vm_insert_pages()
> > codepath does not sync rss_stats at all.
> >
> > This patch converts the rss_stats into percpu_counter, changing the
> > error margin from (nr_threads * 64) to approximately (nr_cpus ^ 2).
> > However, this conversion enables us to get accurate stats for situations
> > where accuracy is more important than the cpu cost, though this patch
> > does not make such tradeoffs.
> >
> > Signed-off-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
>
> Somewhat late to the game, but our performance testing grid has noticed
> that this commit causes a performance regression on shell-heavy workloads.
> For example, running 'make test' in the git sources on our test machine
> with 192 CPUs takes about 4% longer and system time is increased by about
> 9%:
>
>                    before (9cd6ffa6025)     after (f1a7941243c1)
> Amean   User        471.12 *   0.30%*        481.77 *  -1.96%*
> Amean   System      244.47 *   0.90%*        269.13 *  -9.09%*
> Amean   Elapsed     709.22 *   0.45%*        742.27 *  -4.19%*
> Amean   CPU         100.00 (   0.20%)        101.00 *  -0.80%*
>
> Essentially this workload spawns a lot of short-lived tasks in sequence,
> and the task startup + teardown cost is what this patch increases. To
> demonstrate this more clearly, I've written a trivial (and somewhat
> stupid) benchmark, shell_bench.sh:
>
> for (( i = 0; i < 20000; i++ )); do
>         /bin/true
> done
>
> And when run like:
>
> numactl -C 1 ./shell_bench.sh
>
> (I've forced physical CPU binding to avoid tasks migrating over the
> machine and cpu frequency scaling interfering, which makes the numbers
> much more noisy) I get the following elapsed times:
>
>            9cd6ffa6025     f1a7941243c1
> Avg        6.807429        7.631571
> Stddev     0.021797        0.016483
>
> So some 12% regression in elapsed time. Just to be sure, I've verified
> that the per-cpu allocator patch [1] does not improve these numbers in
> any significant way.
>
> Where do we go from here? I think in principle the problem could be fixed
> by being clever: when the task has only a single thread, we don't bother
> with allocating the pcpu counter (and summing it at the end) and just
> account directly in mm_struct. When the second thread is spawned, we bite
> the bullet, allocate the pcpu counter and switch to the more scalable
> accounting. These short-lived tasks in shell workloads and similar don't
> spawn any threads, so this should fix the regression. But this is
> obviously easier said than done...
>
>								Honza
>
> [1] https://lore.kernel.org/all/20230606125404.95256-1-yu.ma@xxxxxxxxx/

Another regression reported earlier:
https://lore.kernel.org/linux-mm/202301301057.e55dad5b-oliver.sang@xxxxxxxxx/
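
To make the single-threaded fast path Honza sketches above a bit more
concrete, here is a rough illustration of one way it could look. This is
only a sketch, not code from any posted patch: all names here (struct
rss_hybrid, rss_hybrid_add(), rss_hybrid_read(), rss_hybrid_upgrade()) are
invented for this example, and a real implementation would still have to
handle the races (readers of the plain counter racing with the upgrade)
and hook the upgrade into the fork/clone path.

#include <linux/percpu_counter.h>
#include <linux/compiler.h>
#include <linux/gfp.h>

/*
 * Hypothetical hybrid RSS counter: a plain counter while the mm has a
 * single user thread, lazily upgraded to a percpu_counter when the
 * second thread is created.
 */
struct rss_hybrid {
	bool use_pcpu;			/* set once the mm gets a second thread */
	long plain;			/* single-threaded fast path */
	struct percpu_counter pcpu;	/* initialized only on upgrade */
};

static inline void rss_hybrid_add(struct rss_hybrid *c, long delta)
{
	if (!READ_ONCE(c->use_pcpu)) {
		/* Only the single owner thread updates 'plain'. */
		c->plain += delta;
		return;
	}
	percpu_counter_add(&c->pcpu, delta);
}

static inline long rss_hybrid_read(struct rss_hybrid *c)
{
	if (!READ_ONCE(c->use_pcpu))
		return c->plain;
	return percpu_counter_sum(&c->pcpu);
}

/* Called when the second thread is spawned (e.g. from copy_process()). */
static int rss_hybrid_upgrade(struct rss_hybrid *c)
{
	int err = percpu_counter_init(&c->pcpu, c->plain, GFP_KERNEL);

	if (err)
		return err;
	/* Make the initialized percpu counter visible before the flag. */
	smp_wmb();
	WRITE_ONCE(c->use_pcpu, true);
	return 0;
}

The attraction of something along these lines is that the common
fork()+exec() of a short-lived, single-threaded task never touches the
percpu allocator at all, which is exactly the cost the shell benchmark
above measures; the hard part, as Honza notes, is doing the flag check and
the upgrade without adding overhead or races on the multithreaded path.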