On Mon 22-08-16 23:12:41, Minchan Kim wrote:
> On Mon, Aug 22, 2016 at 09:40:52AM +0200, Michal Hocko wrote:
> > On Mon 22-08-16 09:07:45, Minchan Kim wrote:
> > [...]
> > > #!/bin/sh
> > > ./smap_test &
> > > pid=$!
> > > 
> > > for i in $(seq 25)
> > > do
> > >         awk '/^Rss/{rss+=$2} /^Pss/{pss+=$2} END {}' \
> > >                 /proc/$pid/smaps
> > > done
> > > kill $pid
> > > 
> > > root@bbox:/home/barrios/test/smap# time ./s.sh
> > > pid:21973
> > > 
> > > real    0m17.812s
> > > user    0m12.612s
> > > sys     0m5.187s
> > 
> > retested on the bare metal (x86_64 - 2CPUs)
> >         Command being timed: "sh s.sh"
> >         User time (seconds): 0.00
> >         System time (seconds): 18.08
> >         Percent of CPU this job got: 98%
> >         Elapsed (wall clock) time (h:mm:ss or m:ss): 0:18.29
> > 
> > multiple runs are quite consistent in those numbers. I am running with
> > $ awk --version
> > GNU Awk 4.1.3, API: 1.1 (GNU MPFR 3.1.4, GNU MP 6.1.0)
> > 
> > > > like a problem we are not able to address. And I would even argue that
> > > > we want to address it in a generic way as much as possible.
> > > 
> > > Sure. What solution do you think of as a generic way?
> > 
> > either optimize seq_printf or replace it with something faster.
> 
> If it's the real culprit, I agree. However, I tested your test program on
> my two x86 machines and my friend's machine.
> 
> Ubuntu, Fedora, Arch
> 
> They have awk 4.0.1 and 4.1.3.
> 
> Results are the same. Userspace spends more time, as I mentioned.
> 
> [root@blaptop smap_test]# time awk '/^Rss/{rss+=$2} /^Pss/{pss+=$2} END {printf "rss:%d pss:%d\n", rss, pss}' /proc/3552/smaps
> rss:263484 pss:262188
> 
> real    0m0.770s
> user    0m0.574s
> sys     0m0.197s
> 
> I will attach my test program source.
> I hope you guys test and repost the results, because they are key to the
> direction of this patchset.

Hmm, this is really interesting. I have checked a different machine and
it shows different results.
Same code, slightly different version of awk (4.1.0), and the results are
different:
        Command being timed: "awk /^Rss/{rss+=$2} /^Pss/{pss+=$2} END {printf "rss:%d pss:%d\n", rss, pss} /proc/48925/smaps"
        User time (seconds): 0.43
        System time (seconds): 0.27

I have no idea yet why those numbers are so different on my laptop. It
surely looks suspicious. I will try to debug this further tomorrow.

Anyway, the performance is just one side of the problem. I have tried to
express my concerns about a single exported pss value in another email.
Please try to step back and think about how useful this information is
without knowing which resource we are talking about.
-- 
Michal Hocko
SUSE Labs
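[Editor's note: one way to narrow down the machine-to-machine variance discussed
above is to time the kernel-side smaps read separately from the awk parse. The
sketch below is illustrative and not from the thread; it uses the shell's own
pid and a temporary file path as stand-ins for the original test program.]

```shell
#!/bin/sh
# Split the measurement: kernel-side smaps generation vs. awk parsing.
pid=$$                                  # our own pid; any running pid works

# 1) kernel cost only: read smaps and discard the output
time cat /proc/$pid/smaps > /dev/null

# 2) awk cost only: parse a static copy, so no /proc work is involved
cp /proc/$pid/smaps /tmp/smaps.$pid
time awk '/^Rss/{rss+=$2} /^Pss/{pss+=$2}
          END {printf "rss:%d pss:%d\n", rss, pss}' /tmp/smaps.$pid
rm -f /tmp/smaps.$pid
```

If step (1) dominates, the cost is on the kernel side (the smaps walk and
seq_printf); if step (2) dominates, the awk build and version matter, which
would explain why different distributions report such different numbers.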