On 01/08/2019 11:11 AM, Michal Hocko wrote:
> On Tue 08-01-19 13:04:22, Dave Chinner wrote:
>> On Mon, Jan 07, 2019 at 05:41:39PM -0500, Waiman Long wrote:
>>> On 01/07/2019 05:32 PM, Dave Chinner wrote:
>>>> On Mon, Jan 07, 2019 at 10:12:56AM -0500, Waiman Long wrote:
>>>>> As newer systems have more and more IRQs and CPUs available in
>>>>> their system, the performance of reading /proc/stat frequently is
>>>>> getting worse and worse.
>>>> Because the "roll-your-own" per-cpu counter implementation has been
>>>> optimised for the lowest possible addition overhead on the premise
>>>> that summing the counters is rare and isn't a performance issue.
>>>> This patchset is a direct indication that this "summing is rare and
>>>> can be slow" premise is now invalid.
>>>>
>>>> We have percpu counter infrastructure that trades off a small amount
>>>> of addition overhead for zero-cost reading of the counter value.
>>>> i.e. why not just convert this whole mess to percpu_counters and
>>>> then just use percpu_counter_read_positive()? Then we just don't
>>>> care how often userspace reads the /proc file because there is no
>>>> summing involved at all...
>>>>
>>>> Cheers,
>>>>
>>>> Dave.
>>> Yes, percpu_counter_read_positive() is cheap. However, you still
>>> need to pay the price somewhere. In the case of percpu_counter, the
>>> update is more expensive.
>> Ummm, that's exactly what I just said. It's a percpu counter that
>> solves the "sum is expensive and frequent" problem, just like you
>> are encountering here. I do not need basic scalability algorithms
>> explained to me.
>>
>>> I would say the percentage of applications that will hit this
>>> problem is small. But for them, this problem has some significant
>>> performance overhead.
>> Well, duh!
>>
>> What I was suggesting is that you change the per-cpu counter
>> implementation to the /generic infrastructure/ that solves this
>> problem, and then determine if the extra update overhead is at all
>> measurable. If you can't measure any difference in update overhead,
>> then slapping complexity on the existing counter to attempt to
>> mitigate the summing overhead is the wrong solution.
>>
>> Indeed, it may be that you need to use a custom batch scaling curve
>> for the generic per-cpu counter infrastructure to mitigate the
>> update overhead, but the fact is we already have generic
>> infrastructure that solves your problem, and so the solution should
>> be "use the generic infrastructure" until it can be proven not to
>> work.
>>
>> i.e. prove the generic infrastructure is not fit for purpose and
>> cannot be improved sufficiently to work for this use case before
>> implementing a complex, one-off snowflake counter implementation...
> Completely agreed! Apart from that, I find the conversion to generic
> infrastructure worthwhile even if it doesn't solve the problem at hand
> completely. If for no other reason than the sheer code removal, as
> kstat is not really used for anything apart from this accounting
> AFAIR. The less ad-hoc code we have the better IMHO.
>
> And to the underlying problem. Some proc files do not scale on large
> machines. Maybe it is time to explain to application writers that if
> they are collecting data too aggressively then it won't scale. We can
> only do this much. Lying about numbers by hiding updates is, well,
> lying and won't solve the underlying problem.

I would not say it is lying. As I said in the changelog, reading
/proc/stat infrequently will give the right counts.
Only when it is read frequently may the data not be fully up to date.
Using percpu_counter_read_positive() as suggested by Dave means the
counts can be off by a bounded amount as well. So it is also a
trade-off between accuracy and performance.

Cheers,
Longman
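
For concreteness, below is a rough sketch of the kind of percpu_counter
conversion Dave is describing: cheap reads via
percpu_counter_read_positive() and a custom batch on the update side.
The counter name, batch size and helper functions are purely
illustrative and not taken from the actual patchset.

/*
 * Rough sketch only -- not from the actual patchset.  A hypothetical
 * event counter converted to the generic percpu_counter API, with a
 * custom batch to keep the update path cheap and
 * percpu_counter_read_positive() on the read side so /proc readers
 * never trigger a sum over all CPUs.
 */
#include <linux/init.h>
#include <linux/gfp.h>
#include <linux/percpu_counter.h>

/* Illustrative name; in reality this would be one counter per stat. */
static struct percpu_counter irq_events;

/*
 * Bigger batch => cheaper updates, but the value seen by readers can
 * lag by up to batch * num_online_cpus().
 */
#define IRQ_EVENTS_BATCH	64

static int __init irq_events_counter_init(void)
{
	return percpu_counter_init(&irq_events, 0, GFP_KERNEL);
}

/*
 * Hot path: a this_cpu add in the common case; the global count is
 * only touched once the local delta exceeds the batch.
 */
static inline void irq_events_inc(void)
{
	percpu_counter_add_batch(&irq_events, 1, IRQ_EVENTS_BATCH);
}

/*
 * Read side (e.g. show_stat()): no per-CPU summation, just the
 * already-folded global value, which may be slightly stale.
 */
static u64 irq_events_read(void)
{
	return percpu_counter_read_positive(&irq_events);
}

Whether the extra folding work in the hot path is measurable is exactly
the question Dave raises above, and the batch value is the knob that
trades update cost against read-side staleness.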