On Fri, Mar 17, 2023 at 11:26 AM Jakub Kicinski <kuba@xxxxxxxxxx> wrote:
>
> On Fri, 17 Mar 2023 10:27:11 +0800 Jason Xing wrote:
> > > That is the common case, and can be understood from the napi trace
> >
> > Thanks for your reply. It is commonly happening every day on many servers.
>
> Right, but the common issue is the time squeeze, not budget squeeze,

Most of them are about time, so yes.

> and either way the budget squeeze doesn't really matter because
> the softirq loop will call us again soon, if softirq itself is
> not scheduled out.
>
> So if you want to monitor a meaningful event in your fleet, I think
> a better event to monitor is the number of times ksoftirqd was woken
> up and latency of it getting onto the CPU.

It's a good point. Thanks for your advice. I'll start with something
like the bpftrace sketch appended at the end of this mail.

> Did you try to measure that?
>
> (Please do *not* send patches to touch softirq code right now, just
> measure first. We are trying to improve the situation but the core
> kernel maintainers are wary of changes:
> https://lwn.net/Articles/925540/
> so if both of us start sending code they will probably take neither
> patch :()

I understand.

One more thing I would like to know is the status of patch 1/2.

Thanks,
Jason

> > > point and probing the kernel with bpftrace. We should only add
> >
> > We can probably deduce (or guess) which one causes the latency,
> > because trace_napi_poll() only reports the budget consumed per poll.
> >
> > Besides, tracing napi poll is totally fine on a testbed, but not on
> > heavily loaded servers, where bpftrace-based tools capturing data
> > from the hot path may have a bad impact, especially on machines
> > equipped with high-speed cards, say, a 100G NIC. Falling back to the
> > legacy softnet_stat file is relatively feasible, based on my limited
> > knowledge.
>
> Right, but we're still measuring something relatively irrelevant.
> As I said the softirq loop will call us again. In my experience
> network queues get long when ksoftirqd is woken up but not scheduled
> for a long time. That is the source of latency. You may have the same
> problem (high latency) without consuming the entire budget.
>
> I think if we wanna make new stats we should try to come up with a way
> of capturing the problem rather than one of the symptoms.
>
> > Paolo also added backlog queue lengths to this file in 2020 (see
> > commit 7d58e6555870d). I believe that after this patch, little or no
> > new data will need to be printed for the next few years.
> >
> > > uAPI for statistics which must be maintained contiguously. For
> >
> > In this patch, as suggested in the previous emails, I didn't touch
> > the old data and only split the old way of counting @time_squeeze
> > into two parts (time_squeeze and budget_squeeze). Using
> > budget_squeeze can help us profile servers and tune them more
> > effectively.
> >
> > > investigations tracing will always be orders of magnitude more
> > > powerful :(
> > >
> > > On the time squeeze BTW, have you found out what the problem was?
> > > In workloads I've seen the time problems are often because of noise
> > > in how jiffies are accounted (cgroup code disables interrupts
> > > for long periods of time, for example, making jiffies increment
> > > by 2, 3 or 4 rather than by 1).
> >
> > Yes! The jiffies-increment issue troubles those servers more often
> > than not. For a small group of servers, the budget limit is also a
> > problem. Sometimes we might treat guest OSes differently.
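
For reference, here is a minimal, untested bpftrace sketch of the
measurement suggested above: counting how often ksoftirqd is woken up
and the latency between the wakeup and ksoftirqd actually getting onto
the CPU. The sched tracepoint field names (comm, pid, next_pid) match
recent kernels but are worth double-checking on the target kernel:

#!/usr/bin/env bpftrace

// Count wakeups of any ksoftirqd/N thread and stamp the wakeup time.
tracepoint:sched:sched_wakeup
/strncmp(args->comm, "ksoftirqd/", 10) == 0/
{
	@wakeups = count();
	@woken[args->pid] = nsecs;
}

// When a previously woken ksoftirqd thread is switched in, record
// how long it waited for the CPU.
tracepoint:sched:sched_switch
{
	$ts = @woken[args->next_pid];
	if ($ts) {
		// microsecond histogram of wakeup-to-run latency
		@onto_cpu_us = hist((nsecs - $ts) / 1000);
	}
	delete(@woken[args->next_pid]);
}

END
{
	clear(@woken);
}

On exit, @wakeups gives the wakeup count and @onto_cpu_us the latency
distribution; a long tail there should line up with the long network
queues described above.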
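
And for completeness, the napi side mentioned above. Given the
hot-path overhead concern, this is for testbeds only: a sketch of what
trace_napi_poll() exposes, i.e. work done versus budget per poll. If
memory serves, the work and budget fields were added to the
napi:napi_poll tracepoint around v4.9, but please verify on the target
kernel:

#!/usr/bin/env bpftrace

tracepoint:napi:napi_poll
{
	// Per-device histogram of packets processed in each poll.
	@work[str(args->dev_name)] = hist(args->work);

	// A poll that consumes its whole budget is the precondition
	// for a budget squeeze; count those separately.
	if (args->work == args->budget) {
		@full_budget[str(args->dev_name)] = count();
	}
}

Comparing @full_budget against the total poll count per device shows
how often a NIC actually exhausts its budget, without touching the
softnet_stat uAPI.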