Dave Chiluk <chiluk+linux@xxxxxxxxxx> writes:

> So I spent some more time testing this new patch as-is (interrupts
> disabled). I know I probably should have fixed the patch, but it's hard
> to get time on big test hardware sometimes, and I was already well
> along my way with testing.
>
> In regards to the quota usage overage I was seeing earlier: I have a
> theory as to what might be happening here, and I'm pretty sure it's
> related to the IRQs being disabled during the rq->lock walk. I think
> that the main fast thread was able to use an excess amount of quota
> because the timer interrupt meant to stop it wasn't being handled in a
> timely fashion due to the interrupts being disabled. On my 8-core
> machine this resulted in what looked like simply improved usage of the
> quota, but when I ran the test on an 80-core machine I saw a massive
> overage of cpu usage when running fibtest. Specifically, running
> fibtest for 5 seconds with 50ms quota/100ms period, where I expected
> ~2500ms of quota usage, I got 3731ms of cpu usage, an unexpected
> overage of 1231ms. Is that a reasonable theory?

I think I've figured out what's going on here (and a related issue that
gave me some inconsistency when trying to debug it): other "slow"
threads can wake up while the slack timer is in distribute and
double-spend some runtime. Since we lsub_positive rather than allow
cfs_b->runtime to go negative, this double-spending is permanent, and
can go on indefinitely. (A rough sketch of the double-spend is at the
bottom of this mail.)

In addition, if things fall out in a slightly different way, all the
"slow" threads can wind up getting on cpu and claiming slices of runtime
before the "fast" thread does, and then it just has to wait another
slack period and hope that the ordering winds up better that time. This
just depends on things like IPI latency and maybe what order things
happen at the start of the period. Ugh.

Maybe we /do/ just give up and say that most people don't seem to be
using cfs_b in a way where expiration of the leftover 1ms matters.
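To make the double-spend concrete, here is a minimal user-space model of
it. This is not the kernel code: the pool/slice/slack numbers are made
up, lsub_positive() is reimplemented from its definition, and the two
"concurrent" steps are just run back to back where the kernel would
interleave them with cfs_b->lock dropped.

#include <stdio.h>

/* toy model of the quota pool; the kernel's is cfs_b->runtime */
static unsigned long long cfs_b_runtime;

/* like the kernel's lsub_positive(): subtract, clamping at zero */
static void lsub_positive(unsigned long long *ptr, unsigned long long val)
{
	*ptr -= (*ptr < val) ? *ptr : val;
}

int main(void)
{
	unsigned long long slice = 5000000ULL;	/* 5ms bandwidth slice */
	unsigned long long distributed, assigned;

	cfs_b_runtime = 1000000ULL;		/* 1ms of leftover slack */

	/* Slack timer: snapshot the pool under cfs_b->lock, then drop
	 * the lock to walk the throttled cfs_rqs and hand this out. */
	distributed = cfs_b_runtime;

	/* Meanwhile (lock dropped): a waking "slow" thread pulls a
	 * slice from the same pool, emptying it. */
	assigned = (cfs_b_runtime < slice) ? cfs_b_runtime : slice;
	cfs_b_runtime -= assigned;

	/* Slack timer finishes distributing and charges what it handed
	 * out; the clamp means the extra 1ms is never paid back. */
	lsub_positive(&cfs_b_runtime, distributed);

	printf("pool had 1.0ms, handed out %.1fms, pool now %llums\n",
	       (distributed + assigned) / 1e6, cfs_b_runtime / 1000000);
	return 0;
}

This prints "pool had 1.0ms, handed out 2.0ms, pool now 0ms": 2ms of
runtime got spent out of a 1ms pool, and because the pool is clamped at
zero rather than going to -1ms, nothing ever claws the overage back.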