* Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:

> On Mon, Sep 18 2023 at 20:21, Andy Lutomirski wrote:
> > On Wed, Aug 30, 2023, at 11:49 AM, Ankur Arora wrote:
> >
> > Why do we support anything other than full preempt? I can think of
> > two reasons, neither of which I think is very good:
> >
> > 1. Once upon a time, tracking preempt state was expensive. But we fixed that.
> >
> > 2. Folklore suggests that there's a latency vs throughput tradeoff,
> >    and serious workloads, for some definition of serious, want
> >    throughput, so they should run without full preemption.
>
> It's absolutely not folklore. Run to completion has well known
> benefits, as it avoids contention and avoids the overhead of scheduling
> in a large number of scenarios.
>
> We've seen that painfully in PREEMPT_RT before we came up with the
> concept of lazy preemption for throughput oriented tasks.

Yeah, for a large majority of workloads a reduction in preemption
increases batching and improves cache locality. Most
scalability-conscious enterprise users want longer timeslices & better
cache locality, not shorter timeslices with spread-out cache use.

There are microbenchmarks that fit mostly in cache and that benefit if
work is immediately processed by freshly woken tasks - but that's not
true for most workloads with a substantial real-life cache footprint.

Thanks,

	Ingo
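
The lazy-preemption idea referred to above can be thought of, very
roughly, as two kinds of reschedule request: an immediate one that
preempts at the next preemption point, and a lazy one that lets the
running task keep going and is only escalated if it overstays. The toy
user-space C model below is only a sketch of that decision logic under
those assumptions; the flag names and the tick-based escalation rule
here are illustrative and do not reflect the actual kernel
implementation or the patch series under discussion.

/*
 * Toy user-space model of the lazy-preemption idea: a lazy reschedule
 * request does not preempt at ordinary preemption points, but is
 * escalated to an immediate preemption if ignored for too long.
 * This is not kernel code; names and thresholds are made up.
 */
#include <stdbool.h>
#include <stdio.h>

enum resched_flag {
	RESCHED_NONE,
	RESCHED_LAZY,	/* throughput-oriented request: defer the switch   */
	RESCHED_NOW,	/* latency-sensitive request: switch immediately    */
};

struct toy_task {
	const char *name;
	enum resched_flag pending;
	int ticks_since_lazy;	/* ticks a lazy request has been pending */
};

/* Escalate a lazy request if the running task ignores it this long. */
#define LAZY_ESCALATE_TICKS 2

/* Called at an ordinary preemption point (think unlock/preempt_enable). */
static bool preempt_at_preemption_point(const struct toy_task *t)
{
	/* Only an immediate request preempts here; a lazy request lets the
	 * current task keep batching work with its cache still warm. */
	return t->pending == RESCHED_NOW;
}

/* Called once per timer tick. */
static bool preempt_at_tick(struct toy_task *t)
{
	if (t->pending == RESCHED_NOW)
		return true;
	if (t->pending == RESCHED_LAZY &&
	    ++t->ticks_since_lazy >= LAZY_ESCALATE_TICKS)
		return true;	/* task overstayed: behave like full preempt */
	return false;
}

int main(void)
{
	struct toy_task t = { .name = "batch-job", .pending = RESCHED_LAZY };

	for (int tick = 1; tick <= 3; tick++)
		printf("%s, tick %d: preempt at preemption point? %s, at tick? %s\n",
		       t.name, tick,
		       preempt_at_preemption_point(&t) ? "yes" : "no",
		       preempt_at_tick(&t) ? "yes" : "no");
	return 0;
}

In this sketch the lazy request never fires at a preemption point and
only forces a switch after two ticks, which is the throughput-friendly
behaviour the thread argues for, while an immediate request would still
preempt right away for latency-sensitive wakeups.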