On Thu, Sep 24 2020 at 08:32, Steven Rostedt wrote:
> On Thu, 24 Sep 2020 08:57:52 +0200
> Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
>
>> > Now as for migration disabled nesting, at least now we would have
>> > groupings of this, and perhaps the theorists can handle that. I mean,
>> > how is this much different than having a bunch of tasks blocked on a
>> > mutex with the owner pinned on a CPU?
>> >
>> > migrate_disable() is a BKL of pinning affinity.
>>
>> No. That's just wrong. preempt disable is a concurrency control,
>
> I think you totally misunderstood what I was saying. The above wasn't about
> comparing preempt_disable to migrate_disable. It was comparing
> migrate_disable to a chain of tasks blocked on mutexes where the top owner
> has preempt_disable set. You still have a bunch of tasks that can't move to
> other CPUs.

What? The top owner does not prevent any task from moving. The tasks
cannot move because they are blocked on the mutex, which means they are
not runnable, and non-runnable tasks are not migrated at all.

I really don't understand what you are trying to say.

>> > If we only have local_lock() available (even on !RT), then it makes
>> > the blocking in groups. At least this way you could grep for all the
>> > different local_locks in the system and plug that into the algorithm
>> > for WCS, just like one would with a bunch of mutexes.
>>
>> You cannot do that on RT at all where migrate disable is substituting
>> preempt disable in spin and rw locks. The result would be the same as
>> with a !RT kernel just with horribly bad performance.
>
> Note, the spin and rwlocks already have a lock associated with them. Why
> would it be any different on RT? I wasn't suggesting adding another lock
> inside a spinlock. Why would I recommend THAT? I wasn't recommending
> blindly replacing migrate_disable() with local_lock(). I just meant expose
> local_lock() but not migrate_disable().

We already exposed local_lock() to non-RT kernels, and it is meant for
places which do preempt_disable() or local_irq_disable() without having
a lock associated. Both of those primitives are scopeless and therefore
behave like CPU-local BKLs.

What local_lock() provides in these cases is:

  - It makes the protection scope explicit by associating a named local
    lock which is covered by lockdep.

  - It still maps to preempt_disable() or local_irq_disable() on !RT
    kernels.

  - The scope and the named lock allow RT kernels to substitute real
    (recursion aware) locking primitives which keep preemption and
    interrupts enabled, but still provide fine grained protection for
    the scoped critical section.

So how would you substitute migrate_disable() with a local_lock()? You
can't.

Again, migrate_disable() is NOT a concurrency control and therefore it
cannot be substituted by any concurrency control primitive.

>> That means the stacking problem has to be solved anyway.
>>
>> So why on earth do you want to create yet another special duct tape case
>> for kmap_local() which proliferates inconsistency instead of aiming for
>> consistency across all preemption models?
>
> The idea was to help with the scheduling issue.

I don't see how mixing concepts and trying to duct tape over a problem
which is clearly in the realm of the scheduler, i.e. load balancing and
placement algorithms, is helpful.

Thanks,

        tglx
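For illustration, a minimal sketch of the local_lock() pattern described
above, based on the upstream local_lock_t API from <linux/local_lock.h>.
The struct and function names (foo_pcpu, foo_update) are made up for the
example:

    #include <linux/local_lock.h>
    #include <linux/percpu.h>

    /* Hypothetical per-CPU data protected by a named, scoped local lock. */
    struct foo_pcpu {
            local_lock_t    lock;
            unsigned long   count;
    };

    static DEFINE_PER_CPU(struct foo_pcpu, foo_pcpu) = {
            .lock = INIT_LOCAL_LOCK(lock),
    };

    static void foo_update(void)
    {
            /*
             * On !RT this maps to preempt_disable(). On RT it takes a
             * per-CPU lock which keeps preemption enabled. Either way
             * the protection scope is named and visible to lockdep.
             */
            local_lock(&foo_pcpu.lock);
            __this_cpu_inc(foo_pcpu.count);
            local_unlock(&foo_pcpu.lock);
    }

Compared to an open coded preempt_disable()/preempt_enable() pair, the
critical section here is tied to a named lock, which is what allows RT
to substitute a real locking primitive without changing the callers.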