On Mon, Jan 30, 2023 at 4:26 PM Tejun Heo <tj@xxxxxxxxxx> wrote:
>
> Hello,
>
> On Mon, Jan 30, 2023 at 01:38:15PM -0800, Josh Don wrote:
> > > The core-sched support is composed of the following parts:
> >
> > Thanks, this looks pretty reasonable overall.
> >
> > One meta comment is that I think we can short-circuit from
> > touch_core_sched() when we have sched_core_disabled().
>
> Yeah, touch_core_sched() is really cheap (it's just an assignment from an rq
> field to a task field) but sched_core_disabled() is also just a static
> branch. Will update.

Yep, true, I was just going through and reasoning about whether
anything needed to be done in the !sched_core_disabled() case.

> > Reviewed-by: Josh Don <joshdon@xxxxxxxxxx>
> >
> > > +        /*
> > > +         * While core-scheduling, rq lock is shared among
> > > +         * siblings but the debug annotations and rq clock
> > > +         * aren't. Do pinning dance to transfer the ownership.
> > > +         */
> > > +        WARN_ON_ONCE(__rq_lockp(rq) != __rq_lockp(srq));
> > > +        rq_unpin_lock(rq, rf);
> > > +        rq_pin_lock(srq, &srf);
> > > +
> > > +        update_rq_clock(srq);
> >
> > Unfortunate that we have to do this superfluous update; maybe we can
> > save/restore the clock flags from before the pinning shenanigans?
>
> So, this one isn't really superfluous. There are two rq's involved - self
> and sibling. self's rq clock is saved and restored through rq_unpin_lock()
> and rq_repin_lock(). We're transferring the lock ownership from self to
> sibling without actually unlocking and relocking the lock, as they should
> be sharing the same lock; however, that doesn't mean that the two queues
> share rq clocks, so the sibling needs to update its rq clock upon getting
> the lock transferred to it. It might make sense to make the siblings share
> the rq clock when core-sched is enabled, but that's probably for some other
> time.

Yep, whoops, I forgot that part didn't make it.
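
For anyone following along, the short-circuit discussed above would
presumably end up looking something like the rough sketch below. This is
only an approximation, not the actual patch: the body of touch_core_sched()
is paraphrased from Tejun's description ("an assignment from an rq field to
a task field"), and the core_sched_at field name is an assumption.

        static void touch_core_sched(struct rq *rq, struct task_struct *p)
        {
                /*
                 * Nothing to stamp when core scheduling is off;
                 * sched_core_disabled() is a static branch, so this check
                 * is essentially free.
                 */
                if (sched_core_disabled())
                        return;

                /*
                 * Hypothetical field name; the assignment from an rq field
                 * (via rq_clock_task()) to a task field described above.
                 */
                p->scx.core_sched_at = rq_clock_task(rq);
        }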