On Tue, Sep 12, 2023 at 10:23:02AM +0200, Boris Brezillon wrote:
> On Mon, 11 Sep 2023 19:16:08 -0700
> Matthew Brost <matthew.brost@xxxxxxxxx> wrote:
>
> > Add a generic schedule message interface which sends messages to the
> > backend from the drm_gpu_scheduler main submission thread. The idea is
> > that some of these messages modify state in drm_sched_entity which is
> > also modified during submission. By scheduling these messages and
> > submission in the same thread, there is no race when changing states in
> > drm_sched_entity.
> >
> > This interface will be used in Xe, the new Intel GPU driver, to clean up,
> > suspend, resume, and change scheduling properties of a drm_sched_entity.
> >
> > The interface is designed to be generic and extendable, with only the
> > backend understanding the messages.
>
> I didn't follow the previous discussions closely enough, but it seemed
> to me that the whole point of this 'ordered-wq for scheduler' approach
> was so you could interleave your driver-specific work items in the
> processing without changing the core. This messaging system looks like
> something that could/should be entirely driver-specific to me, and I'm
> not convinced this thin 'work -> generic_message_callback' layer is
> worth it. You can simply have your own xe_msg_process work, and an
> xe_msg_send helper that schedules this work. Assuming other drivers
> need this messaging API, they'll probably have their own message ids
> and payloads, and the automation done here is simple enough that it can
> be duplicated. That's just my personal opinion, of course, and if
> others see this message interface as valuable, I'm fine with it.

Good point. I am fine deleting this from the scheduler and making it
driver-specific.

Matt
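
For reference, a driver-private version along the lines Boris describes
could look roughly like the sketch below. All names here (xe_sched,
xe_sched_msg, xe_msg_send, xe_handle_msg) are illustrative placeholders,
not the actual Xe code, and it assumes the scheduler's ordered workqueue
from this series is reachable as sched->base.submit_wq so that message
processing is serialized with job submission.

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct xe_sched_msg {
	struct list_head link;
	unsigned int opcode;	/* driver-defined: CLEANUP, SUSPEND, ... */
	void *data;		/* opaque payload understood by the backend */
};

struct xe_sched {
	struct drm_gpu_scheduler base;
	spinlock_t msg_lock;
	struct list_head msgs;
	struct work_struct msg_work;	/* INIT_WORK(..., xe_msg_process) at init */
};

static void xe_msg_process(struct work_struct *w)
{
	struct xe_sched *sched = container_of(w, struct xe_sched, msg_work);
	struct xe_sched_msg *msg;

	/*
	 * Runs on the scheduler's ordered workqueue, so it cannot race with
	 * drm_sched_entity state changes done from the submission path.
	 */
	spin_lock(&sched->msg_lock);
	while ((msg = list_first_entry_or_null(&sched->msgs,
					       struct xe_sched_msg, link))) {
		list_del(&msg->link);
		spin_unlock(&sched->msg_lock);

		xe_handle_msg(sched, msg);	/* driver-specific dispatch (hypothetical) */

		spin_lock(&sched->msg_lock);
	}
	spin_unlock(&sched->msg_lock);
}

static void xe_msg_send(struct xe_sched *sched, struct xe_sched_msg *msg)
{
	spin_lock(&sched->msg_lock);
	list_add_tail(&msg->link, &sched->msgs);
	spin_unlock(&sched->msg_lock);

	queue_work(sched->base.submit_wq, &sched->msg_work);
}

The only scheduler-facing piece is the workqueue the work is queued on;
the message ids, payloads, and dispatch stay entirely in the driver,
which is the duplication Boris argues is cheap enough to live with.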