Re: [PATCH v3 02/13] drm/sched: Convert drm scheduler to use a work queue rather than kthread

On 2023-09-12 11:02, Matthew Brost wrote:
> On Tue, Sep 12, 2023 at 09:29:53AM +0200, Boris Brezillon wrote:
>> On Mon, 11 Sep 2023 19:16:04 -0700
>> Matthew Brost <matthew.brost@xxxxxxxxx> wrote:
>>
>>> @@ -1071,6 +1063,7 @@ static int drm_sched_main(void *param)
>>>   *
>>>   * @sched: scheduler instance
>>>   * @ops: backend operations for this scheduler
>>> + * @submit_wq: workqueue to use for submission. If NULL, the system_wq is used
>>>   * @hw_submission: number of hw submissions that can be in flight
>>>   * @hang_limit: number of times to allow a job to hang before dropping it
>>>   * @timeout: timeout value in jiffies for the scheduler
>>> @@ -1084,14 +1077,16 @@ static int drm_sched_main(void *param)
>>>   */
>>>  int drm_sched_init(struct drm_gpu_scheduler *sched,
>>>  		   const struct drm_sched_backend_ops *ops,
>>> +		   struct workqueue_struct *submit_wq,
>>>  		   unsigned hw_submission, unsigned hang_limit,
>>>  		   long timeout, struct workqueue_struct *timeout_wq,
>>>  		   atomic_t *score, const char *name, struct device *dev)
>>>  {
>>> -	int i, ret;
>>> +	int i;
>>>  	sched->ops = ops;
>>>  	sched->hw_submission_limit = hw_submission;
>>>  	sched->name = name;
>>> +	sched->submit_wq = submit_wq ? : system_wq;
>>
>> My understanding is that the new design is based on the idea of
>> splitting the drm_sched_main function into work items that can be
>> scheduled independently, so users/drivers can insert their own
>> steps/works without requiring changes to drm_sched. This approach
>> relies on the properties of ordered workqueues (one work item
>> executed at a time, FIFO behavior) to guarantee that these steps are
>> still executed in order, one at a time.
>>
>> Given what you're trying to achieve, I think we should create an
>> ordered workqueue instead of using the system_wq when submit_wq is
>> NULL; otherwise you lose the ordering/serialization guarantee that
>> both the dedicated kthread and an ordered wq provide. It will probably
>> work for most drivers, but might lead to subtle, hard-to-spot ordering
>> issues.
>>
> 
> I debated choosing between the system_wq and creating an ordered wq by
> default myself. Indeed, using the system_wq by default subtly changes
> the behavior, as the run_job & free_job workers can run in parallel. To
> be safe, I agree the default should be an ordered wq. If drivers are
> fine with run_job() and free_job() running in parallel, they are free
> to set submit_wq == system_wq. Will change in next rev.
> 
> Matt

So, yes, this is very good; please do make that change. However, for
the case where run_job() and free_job() are allowed to run in parallel,
perhaps we should have a function parameter to control this, and then
internally decide whether to use system_wq (perhaps not) or our own
workqueue that is simply not ordered. That would give us some
flexibility should we need better control, reporting, etc., of our
workqueue. Roughly what I have in mind is sketched below.
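
A rough sketch only; the helper name and the allow_parallel_submit
parameter are illustrative, nothing from this series:

#include <linux/workqueue.h>

static struct workqueue_struct *
drm_sched_alloc_submit_wq(const char *name, bool allow_parallel_submit)
{
	/*
	 * Either way the workqueue is owned by the scheduler, so we keep
	 * control/reporting in drm_sched's hands; the ordering guarantee
	 * is only dropped when the driver explicitly says run_job() and
	 * free_job() may overlap.
	 */
	if (allow_parallel_submit)
		return alloc_workqueue("%s", 0, 0, name);

	return alloc_ordered_workqueue("%s", 0, name);
}

drm_sched_init() could then call something like this when submit_wq is
NULL, and would have to remember that it owns the resulting workqueue
so it gets destroyed in drm_sched_fini().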
-- 
Regards,
Luben

> 
>>>  	sched->timeout = timeout;
>>>  	sched->timeout_wq = timeout_wq ? : system_wq;
>>>  	sched->hang_limit = hang_limit;
>>> @@ -1100,23 +1095,15 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
>>>  	for (i = DRM_SCHED_PRIORITY_MIN; i < DRM_SCHED_PRIORITY_COUNT; i++)
>>>  		drm_sched_rq_init(sched, &sched->sched_rq[i]);
>>>  
>>> -	init_waitqueue_head(&sched->wake_up_worker);
>>>  	init_waitqueue_head(&sched->job_scheduled);
>>>  	INIT_LIST_HEAD(&sched->pending_list);
>>>  	spin_lock_init(&sched->job_list_lock);
>>>  	atomic_set(&sched->hw_rq_count, 0);
>>>  	INIT_DELAYED_WORK(&sched->work_tdr, drm_sched_job_timedout);
>>> +	INIT_WORK(&sched->work_submit, drm_sched_main);
>>>  	atomic_set(&sched->_score, 0);
>>>  	atomic64_set(&sched->job_id_count, 0);
>>> -
>>> -	/* Each scheduler will run on a seperate kernel thread */
>>> -	sched->thread = kthread_run(drm_sched_main, sched, sched->name);
>>> -	if (IS_ERR(sched->thread)) {
>>> -		ret = PTR_ERR(sched->thread);
>>> -		sched->thread = NULL;
>>> -		DRM_DEV_ERROR(sched->dev, "Failed to create scheduler for %s.\n", name);
>>> -		return ret;
>>> -	}
>>> +	sched->pause_submit = false;
>>>  
>>>  	sched->ready = true;
>>>  	return 0;
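
For completeness, with the new signature quoted above, a driver that
wants the kthread-like serialization would presumably do something
along these lines (the driver-side names and the numeric arguments
below are made up for illustration):

	struct workqueue_struct *wq;
	int ret;

	/* One work item at a time, FIFO: same guarantee the kthread gave. */
	wq = alloc_ordered_workqueue("my-gpu-sched", 0);
	if (!wq)
		return -ENOMEM;

	ret = drm_sched_init(&my_gpu->sched, &my_sched_ops, wq,
			     64, 3, msecs_to_jiffies(500),
			     NULL /* timeout_wq */, NULL /* score */,
			     "my-gpu-sched", my_gpu->dev);
	if (ret) {
		destroy_workqueue(wq);
		return ret;
	}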



