On Tue, Sep 10, 2019 at 04:59:57PM +0200, Noralf Trønnes wrote:
> 
> 
> Den 10.09.2019 15.51, skrev Thomas Zimmermann:
> > Hi
> > 
> > Am 10.09.19 um 15:34 schrieb Noralf Trønnes:
> >> 
> >> 
> >> Den 10.09.2019 14.48, skrev Thomas Zimmermann:
> >>> Hi
> >>> 
> >>> Am 10.09.19 um 13:52 schrieb Gerd Hoffmann:
> >>>> On Mon, Sep 09, 2019 at 04:06:32PM +0200, Thomas Zimmermann wrote:
> >>>>> Before updating the display from the console's shadow buffer, the dirty
> >>>>> worker now waits for vblank. This allows several screen updates to pile
> >>>>> up and acts as a rate limiter.
> >>>>> 
> >>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@xxxxxxx>
> >>>>> ---
> >>>>>  drivers/gpu/drm/drm_fb_helper.c | 12 ++++++++++++
> >>>>>  1 file changed, 12 insertions(+)
> >>>>> 
> >>>>> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> >>>>> index a7ba5b4902d6..017e2f6bd1b9 100644
> >>>>> --- a/drivers/gpu/drm/drm_fb_helper.c
> >>>>> +++ b/drivers/gpu/drm/drm_fb_helper.c
> >>>>> @@ -402,8 +402,20 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> >>>>>  						    dirty_work);
> >>>>>  	struct drm_clip_rect *clip = &helper->dirty_clip;
> >>>>>  	struct drm_clip_rect clip_copy;
> >>>>> +	struct drm_crtc *crtc;
> >>>>>  	unsigned long flags;
> >>>>>  	void *vaddr;
> >>>>> +	int ret;
> >>>>> +
> >>>>> +	/* rate-limit update frequency */
> >>>>> +	mutex_lock(&helper->lock);
> >>>>> +	crtc = helper->client.modesets[0].crtc;
> >>>>> +	ret = drm_crtc_vblank_get(crtc);
> >>>>> +	if (!ret) {
> >>>>> +		drm_crtc_wait_one_vblank(crtc);
> >>>>> +		drm_crtc_vblank_put(crtc);
> >>>>> +	}
> >>>>> +	mutex_unlock(&helper->lock);
> >>>> 
> >>>> Hmm, not sure it is the best plan to sleep for a while in the worker,
> >>>> especially while holding the lock.
> >>>> 
> >>>> What does the lock protect against here?  Accessing
> >>> 
> >>> This lock is held by the fbdev code during mode-setting operations (but
> >>> not drawing operations). So *crtc might be gone if we don't hold it here.
> >>> 
> >>>> helper->client.modesets? If so then you can unlock before going to
> >>>> sleep in drm_crtc_wait_one_vblank() I think.
> >>> 
> >>> I looked, but I cannot find any code that protects crtc while vblank is
> >>> active. I'd rather not change the current code until someone can clarify.
> >>> 
> >> 
> >> The client->modesets array and the crtc struct member are invariant over
> >> the lifetime of client (drm_client_modeset_create()). No need to take a
> >> lock for access. See drm_client_modeset_release() for the things that
> >> _can_ change and need protection (protected by client->modeset_mutex).
> > 
> > Thanks for the reply. So we don't need the lock? But why does
> > drm_fb_helper_ioctl() take it? ioctl exclusiveness?
> > 
> 
> Because of drm_master_internal_acquire() it's necessary to take the lock
> first. Those are the locking rules of drm_fb_helper: first take the fb
> helper lock, then the dev->master_mutex. We stay away if there's a
> userspace master, and if there's none, we prevent userspace from becoming
> master while we do stuff.
> 
> But looking at drm_fb_helper_ioctl() now, I see that it's not strictly
> necessary to do this, since all this function can do is vblank wait and
> that's fine even if userspace is master. The locking was necessary
> before I refactored and moved stuff to drm_client, because at that time
> the modeset array was deleted and recreated when probing connectors.
> But it doesn't hurt either in case more functionality is added to the
> ioctl.
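If the lookup really is safe without the lock, then the rate-limit hunk
above could presumably shrink to something like this (just a sketch,
untested, assuming drm_crtc_vblank_get() alone keeps the vblank
machinery alive across the sleep), which would also address Gerd's
concern about sleeping with the lock held:

	/* rate-limit update frequency; per Noralf, client->modesets and
	 * the crtc pointer are invariant, so no lock for the lookup */
	crtc = helper->client.modesets[0].crtc;
	if (!drm_crtc_vblank_get(crtc)) {
		/* sleep without holding helper->lock */
		drm_crtc_wait_one_vblank(crtc);
		drm_crtc_vblank_put(crtc);
	}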
> One wouldn't think that would ever happen, since fbdev is going
> away soon, but still we keep polishing it ;)

fbdev drivers are hopefully disappearing, but I don't think fbdev as
the uapi interface will disappear anytime soon. Hence it's still
somewhat reasonable to keep polishing it, imo. It should actually help
convince people to move their fbdev driver over to drm, if that gives
them a more polished fbdev implementation :-)
-Daniel

> 
> Noralf.
> 
> >> I don't see a problem with sleeping in the worker though, but I might
> >> be missing something. AFAICS changes will just pile up in ->dirty_clip
> >> and the worker will be scheduled for a new run once it's done with the
> >> current update.
> > 
> > Yes, that's the intention of the patch. We hope to reduce the number of
> > memcpys by handling several of them at once.
> > 
> > Best regards
> > Thomas
> > 
> >> 
> >> Noralf.
> >> 
> >>> Best regards
> >>> Thomas
> >>> 
> >>>> 
> >>>> cheers,
> >>>>   Gerd
> >>>> 
> >>> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
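P.S.: Re the ->dirty_clip pile-up mentioned above: the accumulation is
just a min/max merge under helper->dirty_lock, roughly along these
lines (paraphrased from memory, so the details may not match the tree
exactly):

	/* drm_fb_helper_dirty(): grow the pending damage rect and kick
	 * the worker; multiple calls merge into a single rect */
	spin_lock_irqsave(&helper->dirty_lock, flags);
	clip->x1 = min_t(u32, clip->x1, x);
	clip->y1 = min_t(u32, clip->y1, y);
	clip->x2 = max_t(u32, clip->x2, x + width);
	clip->y2 = max_t(u32, clip->y2, y + height);
	spin_unlock_irqrestore(&helper->dirty_lock, flags);
	schedule_work(&helper->dirty_work);

	/* drm_fb_helper_dirty_work(): snapshot and reset the rect, so
	 * new damage keeps piling up while the rate-limited flush runs */
	spin_lock_irqsave(&helper->dirty_lock, flags);
	clip_copy = *clip;
	clip->x1 = clip->y1 = ~0;
	clip->x2 = clip->y2 = 0;
	spin_unlock_irqrestore(&helper->dirty_lock, flags);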