On 3/3/22 21:58, Thomas Zimmermann wrote:
> Store the per-page state for fbdev's deferred I/O in struct
> fb_deferred_io_pageref. Maintain a list of pagerefs for the pages
> that have to be flushed out to video memory. Update all affected
> drivers.
>
> Like with pages before, fbdev acquires a pageref when an mmaped page
> of the framebuffer is being written to. It holds the pageref in a
> list of all currently written pagerefs until it flushes the written
> pages to video memory. Writeback occurs periodically. After writeback
> fbdev releases all pagerefs and builds up a new dirty list until the
> next writeback occurs.
>
> Using pagerefs has a number of benefits.
>
> For pages of the framebuffer, the deferred I/O code used struct
> page.lru as an entry into the list of dirty pages. The lru field is
> owned by the page cache, which makes deferred I/O incompatible with
> some memory pages (most notably DRM's GEM SHMEM allocator). struct
> fb_deferred_io_pageref now provides an entry into a list of dirty
> framebuffer pages, freeing lru for use with the page cache.
>
> Drivers also assumed that struct page.index is the page offset into
> the framebuffer. This is not true for DRM buffers, which are located
> at various offsets within a mapped area. struct fb_deferred_io_pageref
> explicitly stores an offset into the framebuffer. struct page.index
> is now only the page offset into the mapped area.
>
> These changes will allow DRM to use fbdev deferred I/O without an
> intermediate shadow buffer.
>

As mentioned, this is a very nice cleanup.

> Signed-off-by: Thomas Zimmermann <tzimmermann@xxxxxxx>
> ---

[snip]

> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
> index 33f355907fbb..a35f695727c9 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
> @@ -322,12 +322,13 @@ static void vmw_deferred_io(struct fb_info *info,
>  	struct vmw_fb_par *par = info->par;
>  	unsigned long start, end, min, max;
>  	unsigned long flags;
> -	struct page *page;
> +	struct fb_deferred_io_pageref *pageref;
>  	int y1, y2;
>
>  	min = ULONG_MAX;
>  	max = 0;
> -	list_for_each_entry(page, pagelist, lru) {
> +	list_for_each_entry(pageref, pagelist, list) {
> +		struct page *page = pageref->page;
>  		start = page->index << PAGE_SHIFT;

Do you think it may be worth it to also decouple struct page.index and
store the index in a struct fb_deferred_io_pageref.index field?

It's unlikely that this would ever change, but it feels as if the
layers are more cleanly separated that way, since no driver would
access struct page fields directly. The mapping would be 1:1 anyway,
just as it is for the page offset.

In fact, that could even allow drivers to avoid declaring a struct
page *page here.

[snip]

> @@ -340,7 +340,8 @@ static void fbtft_deferred_io(struct fb_info *info, struct list_head *pagelist)
>  	spin_unlock(&par->dirty_lock);
>
>  	/* Mark display lines as dirty */
> -	list_for_each_entry(page, pagelist, lru) {
> +	list_for_each_entry(pageref, pagelist, list) {
> +		struct page *page = pageref->page;

Same here and in the other drivers. You only need the page for the
index, AFAICT.

Reviewed-by: Javier Martinez Canillas <javierm@xxxxxxxxxx>

-- 
Best regards,

Javier Martinez Canillas
Linux Engineering
Red Hat
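
[Illustration only, not part of the patch: a minimal sketch of how the
vmw_deferred_io() dirty-page loop could look if the pageref carried the
framebuffer offset itself, so drivers would not dereference struct page
at all. The pageref->offset field follows the commit-message wording
("explicitly stores an offset into the framebuffer"); its exact name
and type here are assumptions rather than the patch's final API.]

	struct fb_deferred_io_pageref *pageref;
	unsigned long start, end, min, max;

	min = ULONG_MAX;
	max = 0;
	list_for_each_entry(pageref, pagelist, list) {
		/* byte offset into the framebuffer, no struct page access */
		start = pageref->offset;
		end = start + PAGE_SIZE - 1;
		min = min(min, start);
		max = max(max, end);
	}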