This is a note to let you know that I've just added the patch titled

    drm/cirrus: deal with bo reserve fail in dirty update path

to the 3.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
    drm-cirrus-deal-with-bo-reserve-fail-in-dirty-update-path.patch
and it can be found in the queue-3.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From f3b2bbdc8a87a080ccd23d27fca4b87d61340dd4 Mon Sep 17 00:00:00 2001
From: Dave Airlie <airlied@xxxxxxxxxx>
Date: Thu, 2 May 2013 02:45:02 -0400
Subject: drm/cirrus: deal with bo reserve fail in dirty update path

From: Dave Airlie <airlied@xxxxxxxxxx>

commit f3b2bbdc8a87a080ccd23d27fca4b87d61340dd4 upstream.

Port the mgag200 fix over to cirrus, as it suffers from the same issue.

In F19 testing we noticed a lot of errors in dmesg about being unable
to reserve the buffer when plymouth starts; this is because the buffer
is in the process of migrating, so it makes sense that we can't
reserve it.

To deal with this, add delayed handling of the dirty updates: when the
bo is unreservable, store the damage and apply it later. In the normal
console case this shouldn't ever happen; it's only when plymouth or X
is pushing the console bo to system memory.

Signed-off-by: Dave Airlie <airlied@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 drivers/gpu/drm/cirrus/cirrus_drv.h   |    2 +
 drivers/gpu/drm/cirrus/cirrus_fbdev.c |   38 +++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/cirrus/cirrus_ttm.c   |    2 -
 3 files changed, 40 insertions(+), 2 deletions(-)
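For readers who want the scheme in isolation before reading the diff: the
sketch below is a minimal, self-contained userspace analogue of the deferred
dirty-rectangle handling this patch adds. It is not part of the patch; the
names (dirty_tracker, try_reserve_bo, dirty_update) are invented for
illustration, and a pthread mutex stands in for the driver's dirty_lock
spinlock.

/*
 * Userspace sketch of the deferred dirty-rect update pattern.
 * All identifiers here are illustrative, not cirrus driver symbols.
 */
#include <limits.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct dirty_tracker {
        pthread_mutex_t lock;
        int x1, y1, x2, y2;     /* accumulated dirty rectangle */
};

/* Stand-in for cirrus_bo_reserve(): pretend the BO is busy on the first call. */
static bool try_reserve_bo(void)
{
        static int calls;
        return calls++ > 0;
}

static void dirty_update(struct dirty_tracker *t, int x, int y, int width, int height)
{
        int x2 = x + width - 1;
        int y2 = y + height - 1;
        bool store_for_later = !try_reserve_bo();

        pthread_mutex_lock(&t->lock);

        /* Merge any damage stored by earlier, skipped updates. */
        if (t->x1 < x)
                x = t->x1;
        if (t->y1 < y)
                y = t->y1;
        if (t->x2 > x2)
                x2 = t->x2;
        if (t->y2 > y2)
                y2 = t->y2;

        if (store_for_later) {
                /* BO busy (migrating): remember the union and try again later. */
                t->x1 = x;
                t->y1 = y;
                t->x2 = x2;
                t->y2 = y2;
                pthread_mutex_unlock(&t->lock);
                return;
        }

        /* BO reserved: reset the stored rectangle and flush the union now. */
        t->x1 = t->y1 = INT_MAX;
        t->x2 = t->y2 = 0;
        pthread_mutex_unlock(&t->lock);

        printf("flush rect (%d,%d)-(%d,%d)\n", x, y, x2, y2);
}

int main(void)
{
        struct dirty_tracker t = {
                .lock = PTHREAD_MUTEX_INITIALIZER,
                .x1 = INT_MAX, .y1 = INT_MAX, .x2 = 0, .y2 = 0,
        };

        dirty_update(&t, 10, 10, 20, 20);       /* reserve "fails": damage stored */
        dirty_update(&t, 40, 40, 8, 8);         /* reserve succeeds: union flushed */
        return 0;
}

Built with something like "cc -pthread sketch.c", the first call only stores
the rectangle and the second flushes the union (10,10)-(47,47), which mirrors
what the driver now does when a reserve that failed with -EBUSY later
succeeds. The actual change follows.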
--- a/drivers/gpu/drm/cirrus/cirrus_drv.h
+++ b/drivers/gpu/drm/cirrus/cirrus_drv.h
@@ -154,6 +154,8 @@ struct cirrus_fbdev {
 	struct list_head fbdev_list;
 	void *sysram;
 	int size;
+	int x1, y1, x2, y2; /* dirty rect */
+	spinlock_t dirty_lock;
 };
 
 struct cirrus_bo {
--- a/drivers/gpu/drm/cirrus/cirrus_fbdev.c
+++ b/drivers/gpu/drm/cirrus/cirrus_fbdev.c
@@ -27,16 +27,51 @@ static void cirrus_dirty_update(struct c
 	int bpp = (afbdev->gfb.base.bits_per_pixel + 7)/8;
 	int ret;
 	bool unmap = false;
+	bool store_for_later = false;
+	int x2, y2;
+	unsigned long flags;
 
 	obj = afbdev->gfb.obj;
 	bo = gem_to_cirrus_bo(obj);
 
+	/*
+	 * try and reserve the BO, if we fail with busy
+	 * then the BO is being moved and we should
+	 * store up the damage until later.
+	 */
 	ret = cirrus_bo_reserve(bo, true);
 	if (ret) {
-		DRM_ERROR("failed to reserve fb bo\n");
+		if (ret != -EBUSY)
+			return;
+		store_for_later = true;
+	}
+
+	x2 = x + width - 1;
+	y2 = y + height - 1;
+	spin_lock_irqsave(&afbdev->dirty_lock, flags);
+
+	if (afbdev->y1 < y)
+		y = afbdev->y1;
+	if (afbdev->y2 > y2)
+		y2 = afbdev->y2;
+	if (afbdev->x1 < x)
+		x = afbdev->x1;
+	if (afbdev->x2 > x2)
+		x2 = afbdev->x2;
+
+	if (store_for_later) {
+		afbdev->x1 = x;
+		afbdev->x2 = x2;
+		afbdev->y1 = y;
+		afbdev->y2 = y2;
+		spin_unlock_irqrestore(&afbdev->dirty_lock, flags);
 		return;
 	}
 
+	afbdev->x1 = afbdev->y1 = INT_MAX;
+	afbdev->x2 = afbdev->y2 = 0;
+	spin_unlock_irqrestore(&afbdev->dirty_lock, flags);
+
 	if (!bo->kmap.virtual) {
 		ret = ttm_bo_kmap(&bo->bo, 0, bo->bo.num_pages, &bo->kmap);
 		if (ret) {
@@ -268,6 +303,7 @@ int cirrus_fbdev_init(struct cirrus_devi
 
 	cdev->mode_info.gfbdev = gfbdev;
 	gfbdev->helper.funcs = &cirrus_fb_helper_funcs;
+	spin_lock_init(&gfbdev->dirty_lock);
 
 	ret = drm_fb_helper_init(cdev->dev, &gfbdev->helper,
 				 cdev->num_crtc, CIRRUSFB_CONN_LIMIT);
--- a/drivers/gpu/drm/cirrus/cirrus_ttm.c
+++ b/drivers/gpu/drm/cirrus/cirrus_ttm.c
@@ -321,7 +321,7 @@ int cirrus_bo_reserve(struct cirrus_bo *
 
 	ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, 0);
 	if (ret) {
-		if (ret != -ERESTARTSYS)
+		if (ret != -ERESTARTSYS && ret != -EBUSY)
 			DRM_ERROR("reserve failed %p\n", bo);
 		return ret;
 	}


Patches currently in stable-queue which might be from airlied@xxxxxxxxxx are

queue-3.9/drm-cirrus-deal-with-bo-reserve-fail-in-dirty-update-path.patch

--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html