On Tue, Dec 15, 2015 at 04:26:08PM +0530, ankitprasad.r.sharma@xxxxxxxxx wrote:
> From: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> +static int
> +copy_content(struct drm_i915_gem_object *obj,
> +             struct drm_i915_private *i915,
> +             struct address_space *mapping)
> +{
> +        struct drm_mm_node node;
> +        int ret, i;
> +
> +        ret = i915_gem_object_set_to_gtt_domain(obj, false);
> +        if (ret)
> +                return ret;
> +
> +        /* stolen objects are already pinned to prevent shrinkage */
> +        memset(&node, 0, sizeof(node));
> +        ret = insert_mappable_node(i915, &node);
> +        if (ret)
> +                return ret;
> +
> +        for (i = 0; i < obj->base.size / PAGE_SIZE; i++) {
> +                struct page *page;
> +                void __iomem *src;
> +                void *dst;
> +
> +                i915->gtt.base.insert_page(&i915->gtt.base,
> +                                           i915_gem_object_get_dma_address(obj, i),
> +                                           node.start,
> +                                           I915_CACHE_NONE,
> +                                           0);
> +
> +                page = shmem_read_mapping_page(mapping, i);
> +                if (IS_ERR(page)) {
> +                        ret = PTR_ERR(page);
> +                        break;
> +                }
> +
> +                src = io_mapping_map_atomic_wc(i915->gtt.mappable, node.start);
> +                dst = kmap_atomic(page);
> +                wmb();
> +                memcpy_fromio(dst, src, PAGE_SIZE);
> +                wmb();
> +                kunmap_atomic(dst);
> +                io_mapping_unmap_atomic(src);
> +
> +                page_cache_release(page);
> +        }
> +
> +        wmb();

After moving the barriers, this one is redundant.

> +        i915->gtt.base.clear_range(&i915->gtt.base,
> +                                   node.start, node.size,
> +                                   true);
> +        drm_mm_remove_node(&node);

> +        obj->base.read_domains = I915_GEM_DOMAIN_CPU;
> +        obj->base.write_domain = I915_GEM_DOMAIN_CPU;

On the error path, we shouldn't be marking new domains, as the object
reverts back to the previous set of pages.

If you do a bit of rearranging of the goto err, you could just put

        return i915_gem_object_set_to_cpu_domain(obj, true);

It will be mostly a no-op over the currently set read/write domains
(but should help in case it ever is not).

> +int
> +i915_gem_freeze(struct drm_device *dev)
> +{
> +        /* Called before i915_gem_suspend() when hibernating */
> +        struct drm_i915_private *i915 = to_i915(dev);
> +        struct drm_i915_gem_object *obj, *tmp;
> +        struct list_head *phase[] = {
> +                &i915->mm.unbound_list, &i915->mm.bound_list, NULL
> +        }, **p;
> +        int ret;
> +
> +        ret = i915_mutex_lock_interruptible(dev);
> +        if (ret)
> +                return ret;
> +
> +        /* Across hibernation, the stolen area is not preserved.
> +         * Anything inside stolen must be copied back to normal
> +         * memory if we wish to preserve it.
> +         */
> +        for (p = phase; *p; p++) {

Since we are making changes, might as well push this loop to
i915_gem_stolen_freeze() and i915_gem_stolen.c. Probably best to push
i915_gem_object_stolen_migrate_to_shmemfs() there as well.
-Chris

--
Chris Wilson, Intel Open Source Technology Centre
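
For what it's worth, a sketch of the loop tail once the barriers sit
around the copy itself (that placement is an assumption following the
earlier discussion in the thread, not the final patch), showing why the
barrier after the loop can go:

                src = io_mapping_map_atomic_wc(i915->gtt.mappable, node.start);
                dst = kmap_atomic(page);
                wmb();          /* PTE update must land before we read */
                memcpy_fromio(dst, src, PAGE_SIZE);
                wmb();          /* copy must finish before we unmap */
                kunmap_atomic(dst);
                io_mapping_unmap_atomic(src);

                page_cache_release(page);
        }

        /* no wmb() here: each iteration already ordered its own copy */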
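
To illustrate the error-path suggestion, a sketch of the rearranged tail
of copy_content(); only the i915_gem_object_set_to_cpu_domain() call
comes from the review, the surrounding control flow is an assumption:

        /* unwind the temporary GTT mapping on both paths */
        i915->gtt.base.clear_range(&i915->gtt.base,
                                   node.start, node.size,
                                   true);
        drm_mm_remove_node(&node);

        /* on error, the object reverts to its previous pages, so leave
         * the old read/write domains untouched */
        if (ret)
                return ret;

        /* on success, this is mostly a no-op over the already-set
         * domains, but stays correct if that ever changes */
        return i915_gem_object_set_to_cpu_domain(obj, true);
}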
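
And a sketch of the suggested split between i915_gem.c and
i915_gem_stolen.c; the signatures and the loop body (truncated in the
quote above) are assumptions built around the names the review proposes:

/* in i915_gem_stolen.c */
int i915_gem_stolen_freeze(struct drm_i915_private *i915)
{
        struct drm_i915_gem_object *obj, *tmp;
        struct list_head *phase[] = {
                &i915->mm.unbound_list, &i915->mm.bound_list, NULL
        }, **p;

        for (p = phase; *p; p++) {
                list_for_each_entry_safe(obj, tmp, *p, global_list) {
                        int ret;

                        /* only stolen-backed objects need migrating */
                        if (obj->stolen == NULL)
                                continue;

                        ret = i915_gem_object_stolen_migrate_to_shmemfs(obj);
                        if (ret)
                                return ret;
                }
        }

        return 0;
}

/* i915_gem_freeze() in i915_gem.c then reduces to */
int i915_gem_freeze(struct drm_device *dev)
{
        struct drm_i915_private *i915 = to_i915(dev);
        int ret;

        ret = i915_mutex_lock_interruptible(dev);
        if (ret)
                return ret;

        /* Across hibernation, the stolen area is not preserved.
         * Anything inside stolen must be copied back to normal
         * memory if we wish to preserve it.
         */
        ret = i915_gem_stolen_freeze(i915);

        mutex_unlock(&dev->struct_mutex);
        return ret;
}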