On Thu, Dec 10, 2015 at 06:51:23PM +0000, Dave Gordon wrote:
> In various places, a single page of a (regular) GEM object is mapped into
> CPU address space and updated. In each such case, either the page or the
> object should be marked dirty, to ensure that the modifications are
> not discarded if the object is evicted under memory pressure.
>
> The typical sequence is:
>     va = kmap_atomic(i915_gem_object_get_page(obj, pageno));
>     *(va+offset) = ...
>     kunmap_atomic(va);
>
> Here we introduce i915_gem_object_get_dirty_page(), which performs the
> same operation as i915_gem_object_get_page() but with the side-effect
> of marking the returned page dirty in the pagecache. This will ensure
> that if the object is subsequently evicted (due to memory pressure),
> the changes are written to backing store rather than discarded.
>
> Note that it works only for regular (shmfs-backed) GEM objects, but (at
> least for now) those are the only ones that are updated in this way --
> the objects in question are contexts and batchbuffers, which are always
> shmfs-backed.
>
> Separate patches deal with the cases where whole objects are (or may
> be) dirtied.
>
> v3: Mark two more pages dirty in the page-boundary-crossing
>     cases of the execbuffer relocation code [Chris Wilson]
>
> Signed-off-by: Dave Gordon <david.s.gordon@xxxxxxxxx>
> Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>

Reviewed-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx