Hi!

Before going any further with this, I'd like to check whether this is an acceptable way to go.

Background: GPU buffer objects in general, and vmware svga GPU buffers in particular, are mapped by user-space using MIXEDMAP or PFNMAP. Sometimes the address space is backed by a set of pages, sometimes by PCI memory. In the latter case in particular, there is no way to track dirty regions using page_mkwrite() and page_mkclean(), other than allocating a bounce buffer, performing dirty tracking on that, and then copying the data to the real GPU buffer. That comes with a significant memory and performance overhead.

So I'd like to add the following infrastructure: a callback pfn_mkwrite() and a function mkclean_mapping_range(). Typically we will be cleaning a range of ptes rather than random ptes in a vma. This comes with the extra benefit of being usable when the backing memory of the GPU buffer is not coherent with the GPU itself, where we need to either flush caches or move data to synchronize. A rough sketch of the proposed declarations and intended driver-side use is appended below the sign-off.

So this is an RFC for:

1) The API. Is it acceptable? If not, any other suggestions?
2) Modifying apply_to_page_range(). Would it be better to make a standalone, non-populating version?
3) TLB-, MMU- and cache-flushing calls. I've looked at unmap_mapping_range() and page_mkclean_one() to try to get these right, but I'm still unsure.

Thanks,
Thomas Hellström
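
For reference, here is a rough sketch of the proposed interface and how a driver might use it. The names pfn_mkwrite() and mkclean_mapping_range() are the ones proposed above; the exact signatures, the return convention and the my_bo_* helpers are assumptions for illustration only, not the actual patches.

#include <linux/mm.h>
#include <linux/fs.h>

/*
 * Proposed: write-protect (clean) the ptes of all shared mappings of
 * @mapping in the page range [@first, @first + @nr), flushing TLBs so
 * that the next write faults into ->pfn_mkwrite().
 */
void mkclean_mapping_range(struct address_space *mapping,
			   pgoff_t first, pgoff_t nr);

/*
 * Proposed new vm_operations_struct callback, mirroring page_mkwrite()
 * but for PFNMAP / MIXEDMAP ptes that have no struct page to track:
 *
 *	int (*pfn_mkwrite)(struct vm_area_struct *vma,
 *			   struct vm_fault *vmf);
 */

/* Hypothetical driver-side callback: record that the pfn backing
 * vmf->pgoff was written to, then let the fault handler make the pte
 * writable again. */
static int my_bo_vm_pfn_mkwrite(struct vm_area_struct *vma,
				struct vm_fault *vmf)
{
	my_bo_dirty_mark(vma->vm_private_data, vmf->pgoff);
	return 0;
}

/* Later, when the driver wants to pick up the dirty regions, it cleans
 * the range so that new writes fault into ->pfn_mkwrite() again, and
 * then copies or flushes the recorded ranges to the GPU. */
static void my_bo_dirty_scan(struct my_bo *bo)
{
	mkclean_mapping_range(bo->mapping, bo->first_page, bo->num_pages);
	my_bo_transfer_dirty_to_gpu(bo);
}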