On Fri, Aug 07, 2015 at 01:55:01PM +0200, Daniel Vetter wrote:
> On Fri, Aug 07, 2015 at 11:10:58AM +0100, Chris Wilson wrote:
> > On Fri, Aug 07, 2015 at 10:07:28AM +0200, Daniel Vetter wrote:
> > > On Thu, Aug 06, 2015 at 05:43:39PM +0100, Chris Wilson wrote:
> > > But it's still salvageable I think since we only care about coherency for
> > > the gpu (where data might be stuck in cpu caches). From the cpu's pov (and
> > > hence the entire system except the gpu) we should never see inconsistency
> > > really - as soon as the gpu does a write to a cacheline it'll win, and
> > > before that nothing in the system can assume anything about the contents
> > > of these pages.
> >
> > But the GPU doesn't write to cachelines (except in LLC/snooped+flush).
> > The issue is what happens when the user lies about writing to the object
> > through a WB cpu mapping (dirtying a cacheline) and the GPU also does.
> > Who wins then?
> >
> > We have postulated that it could be entirely possible for the CPU to
> > trust its cache and return local contents, and for those to be also
> > considered not dirty and so not flushed to memory. Later, we then read
> > what the gpu wrote and chaos ensues.
>
> This was just with an eye towards purged memory where we don't care about
> correct data anyway. The only thing we care about is that when it's all
> overwritten again by someone, that someone should win. And since GEM
> assumes new pages are in the cpu domain and clflushes them first that
> should hold even for GEM. But the tricky part is that I think we can pull
> this off only if the backing storage is purged already.

But what's the difference between:

	lock
	put_pages
	purge

and

	lock
	purge
	put_pages

if you are dismissing the user dirtying CPU cachelines vs already dirty
GPU data as being a source of worry?
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
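
[Editor's note: for readers skimming the thread, the following is a minimal
stand-alone C sketch of the two orderings Chris is comparing. The names
(lock, put_pages, purge) are hypothetical stubs standing in for the real
i915/GEM entry points, and the comments merely paraphrase the coherency
argument above; this is not the driver code.]

/*
 * Hedged sketch: the only behavioural difference between the two
 * orderings is whether a writeback of user-dirtied CPU cachelines can
 * still land in pages the GPU has also written before the backing
 * storage is thrown away - the exact race being dismissed as harmless
 * for purged memory.
 */
#include <stdio.h>

static void lock(void)      { printf("  take lock\n"); }
static void unlock(void)    { printf("  drop lock\n"); }
static void put_pages(void) { printf("  writeback/clflush dirty CPU lines, release pages\n"); }
static void purge(void)     { printf("  truncate backing storage, contents discarded\n"); }

/* ordering A: lock; put_pages; purge */
static void order_a(void)
{
	lock();
	put_pages();  /* may write user-dirtied cachelines over GPU-written data */
	purge();      /* ...which is then discarded anyway */
	unlock();
}

/* ordering B: lock; purge; put_pages */
static void order_b(void)
{
	lock();
	purge();      /* backing store already gone */
	put_pages();  /* nothing meaningful left to write back */
	unlock();
}

int main(void)
{
	puts("ordering A:"); order_a();
	puts("ordering B:"); order_b();
	return 0;
}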