On Tue, Jul 13, 2021 at 6:14 PM Matthew Auld <matthew.william.auld@xxxxxxxxx> wrote:
> On Tue, 13 Jul 2021 at 16:55, Ville Syrjälä
> <ville.syrjala@xxxxxxxxxxxxxxx> wrote:
> >
> > On Tue, Jul 13, 2021 at 11:45:50AM +0100, Matthew Auld wrote:
> > > + /**
> > > + * @cache_coherent:
> > > + *
> > > + * Track whether the pages are coherent with the GPU if reading or
> > > + * writing through the CPU cache.
> > > + *
> > > + * This largely depends on the @cache_level, for example if the object
> > > + * is marked as I915_CACHE_LLC, then GPU access is coherent for both
> > > + * reads and writes through the CPU cache.
> > > + *
> > > + * Note that on platforms with shared-LLC support (HAS_LLC) reads through
> > > + * the CPU cache are always coherent, regardless of the @cache_level. On
> > > + * snooping based platforms this is not the case, unless the full
> > > + * I915_CACHE_LLC or similar setting is used.
> > > + *
> > > + * As a result of this we need to track coherency separately for reads
> > > + * and writes, in order to avoid superfluous flushing on shared-LLC
> > > + * platforms, for reads.
> > > + *
> > > + * I915_BO_CACHE_COHERENT_FOR_READ:
> > > + *
> > > + * When reading through the CPU cache, the GPU is still coherent. Note
> > > + * that no data has actually been modified here, so it might seem
> > > + * strange that we care about this.
> > > + *
> > > + * As an example, if some object is mapped on the CPU with write-back
> > > + * caching, and we read some page, then the cache likely now contains
> > > + * the data from that read. At this point the cache and main memory
> > > + * match up, so all good. But next the GPU needs to write some data to
> > > + * that same page. Now if the @cache_level is I915_CACHE_NONE and the
> > > + * platform doesn't have the shared-LLC, then the GPU will
> > > + * effectively skip invalidating the cache (or however that works
> > > + * internally) when writing the new value. This is really bad since the
> > > + * GPU has just written some new data to main memory, but the CPU cache
> > > + * is still valid and now contains stale data. As a result the next time
> > > + * we do a cached read with the CPU, we are rewarded with stale data.
> > > + * Likewise if the cache is later flushed, we might be rewarded with
> > > + * overwriting main memory with stale data.
> > > + *
> > > + * I915_BO_CACHE_COHERENT_FOR_WRITE:
> > > + *
> > > + * When writing through the CPU cache, the GPU is still coherent. Note
> > > + * that this also implies I915_BO_CACHE_COHERENT_FOR_READ.
> > > + *
> > > + * This is never set when I915_CACHE_NONE is used for @cache_level,
> > > + * where instead we have to manually flush the caches after writing
> > > + * through the CPU cache. For other cache levels this should be set and
> > > + * the object is therefore considered coherent for both reads and writes
> > > + * through the CPU cache.
> >
> > I don't remember why we have this read vs. write split and this new
> > documentation doesn't seem to really explain it either.
>
> Hmm, I attempted to explain that earlier:
>
> * Note that on platforms with shared-LLC support (HAS_LLC) reads through
> * the CPU cache are always coherent, regardless of the @cache_level. On
> * snooping based platforms this is not the case, unless the full
> * I915_CACHE_LLC or similar setting is used.
> *
> * As a result of this we need to track coherency separately for reads
> * and writes, in order to avoid superfluous flushing on shared-LLC
> * platforms, for reads.
>
> So AFAIK it's just because shared-LLC can be coherent for reads, while
> also not being coherent for writes (CACHE_NONE), so being able to track
> each separately is kind of needed to avoid unnecessary flushing for
> the read cases, i.e. a simple boolean for coherent vs non-coherent is
> not enough.
>
> I can try to reword things to make that more clear.
Maybe highlight the security aspect a bit more: When reads are always
coherent, we don't have to force the clflush. If reads are not coherent
we must ensure that the clflush has finished before userspace can get
at the backing storage, like writing ptes and similar things.

Writes otoh can only result in userspace eating cacheline corruption if
it races against the kernel (by e.g. trying to predict where we'll bind
a buffer and issuing GPU access to that location, before the buffer is
actually bound, from some other engine in parallel with an execbuf that
binds the buffer). Atm we don't do a great job with that, but that's
something that I think is getting looked into.
-Daniel

> > Is it for optimizing some display related case where we can omit the
> > invalidates but still have to do the writeback to keep the display
> > engine happy?
> >
> > --
> > Ville Syrjälä
> > Intel

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch