Quoting Lionel Landwerlin (2019-07-05 14:06:25)
> On 05/07/2019 15:54, Chris Wilson wrote:
> > Quoting Lionel Landwerlin (2019-07-05 13:42:51)
> >> Wow nice. I didn't have the courage to actually write it, knowing how
> >> easy it could be to screw an offset and write at random in GGTT...
> >>
> >> I only have one concern below.
> >>
> >> Thanks a lot,
> >>
> >> -Lionel
> >>
> >> On 05/07/2019 15:30, Chris Wilson wrote:
> >>>  	CTX_REG(reg_state,
> >>> @@ -1692,6 +1693,150 @@ gen8_update_reg_state_unlocked(struct intel_context *ce,
> >>>  		intel_sseu_make_rpcs(i915, &ce->sseu));
> >>>  }
> >>>
> >>> +struct flex {
> >>> +	i915_reg_t reg;
> >>> +	u32 offset;
> >>> +	u32 value;
> >>> +};
> >>> +
> >>> +static int
> >>> +gen8_store_flex(struct i915_request *rq,
> >>> +		struct intel_context *ce,
> >>> +		const struct flex *flex, unsigned int count)
> >>> +{
> >>> +	u32 offset;
> >>> +	u32 *cs;
> >>> +
> >>> +	cs = intel_ring_begin(rq, 4 * count);
> >>> +	if (IS_ERR(cs))
> >>> +		return PTR_ERR(cs);
> >>
> >> Is the ring of the kernel context large enough to hold the MI_SDIs for
> >> all the contexts?
> > At the moment we are using 9 registers; add in say 128 bytes tops for
> > the request overhead, and that's <300 bytes. The kernel context uses 4k
> > rings, so enough for a few updates before we have to flush. We may have
> > to wait for external rings to idle and be interrupted -- but that's the
> > same as before, including the chance that we may be interrupted in the
> > middle of conversion.
> >
> > The worst case is that we overrun the ring, and we should get a juicy
> > warning (GEM_BUG_ON or -ENOSPC) in that case. We can increase the
> > kernel_context ring if that's an issue, or just fall back to
> > suballocating a batchbuffer for the updates.
>
> Ah, thanks. I didn't notice it was one request per context to reconfigure.
>
> Still, I wouldn't want to have this fail somewhat randomly because
> something else stayed on the HW just a bit too long.

Same problem we have now, since the wait-for-idle may randomly be
interrupted (as may the mapping of the context images).

> Maybe checking the available space in the ring in
> gen8_configure_all_contexts() and waiting on last_request if there isn't
> enough?

The wait is automatic by virtue of intel_ring_begin. The error I was
alluding to before is that if we try to create a packet (intel_ring_begin)
too large, it will fail an assert. The number of registers we need to
write should have an upper bound, so we should be able to spot a problem
before it happens.
-Chris
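
For illustration, here is a minimal sketch of how such a helper can finish
what the quoted hunk starts: each flex entry becomes one MI_STORE_DWORD_IMM
into the GGTT-mapped context image, i.e. exactly the 4 dwords per register
that intel_ring_begin(rq, 4 * count) reserves. i915_ggtt_offset(),
LRC_STATE_PN, MI_STORE_DWORD_IMM_GEN4, MI_USE_GGTT and intel_ring_advance()
are existing i915 definitions, but the emission details below are an
assumption made for this sketch, not the posted patch.

static int
gen8_store_flex(struct i915_request *rq,
		struct intel_context *ce,
		const struct flex *flex, unsigned int count)
{
	u32 offset;
	u32 *cs;

	/* Reserve 4 dwords of ring space per register write. */
	cs = intel_ring_begin(rq, 4 * count);
	if (IS_ERR(cs))
		return PTR_ERR(cs);

	/*
	 * Sketch (assumed layout): GGTT address of the register state
	 * page inside the context image of this context.
	 */
	offset = i915_ggtt_offset(ce->state) + LRC_STATE_PN * PAGE_SIZE;
	do {
		*cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
		*cs++ = offset + flex->offset * sizeof(u32);
		*cs++ = 0;		/* upper 32 bits of the GGTT address */
		*cs++ = flex->value;
	} while (flex++, --count);

	intel_ring_advance(rq, cs);

	return 0;
}

With the nine registers mentioned in the thread, that is 9 * 4 * 4 = 144
bytes of ring space per context, comfortably within the <300 byte estimate
once request overhead is added, and small against a 4k kernel-context ring.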