Hi, Dave,
On 10/14/21 03:50, Dave Airlie wrote:
> On Fri, 8 Oct 2021 at 23:36, Thomas Hellström
> <thomas.hellstrom@xxxxxxxxxxxxxxx> wrote:
>> This patch series introduces failsafe migration blits.
>> The reason for this seemingly strange concept is that if the initial
>> clearing or readback of LMEM fails for some reason, and we then set up
>> either GPU- or CPU ptes to the allocated LMEM, we can expose old
>> contents from other clients.
> Can we enumerate "for some reason" here?
> This feels like "security" with no defined threat model. Maybe if the
> cover letter contains more details on the threat model it would make
> more sense.
TBH, I'd be quite happy if we could find a way to skip this series (or
even a reworked version) completely.
Assuming that the migration request setup code is bug-free enough to
never itself cause an engine reset, there are at least two ways I can
see the migration failing:
1) The migration fence we will be depending on when fully async
(ttm_bo->moving) may signal with an error after the following chain:
malicious_batchbuffer_causing_reset -> async eviction -> allocation ->
async clearing.

2) malicious_batchbuffers_causing_gt_wedge submitted to the copy engine
-> migration blit submitted to the copy engine. If the GT is wedged,
the migration blit will never be executed; fence->error will end up as
-EIO, but TTM will happily fault the pages into user-space (see the
sketch right below for what a check against that could look like).
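
Purely to illustrate 2), something along these lines could be done in
the CPU fault path. This is a hedged sketch only: the helper name is
made up, it is not the actual TTM fault code, and it assumes the
bo->moving fence convention TTM uses today:

#include <linux/dma-fence.h>
#include <linux/mm.h>
#include <drm/ttm/ttm_bo_api.h>

/*
 * Hypothetical helper, for illustration only: refuse to expose pages
 * to user-space if the async clear / migration blit signalled with an
 * error.
 */
static vm_fault_t sketch_check_moving_fence(struct ttm_buffer_object *bo)
{
	struct dma_fence *moving = bo->moving;
	long ret;

	if (!moving)
		return 0;

	/* Wait for the async clear / migration blit to signal or fail. */
	ret = dma_fence_wait(moving, true);
	if (ret)
		/* -ERESTARTSYS: a signal is pending, retry the fault. */
		return VM_FAULT_NOPAGE;

	/*
	 * dma_fence_get_status() returns a negative error if the fence
	 * signalled with an error (e.g. -EIO after a wedged GT). In that
	 * case, don't set up CPU PTEs pointing at possibly uncleared LMEM.
	 */
	if (dma_fence_get_status(moving) < 0)
		return VM_FAULT_SIGBUS;

	return 0;
}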
Now we had other versions around that checked the ttm_bo->moving error
at VMA binding and CPU fault time, but this was the direction chosen
after discussions with our arch team. Either way we'd probably want to
block the error propagation after the async eviction.
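
For completeness, the VMA-binding variant would conceptually look
something like the sketch below. Again, the function name is made up
and this is not the code from any of those earlier versions; it only
shows where the error would be caught instead of being swallowed by an
async bind:

#include <linux/dma-fence.h>
#include <drm/ttm/ttm_bo_api.h>

/*
 * Hypothetical sketch of the "check at vma binding" variant: before
 * pointing GPU PTEs at the backing store, bail out if the async
 * migration / clear fence ended in error.
 */
static int sketch_check_moving_before_bind(struct ttm_buffer_object *bo)
{
	struct dma_fence *moving = bo->moving;
	int status;

	if (!moving)
		return 0;

	status = dma_fence_get_status(moving);
	if (status < 0)
		return status; /* typically -EIO after a GT wedge */

	/*
	 * Not signalled yet: the bind would normally just wait
	 * asynchronously on the fence, which is exactly where the error
	 * can get lost unless we block its propagation.
	 */
	return 0;
}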
I can of course add 1) and 2) above to the cover-letter, but if you have
any additional input on the best way to handle this, that'd be appreciated.
Thanks,
Thomas
> Dave.