On Fri, 2015-10-09 at 09:28 +0200, Daniel Vetter wrote:
> Hm, if this still works the same way as on older platforms, then
> pagefaults just read all 0 and writes go nowhere from the gpu. That
> generally also explains the ever-increasing numbers of the CS
> execution pointer, since it's busy churning through 48b worth of
> address space filled with MI_NOOP. I'd have hoped our hw would do
> better than that with svm ...

I'm looking at simple cases like Jesse's 'gem_svm_fault' test. If the
access to process address space (a single dword write) does nothing, I'm
not sure why it would then churn through MI_NOOPs; why would the batch
still not complete?

> If there's really no way to make it hang when we complete the fault
> then I guess we'll have to hang it by not completing. Otherwise we'll
> have to roll our own fault detection code right from the start.

Well, theoretically there are ways we could handle this. It looks like
if I *don't* give the IOMMU a response to the page request, the context
remains hung, waiting for it. So I could give you a callback, including
the 'private' data from the page request that we know identifies the
context. So perhaps we *could* contrive to give you precise exceptions
when the hardware doesn't really do that sanely.

But I was really trying hard to avoid the necessity for that kind of
hack to work around stupid hardware :)

-- 
David Woodhouse                            Open Source Technology Centre
David.Woodhouse@xxxxxxxxx                              Intel Corporation
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx