On 3/19/2018 11:25 AM, Herbert Xu wrote:
> On Mon, Mar 19, 2018 at 06:39:50AM +0000, Horia Geantă wrote:
>>
>> The fact that there can be multiple requests in parallel (for a given
>> tfm) is a different topic.
>> Each request object has its state in its own state machine, independent
>> from the other request objects.
>> I assume this is clear enough.
>
> My point is that all of the state associated with a request needs
> to be stored in the request object. If you start storing things
> in the driver/hardware, then things will get ugly one way or another.

Agreed, the request state should be stored in the request object; I am
not debating that.

Still, there are limitations even when keeping all state in the request
object. For example, an implementation cannot keep a buffer DMA mapped
for the entire lifetime of a request object, because that lifetime is
unknown: the user can "abandon" the object after a few .update() calls,
or even right after .init(). By "abandon" I mean never calling any of
.final(), .finup() or .export() on the object.

The only way to avoid leaking the mapping in this case is to repeatedly
DMA map & unmap the buffer. IOW, if one wants to load/save HW state in
a buffer after an .update() and to instruct the crypto engine to do this
operation, the following steps are involved (both sides are sketched in
the P.S. below):

- gpp (general purpose processor): DMA map the buffer, get its IOVA
- gpp: program the crypto engine with the IOVA, wait for the crypto
  engine's signal
- crypto engine: load HW state from the buffer, perform the partial
  hash, save HW state back in the buffer, signal the gpp
- gpp: DMA unmap the buffer

I'd say this is pretty inefficient, yet I don't see an alternative.

Good or bad, the documentation should reflect this limitation - hence
this patch.

Thanks,
Horia
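
P.S. To make the "abandoned request" case concrete, here is a minimal
caller-side sketch (standard crypto API calls; error and async
completion handling omitted for brevity):

#include <crypto/hash.h>
#include <linux/scatterlist.h>

static void abandoned_request_example(const u8 *data, unsigned int len)
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;

	tfm = crypto_alloc_ahash("sha256", 0, 0);
	req = ahash_request_alloc(tfm, GFP_KERNEL);

	sg_init_one(&sg, data, len);
	ahash_request_set_callback(req, 0, NULL, NULL);
	ahash_request_set_crypt(req, &sg, NULL, len);

	crypto_ahash_init(req);
	crypto_ahash_update(req);	/* driver saves partial HW state */

	/*
	 * No .final(), .finup() or .export() ever happens; the object
	 * is simply freed.  Any mapping the driver kept alive across
	 * calls would leak here.
	 */
	ahash_request_free(req);
	crypto_free_ahash(tfm);
}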
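
And a rough sketch of the matching driver-side .update() path, showing
the per-call map/unmap cycle. Everything prefixed with my_ (the
my_drv_priv / my_req_ctx structures, the state size, and the
my_engine_run_partial_hash() helper) is made up for illustration; only
the DMA API and the crypto API accessors are real:

#include <linux/dma-mapping.h>
#include <crypto/internal/hash.h>

#define MY_HW_STATE_SIZE 64	/* made-up size of the engine's hash state */

struct my_drv_priv {		/* hypothetical per-tfm driver context */
	struct device *dev;
};

struct my_req_ctx {		/* hypothetical per-request context */
	u8 hw_state[MY_HW_STATE_SIZE];
};

/*
 * Hypothetical helper: program the engine with the state IOVA and wait
 * for its signal; the engine loads the state, hashes src, then saves
 * the updated state back into the same buffer.
 */
static int my_engine_run_partial_hash(struct device *dev,
				      dma_addr_t state,
				      struct scatterlist *src,
				      unsigned int nbytes);

static int my_ahash_update(struct ahash_request *req)
{
	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
	struct my_drv_priv *priv = crypto_ahash_ctx(tfm);
	struct my_req_ctx *ctx = ahash_request_ctx(req);
	dma_addr_t state_iova;
	int ret;

	/* gpp: DMA map the state buffer for this operation only */
	state_iova = dma_map_single(priv->dev, ctx->hw_state,
				    MY_HW_STATE_SIZE, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(priv->dev, state_iova))
		return -ENOMEM;

	/* gpp: program the engine with the IOVA, wait for its signal */
	ret = my_engine_run_partial_hash(priv->dev, state_iova,
					 req->src, req->nbytes);

	/* gpp: unmap right away - the request may never come back */
	dma_unmap_single(priv->dev, state_iova, MY_HW_STATE_SIZE,
			 DMA_BIDIRECTIONAL);

	return ret;
}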