On Fri, Jan 17, 2025 at 02:21:19PM +0800, Herbert Xu wrote:
> On Thu, Jan 16, 2025 at 05:54:51PM +0000, Eric Biggers wrote:
> >
> > But in practice it's the opposite.  Making it an ahash forces users who would
> > otherwise just be using shash to use ahash instead.
>
> I think that's a good thing.  shash was a mistake and I intend to
> get rid of it.

I intend to keep using it, as it's a much better API.  Even if virtual
address support were to be added to ahash, shash will still be a simpler
interface that does not require dynamically allocated per-request memory.

> If it was possible to shoehorn phmac/paes into a synchronous
> algorithm I'd love to do it.  But the fact is that this requires
> asynchronous communication in the *unlikely* case which means
> that the best way to model is with an optional async interface.
>
> 	if (unlikely(crypto_ahash_op(...) == -EINPROGRESS))
> 		do async
> 	else
> 		op completed synchronously

Of course it's possible for it to be a synchronous algorithm.  The
proposed user waits synchronously for the request to complete anyway.

> > That won't solve all the problems with ahash, for example the per-request
> > allocation.  We'll still end up with something that is worse for 99% of
> > users, while also doing a very poor job supporting the 1% (even assuming
> > it's 1% and not 0% which it very well might be) who actually think they
> > want off-CPU hardware crypto acceleration.  Not to mention the nonsense
> > like having "synchronous asynchronous hashes".
>
> I disagree.  I want the users to start feeding ahash with more
> than one unit of input.  So rather than splitting up a folio
> into page-size or sector-size units, you feed the whole thing
> to ahash and the data should be split up automatically and hashed
> in parallel.
That sort of fits the {dm,fs}-verity use case specifically, but it seems
like a weird hack to avoid the simpler approach that I proposed, and it
would introduce unnecessary restrictions, such as requiring the input
data to be a folio.

> > The primary API needs to be designed around and optimized for software
> > crypto, which is what nearly everyone wants.
>
> I agree with that completely.

Yet you continue to push for APIs that are hard to use and slow because
they are designed for an obsolete form of hardware offload and have
software crypto support bolted on as an afterthought.

> However, what triggered this thread is in fact something that is not
> purely software crypto because it involves system-wide communication
> which could take seconds to complete.

Sure, but it's possible for it to use the same API.

> The original patch tried to shoehorn this into a synchronous
> implementation that would just fail randomly after waiting.

The async version has that same timeout, so the effect is exactly the
same; the hash operation can fail after a few seconds of waiting.

- Eric