On Thu, Feb 13, 2025 at 07:35:18PM -0800, Eric Biggers wrote:
>
> It absolutely is designed for an obsolete form of hardware offload.  Have you
> ever tried actually using it?  Here's how to hash a buffer of data with shash:
>
>	return crypto_shash_tfm_digest(tfm, data, size, out);
>
> ... and here's how to do it with the SHA-256 library, for what it's worth:
>
>	sha256(data, size, out);
>
> and here's how to do it with ahash:

Try the new virt ahash interface; we could easily put the request object
on the stack for sync algorithms:

	SYNC_AHASH_REQUEST_ON_STACK(req, alg);

	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
	ahash_request_set_virt(req, data, out, size);

	return crypto_ahash_digest(req);

> Hmm, I wonder which API users would rather use?

You're conflating the SG API problem with the interface itself.  It's a
separate issue, and quite easily solved.

> What?  GHASH is a polynomial hash function, so it is easily parallelizable.
> If you precompute N powers of the hash key then you can process N blocks in
> parallel.  Check how the AES-GCM assembly code works; that's exactly what it
> does.  This is fundamentally different from message digests like SHA-* where
> the blocks have to be processed serially.

Fair enough.  But there are plenty of other users who want batching, such as
zcomp with IAA, and I don't want everybody to invent their own API for the
same thing.

Cheers,
--
Email: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
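For reference, here is a minimal sketch of how the on-stack virt ahash snippet
above might look as a complete function.  It is an assumption-laden
illustration, not an existing kernel API user: the function name
sha256_virt_digest() is made up, the "sha256" algorithm name and sync-only
allocation (type 0, mask CRYPTO_ALG_ASYNC) are assumptions, and
SYNC_AHASH_REQUEST_ON_STACK() is used exactly as proposed in the mail and may
not exist in any given tree.

	/*
	 * Sketch only: hash a kernel virtual buffer through the virt ahash
	 * interface.  SYNC_AHASH_REQUEST_ON_STACK() is as proposed above and
	 * may not exist yet; "sha256" and the error handling are illustrative.
	 */
	#include <crypto/hash.h>
	#include <linux/err.h>

	static int sha256_virt_digest(const u8 *data, unsigned int size,
				      u8 *out /* SHA256_DIGEST_SIZE bytes */)
	{
		struct crypto_ahash *tfm;
		int err;

		/* type 0, mask CRYPTO_ALG_ASYNC: request a sync algorithm */
		tfm = crypto_alloc_ahash("sha256", 0, CRYPTO_ALG_ASYNC);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		{
			SYNC_AHASH_REQUEST_ON_STACK(req, tfm);

			ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
						   NULL, NULL);
			ahash_request_set_virt(req, data, out, size);
			err = crypto_ahash_digest(req);
		}

		crypto_free_ahash(tfm);
		return err;
	}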
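And a rough sketch of the GHASH point quoted above, i.e. why precomputing N
powers of the hash key lets N blocks be processed in parallel: since
Y_n = X_1*H^n xor X_2*H^(n-1) xor ... xor X_n*H, each stride of N blocks can
be folded with N independent multiplies and a final xor.  This is only a
structural illustration; gf128_mul() is a placeholder prototype (not the
kernel's gf128mul API), byte ordering and partial/tail blocks are ignored, and
real implementations do this in assembly with SIMD.

	#include <stdint.h>
	#include <string.h>

	#define GHASH_BLOCK	16
	#define STRIDE		4	/* N independent multiplies per step */

	typedef struct { uint64_t hi, lo; } be128_t;

	/* Placeholder carry-less multiply in GF(2^128); assumed to exist. */
	be128_t gf128_mul(be128_t a, be128_t b);

	static be128_t xor128(be128_t a, be128_t b)
	{
		a.hi ^= b.hi;
		a.lo ^= b.lo;
		return a;
	}

	static be128_t load_block(const uint8_t *p)
	{
		be128_t x;

		memcpy(&x, p, GHASH_BLOCK);	/* byte order glossed over */
		return x;
	}

	/*
	 * Fold nblocks (a multiple of STRIDE) into acc.  hpow[j] holds
	 * H^(j+1), precomputed once per key.  The STRIDE multiplies in the
	 * inner loop are independent of each other, which is what the
	 * AES-GCM assembly exploits.
	 */
	be128_t ghash_blocks(be128_t acc, const be128_t hpow[STRIDE],
			     const uint8_t *src, size_t nblocks)
	{
		size_t i, j;

		for (i = 0; i < nblocks; i += STRIDE) {
			be128_t sum = { 0, 0 };

			for (j = 0; j < STRIDE; j++) {
				be128_t x = load_block(src +
						       (i + j) * GHASH_BLOCK);

				/* first lane folds in the running accumulator */
				if (j == 0)
					x = xor128(x, acc);

				/* lane j is multiplied by H^(STRIDE - j) */
				sum = xor128(sum,
					     gf128_mul(x, hpow[STRIDE - 1 - j]));
			}
			acc = sum;
		}
		return acc;
	}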