> On Wed, Jan 24, 2024 at 02:08:44PM +0530, Akhil R wrote:
> >
> > +static void tegra_sha_init_fallback(struct tegra_sha_ctx *ctx, const char *algname)
> > +{
> > +	ctx->fallback_tfm = crypto_alloc_ahash(algname, 0, CRYPTO_ALG_ASYNC |
> > +					       CRYPTO_ALG_NEED_FALLBACK);
> > +
> > +	if (IS_ERR(ctx->fallback_tfm)) {
> > +		dev_warn(ctx->se->dev, "failed to allocate fallback for %s %ld\n",
> > +			 algname, PTR_ERR(ctx->fallback_tfm));
> > +		ctx->fallback_tfm = NULL;
> > +	}
> > +}
>
> This should check that the fallback state size is smaller than
> that of tegra. As otherwise the fallback export/import will break.

Okay. Got it. Will update.

> > +static int tegra_sha_import(struct ahash_request *req, const void *in)
> > +{
> > +	struct tegra_sha_reqctx *rctx = ahash_request_ctx(req);
> > +	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
> > +	struct tegra_sha_ctx *ctx = crypto_ahash_ctx(tfm);
> > +	int i;
> > +
> > +	if (ctx->fallback)
> > +		return tegra_sha_fallback_import(req, in);
> > +
> > +	memcpy(rctx, in, sizeof(*rctx));
> > +
> > +	/* Paste all intermediate results */
> > +	for (i = 0; i < HASH_RESULT_REG_COUNT; i++)
> > +		writel(rctx->result[i],
> > +		       ctx->se->base + ctx->se->hw->regs->result + (i * 4));
>
> What happens when multiple requests of the same tfm import at
> the same time? Normally we don't actually touch the hardware
> in the import function. Instead, all the hard work happens at
> the end of the update function, which moves hardware state into
> the request object.
>
> The import/export function then simply copies the request object
> state to the in/out buffer.

Understood the issue. But I feel it would add overhead for update() to
copy these results back on every call. Let me explore the hardware
further and come back with a better approach.

Thanks for the comments.

Regards,
Akhil