On Fri, Aug 30, 2024 at 06:17:17PM +0800, Chenghai Huang wrote:
> Apply for a lock before the qp send operation to ensure no
> resource competition in multi-concurrency situations.
>
> This modification has almost no impact on performance.
>
> Signed-off-by: Chenghai Huang <huangchenghai2@xxxxxxxxxx>
> ---
>  drivers/crypto/hisilicon/hpre/hpre_crypto.c | 2 ++
>  drivers/crypto/hisilicon/zip/zip_crypto.c   | 3 +++
>  2 files changed, 5 insertions(+)
>
> diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
> index 764532a6ca82..c167dbd6c7d6 100644
> --- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c
> +++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c
> @@ -575,7 +575,9 @@ static int hpre_send(struct hpre_ctx *ctx, struct hpre_sqe *msg)
>
>  	do {
>  		atomic64_inc(&dfx[HPRE_SEND_CNT].value);
> +		spin_lock_bh(&ctx->req_lock);
>  		ret = hisi_qp_send(ctx->qp, msg);
> +		spin_unlock_bh(&ctx->req_lock);
>  		if (ret != -EBUSY)
>  			break;
>  		atomic64_inc(&dfx[HPRE_SEND_BUSY_CNT].value);
> diff --git a/drivers/crypto/hisilicon/zip/zip_crypto.c b/drivers/crypto/hisilicon/zip/zip_crypto.c
> index 94e2d66b04b6..e3a31e3416be 100644
> --- a/drivers/crypto/hisilicon/zip/zip_crypto.c
> +++ b/drivers/crypto/hisilicon/zip/zip_crypto.c
> @@ -213,6 +213,7 @@ static int hisi_zip_do_work(struct hisi_zip_qp_ctx *qp_ctx,
>  {
>  	struct hisi_acc_sgl_pool *pool = qp_ctx->sgl_pool;
>  	struct hisi_zip_dfx *dfx = &qp_ctx->zip_dev->dfx;
> +	struct hisi_zip_req_q *req_q = &qp_ctx->req_q;
>  	struct acomp_req *a_req = req->req;
>  	struct hisi_qp *qp = qp_ctx->qp;
>  	struct device *dev = &qp->qm->pdev->dev;
> @@ -244,7 +245,9 @@ static int hisi_zip_do_work(struct hisi_zip_qp_ctx *qp_ctx,
>
>  	/* send command to start a task */
>  	atomic64_inc(&dfx->send_cnt);
> +	write_lock(&req_q->req_lock);
>  	ret = hisi_qp_send(qp, &zip_sqe);
> +	write_unlock(&req_q->req_lock);

Hi Chenghai,

Thanks for your patch.
Since Herbert has already applied a patch [1] changing rw_lock to
spinlock in the hisilicon zip controller driver, applying your patch
might cause a conflict. Could you rebase on Herbert's crypto tree and
update write_lock() and write_unlock() to spin_lock() and
spin_unlock(), respectively?

[1]: https://lore.kernel.org/lkml/20240823183856.561166-1-visitorckw@xxxxxxxxx/

Regards,
Kuan-Wei

>  	if (unlikely(ret < 0)) {
>  		atomic64_inc(&dfx->send_busy_cnt);
>  		ret = -EAGAIN;
> --
> 2.33.0
>
>
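
For illustration only, the rebased zip hunk could end up looking roughly
like the sketch below. This is untested and assumes req_q->req_lock has
already been converted to a spinlock_t by [1]:

	/* send command to start a task */
	atomic64_inc(&dfx->send_cnt);
	/* serialize hisi_qp_send() against concurrent senders on this qp */
	spin_lock(&req_q->req_lock);
	ret = hisi_qp_send(qp, &zip_sqe);
	spin_unlock(&req_q->req_lock);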