On Tue, Nov 23, 2021 at 01:37:58PM +0100, Hannes Reinecke wrote:
> Fabrics commands might be sent to all queues, not just the admin one.
>
> Signed-off-by: Hannes Reinecke <hare@xxxxxxx>
> Reviewed-by: Sagi Grimberg <sagi@xxxxxxxxxxx>
> Reviewed-by: Himanshu Madhani <himanshu.madhani@xxxxxxxxxx>
> ---
>  drivers/nvme/target/core.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
> index 5119c687de68..a3abbf50f7e0 100644
> --- a/drivers/nvme/target/core.c
> +++ b/drivers/nvme/target/core.c
> @@ -943,6 +943,8 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
>  	if (unlikely(!req->sq->ctrl))
>  		/* will return an error for any non-connect command: */
>  		status = nvmet_parse_connect_cmd(req);
> +	else if (nvme_is_fabrics(req->cmd))
> +		status = nvmet_parse_fabrics_cmd(req);

This will allow all fabrics commands on the I/O queues, which is a bad idea.

Please split nvmet_parse_fabrics_cmd into nvmet_parse_admin_fabrics_cmd
and nvmet_parse_io_fabrics_cmd.