On 10/02/2025 12:01, Maurizio Lombardi wrote:
> On Mon Feb 10, 2025 at 8:41 AM CET, zhang.guanghui@xxxxxxxx wrote:
>> Hello
>
> I guess you have to fix your mail client.
>> When using the nvme-tcp driver in a storage cluster, the driver may
>> trigger a NULL pointer dereference, causing the host to crash several
>> times. By analyzing the vmcore, we know the direct cause is that
>> request->mq_hctx was used after free.
>>
>> CPU1                             CPU2
>> nvme_tcp_poll                    nvme_tcp_try_send   --failed to send request 13
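For context, both of these paths end requests through the driver's
completion helper. The snippet below is my recollection of its upstream
shape, quoted for illustration only; the comment describes the
double-completion hazard I believe the report is showing:

static inline void nvme_tcp_end_request(struct request *rq, u16 status)
{
	union nvme_result res = {};

	/*
	 * Note there is no guard against a second completion here. If a
	 * failed nvme_tcp_try_send() completes the request on one CPU
	 * while nvme_tcp_poll()/nvme_tcp_try_recv() later matches a
	 * completion to the same tag on another, the second call writes
	 * into and completes a request that blk-mq may already have
	 * freed or reused, consistent with the rq->mq_hctx
	 * use-after-free reported above.
	 */
	if (!nvme_try_complete_req(rq, cpu_to_le16(status << 1), res))
		nvme_complete_rq(rq);
}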
> This simply looks like a race condition between nvme_tcp_poll() and
> nvme_tcp_try_send().
>
> Personally, I would try to fix it inside the nvme-tcp driver without
> touching the core functions.
>
> Maybe nvme_tcp_poll() should just ensure that io_work completes before
> calling nvme_tcp_try_recv(); the POLLING flag should then prevent io_work
> from getting rescheduled by the nvme_tcp_data_ready() callback.
>
> Maurizio
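To make sure we are talking about the same thing, here is a minimal,
untested sketch of that idea, assuming the current upstream shape of
nvme_tcp_poll(); the flush_work() call is the only addition, and it is
my reading of "ensure that io_work completes", not a tested patch:

static int nvme_tcp_poll(struct blk_mq_hw_ctx *hctx,
		struct io_comp_batch *iob)
{
	struct nvme_tcp_queue *queue = hctx->driver_data;
	struct sock *sk = queue->sock->sk;
	int ret;

	if (!test_bit(NVME_TCP_Q_LIVE, &queue->flags))
		return 0;

	set_bit(NVME_TCP_Q_POLLING, &queue->flags);

	/*
	 * Wait for a concurrent io_work run (and any in-flight
	 * nvme_tcp_try_send() it drives) to finish. With
	 * NVME_TCP_Q_POLLING set, nvme_tcp_data_ready() will not
	 * requeue io_work behind our back.
	 */
	flush_work(&queue->io_work);

	if (sk_can_busy_loop(sk) &&
	    skb_queue_empty_lockless(&sk->sk_receive_queue))
		sk_busy_loop(sk, true);

	ret = nvme_tcp_try_recv(queue);
	clear_bit(NVME_TCP_Q_POLLING, &queue->flags);
	return ret < 0 ? ret : queue->nr_cqe;
}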
It seems to me that the HOST_PATH_ERROR handling can be improved in
nvme-tcp.

In nvme-rdma we use nvme_host_path_error(rq) and nvme_cleanup_cmd(rq) in
case we fail to submit a command.

Can you try replacing the nvme_tcp_end_request(blk_mq_rq_from_pdu(req),
NVME_SC_HOST_PATH_ERROR) call with the similar logic we use in
nvme-rdma for host path error handling?
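Something like this untested sketch, assuming the call site in question
is the one in nvme_tcp_fail_request() and ignoring the async (keep-alive)
request case that helper also handles; the ordering matters because
nvme_host_path_error() marks the request complete and calls
nvme_complete_rq():

static void nvme_tcp_fail_request(struct nvme_tcp_request *req)
{
	struct request *rq = blk_mq_rq_from_pdu(req);

	/*
	 * Release the command resources first, as nvme-rdma does on a
	 * failed submission, then complete the request with a host
	 * path error so the core can fail it over or retry it instead
	 * of returning an I/O error.
	 */
	nvme_cleanup_cmd(rq);
	nvme_host_path_error(rq);
}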