Re: [bug report] kmemleak observed with blktests nvme/tcp

On Mon, Apr 22, 2024 at 6:46 PM Sagi Grimberg <sagi@xxxxxxxxxxx> wrote:
>
>
>
> On 22/04/2024 7:59, Yi Zhang wrote:
> > On Sun, Apr 21, 2024 at 6:31 PM Sagi Grimberg <sagi@xxxxxxxxxxx> wrote:
> >>
> >>
> >> On 16/04/2024 6:19, Chaitanya Kulkarni wrote:
> >>> +linux-nvme list for awareness ...
> >>>
> >>> -ck
> >>>
> >>>
> >>> On 4/6/24 17:38, Yi Zhang wrote:
> >>>> Hello
> >>>>
> >>>> I found the kmemleak issue below after running the blktests nvme/tcp tests
> >>>> on the latest linux-block/for-next. Please help check it, and let me know
> >>>> if you need any info/testing for it. Thanks.
> >>> It will help others if you specify which test case you are using ...
> >>>
> >>>> # dmesg | grep kmemleak
> >>>> [ 2580.572467] kmemleak: 92 new suspected memory leaks (see
> >>>> /sys/kernel/debug/kmemleak)
> >>>>
> >>>> # cat kmemleak.log
> >>>> unreferenced object 0xffff8885a1abe740 (size 32):
> >>>>      comm "kworker/40:1H", pid 799, jiffies 4296062986
> >>>>      hex dump (first 32 bytes):
> >>>>        c2 4a 4a 04 00 ea ff ff 00 00 00 00 00 10 00 00  .JJ.............
> >>>>        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> >>>>      backtrace (crc 6328eade):
> >>>>        [<ffffffffa7f2657c>] __kmalloc+0x37c/0x480
> >>>>        [<ffffffffa86a9b1f>] sgl_alloc_order+0x7f/0x360
> >>>>        [<ffffffffc261f6c5>] lo_read_simple+0x1d5/0x5b0 [loop]
> >>>>        [<ffffffffc26287ef>] 0xffffffffc26287ef
> >>>>        [<ffffffffc262a2c4>] 0xffffffffc262a2c4
> >>>>        [<ffffffffc262a881>] 0xffffffffc262a881
> >>>>        [<ffffffffa76adf3c>] process_one_work+0x89c/0x19f0
> >>>>        [<ffffffffa76b0813>] worker_thread+0x583/0xd20
> >>>>        [<ffffffffa76ce2a3>] kthread+0x2f3/0x3e0
> >>>>        [<ffffffffa74a804d>] ret_from_fork+0x2d/0x70
> >>>>        [<ffffffffa7406e4a>] ret_from_fork_asm+0x1a/0x30
> >>>> unreferenced object 0xffff88a8b03647c0 (size 16):
> >>>>      comm "kworker/40:1H", pid 799, jiffies 4296062986
> >>>>      hex dump (first 16 bytes):
> >>>>        c0 4a 4a 04 00 ea ff ff 00 10 00 00 00 00 00 00  .JJ.............
> >>>>      backtrace (crc 860ce62b):
> >>>>        [<ffffffffa7f2657c>] __kmalloc+0x37c/0x480
> >>>>        [<ffffffffc261f805>] lo_read_simple+0x315/0x5b0 [loop]
> >>>>        [<ffffffffc26287ef>] 0xffffffffc26287ef
> >>>>        [<ffffffffc262a2c4>] 0xffffffffc262a2c4
> >>>>        [<ffffffffc262a881>] 0xffffffffc262a881
> >>>>        [<ffffffffa76adf3c>] process_one_work+0x89c/0x19f0
> >>>>        [<ffffffffa76b0813>] worker_thread+0x583/0xd20
> >>>>        [<ffffffffa76ce2a3>] kthread+0x2f3/0x3e0
> >>>>        [<ffffffffa74a804d>] ret_from_fork+0x2d/0x70
> >>>>        [<ffffffffa7406e4a>] ret_from_fork_asm+0x1a/0x30
> >> kmemleak suggests that the leakage is coming from lo_read_simple(). Is
> >> this a regression that can be bisected?
> >>
> > It's not a regression; I tried 6.7 and it can also be reproduced there.
>
> It's strange that the stack makes it look like lo_read_simple() is allocating
> the sgl; it is probably nvmet-tcp, though.
>
> Can you try with the patch below:

Hi Sagi

After recompiling the kernel on another server, I can now see more symbols in
the backtrace [1]. With your patch applied, the kmemleak issue below can no
longer be reproduced.

[1]
unreferenced object 0xffff8881b59d0400 (size 32):
  comm "kworker/38:1H", pid 751, jiffies 4297135127
  hex dump (first 32 bytes):
    02 7a d6 06 00 ea ff ff 00 00 00 00 00 10 00 00  .z..............
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace (crc 5be147ba):
    [<00000000dbe27af4>] __kmalloc+0x41d/0x630
    [<0000000077d4a469>] sgl_alloc_order+0xa9/0x380
    [<00000000f683c92e>] nvmet_tcp_map_data+0x1b9/0x560 [nvmet_tcp]
    [<00000000527a09e7>] nvmet_tcp_try_recv_pdu+0x9d0/0x26e0 [nvmet_tcp]
    [<00000000521ae8ec>] nvmet_tcp_io_work+0x14e/0x30f0 [nvmet_tcp]
    [<00000000110c56b5>] process_one_work+0x85d/0x13f0
    [<00000000a740ddcf>] worker_thread+0x6da/0x1130
    [<00000000b5cf1cf3>] kthread+0x2ed/0x3c0
    [<0000000034819000>] ret_from_fork+0x31/0x70
    [<0000000003276465>] ret_from_fork_asm+0x1a/0x30
unreferenced object 0xffff8881b59d18e0 (size 16):
  comm "kworker/38:1H", pid 751, jiffies 4297135127
  hex dump (first 16 bytes):
    00 7a d6 06 00 ea ff ff 00 10 00 00 00 00 00 00  .z..............
  backtrace (crc 9740ab1c):
    [<00000000dbe27af4>] __kmalloc+0x41d/0x630
    [<000000000b49411d>] nvmet_tcp_map_data+0x2f6/0x560 [nvmet_tcp]
    [<00000000527a09e7>] nvmet_tcp_try_recv_pdu+0x9d0/0x26e0 [nvmet_tcp]
    [<00000000521ae8ec>] nvmet_tcp_io_work+0x14e/0x30f0 [nvmet_tcp]
    [<00000000110c56b5>] process_one_work+0x85d/0x13f0
    [<00000000a740ddcf>] worker_thread+0x6da/0x1130
    [<00000000b5cf1cf3>] kthread+0x2ed/0x3c0
    [<0000000034819000>] ret_from_fork+0x31/0x70
    [<0000000003276465>] ret_from_fork_asm+0x1a/0x30

(gdb) l *(nvmet_tcp_map_data+0x2f6)
0x9c6 is in nvmet_tcp_try_recv_pdu (drivers/nvme/target/tcp.c:432).
427		if (!cmd->req.sg)
428			return NVME_SC_INTERNAL;
429		cmd->cur_sg = cmd->req.sg;
430
431		if (nvmet_tcp_has_data_in(cmd)) {
432			cmd->iov = kmalloc_array(cmd->req.sg_cnt,
433					sizeof(*cmd->iov), GFP_KERNEL);
434			if (!cmd->iov)
435				goto err;
436		}
(gdb) l *(nvmet_tcp_map_data+0x1b9)
0x889 is in nvmet_tcp_try_recv_pdu (drivers/nvme/target/tcp.c:848).
843	}
844
845	static void nvmet_prepare_receive_pdu(struct nvmet_tcp_queue *queue)
846	{
847		queue->offset = 0;
848		queue->left = sizeof(struct nvme_tcp_hdr);
849		queue->cmd = NULL;
850		queue->rcv_state = NVMET_TCP_RECV_PDU;
851	}
852
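
For reference, here is a minimal user-space model (illustrative only; the
struct and function names below are invented and are not the kernel's) of the
leak pattern the backtraces point at: nvmet_tcp_map_data() allocates
cmd->req.sg and cmd->iov for a command, but the old teardown loop in
nvmet_tcp_free_cmd_data_in_buffers() only freed the buffers of commands for
which nvmet_tcp_need_data_in() was true, so commands sitting in any other
state at queue release kept their buffers. Freeing every command
unconditionally, as the patch below does, looks safe because
nvmet_tcp_free_cmd_buffers() is meant to be callable multiple times (hence
the comment the patch adds).

/*
 * Stand-alone sketch of the teardown pattern, assuming the behaviour
 * described above; all names here are made up for illustration.
 */
#include <stdio.h>
#include <stdlib.h>

struct model_cmd {
	void *iov;          /* models cmd->iov                  */
	void *sg;           /* models cmd->req.sg               */
	int need_data_in;   /* models nvmet_tcp_need_data_in()  */
};

/* Safe to call multiple times: pointers are cleared after freeing. */
static void model_free_cmd_buffers(struct model_cmd *c)
{
	free(c->iov);
	free(c->sg);
	c->iov = NULL;
	c->sg = NULL;
}

int main(void)
{
	struct model_cmd cmds[2] = { 0 };
	int i;

	/* Both commands get buffers, as nvmet_tcp_map_data() would hand out. */
	for (i = 0; i < 2; i++) {
		cmds[i].iov = malloc(16);
		cmds[i].sg = malloc(32);
	}
	cmds[0].need_data_in = 1;	/* cmds[1] is in some other state */

	/*
	 * Old-style teardown (leaks cmds[1]'s buffers, matching the report):
	 *
	 *	for (i = 0; i < 2; i++)
	 *		if (cmds[i].need_data_in)
	 *			model_free_cmd_buffers(&cmds[i]);
	 *
	 * Patched-style teardown: free every command unconditionally, which
	 * is fine because model_free_cmd_buffers() is idempotent.
	 */
	for (i = 0; i < 2; i++)
		model_free_cmd_buffers(&cmds[i]);

	printf("all command buffers released\n");
	return 0;
}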

> --
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> index a5422e2c979a..bfd1cf7cc1c2 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -348,6 +348,7 @@ static int nvmet_tcp_check_ddgst(struct nvmet_tcp_queue *queue, void *pdu)
>          return 0;
>   }
>
> +/* safe to call multiple times */
>   static void nvmet_tcp_free_cmd_buffers(struct nvmet_tcp_cmd *cmd)
>   {
>          kfree(cmd->iov);
> @@ -1581,13 +1582,9 @@ static void nvmet_tcp_free_cmd_data_in_buffers(struct nvmet_tcp_queue *queue)
>          struct nvmet_tcp_cmd *cmd = queue->cmds;
>          int i;
>
> -       for (i = 0; i < queue->nr_cmds; i++, cmd++) {
> -               if (nvmet_tcp_need_data_in(cmd))
> -                       nvmet_tcp_free_cmd_buffers(cmd);
> -       }
> -
> -       if (!queue->nr_cmds && nvmet_tcp_need_data_in(&queue->connect))
> -               nvmet_tcp_free_cmd_buffers(&queue->connect);
> +       for (i = 0; i < queue->nr_cmds; i++, cmd++)
> +               nvmet_tcp_free_cmd_buffers(cmd);
> +       nvmet_tcp_free_cmd_buffers(&queue->connect);
>   }
>
>   static void nvmet_tcp_release_queue_work(struct work_struct *w)
> --
>


-- 
Best Regards,
  Yi Zhang





