Re: [EXT] Re: qedr memory leak report

On Sep 2, 2019, at 3:53 AM, Michal Kalderon <mkalderon@xxxxxxxxxxx> wrote:

>> From: Chuck Lever <chuck.lever@xxxxxxxxxx>
>> Sent: Friday, August 30, 2019 9:28 PM
>> 
>>> On Aug 30, 2019, at 2:03 PM, Chuck Lever <chuck.lever@xxxxxxxxxx> wrote:
>>> 
>>> Hi Michal-
>>> 
>>> In the middle of some other testing, I got this kmemleak report while
>>> testing with FastLinq cards in iWARP mode:
>>> 
>>> unreferenced object 0xffff888458923340 (size 32):
>>>   comm "mount.nfs", pid 2294, jiffies 4298338848 (age 1144.337s)
>>>   hex dump (first 32 bytes):
>>>   20 1d 69 63 88 88 ff ff 20 1d 69 63 88 88 ff ff   .ic.... .ic....
>>>   00 60 7a 69 84 88 ff ff 00 60 82 f9 00 00 00 00  .`zi.....`......
>>> backtrace:
>>>   [<000000000df5bfed>] __kmalloc+0x128/0x176
>>>   [<0000000020724641>] qedr_alloc_pbl_tbl.constprop.44+0x3c/0x121 [qedr]
>>>   [<00000000a361c591>] init_mr_info.constprop.41+0xaf/0x21f [qedr]
>>>   [<00000000e8049714>] qedr_alloc_mr+0x95/0x2c1 [qedr]
>>>   [<000000000e6102bc>] ib_alloc_mr_user+0x31/0x96 [ib_core]
>>>   [<00000000d254a9fb>] frwr_init_mr+0x23/0x121 [rpcrdma]
>>>   [<00000000a0364e35>] rpcrdma_mrs_create+0x45/0xea [rpcrdma]
>>>   [<00000000fd6bf282>] rpcrdma_buffer_create+0x9e/0x1c9 [rpcrdma]
>>>   [<00000000be3a1eba>] xprt_setup_rdma+0x109/0x279 [rpcrdma]
>>>   [<00000000b736b88f>] xprt_create_transport+0x39/0x19a [sunrpc]
>>>   [<000000001024e4dc>] rpc_create+0x118/0x1ab [sunrpc]
>>>   [<00000000cca43a49>] nfs_create_rpc_client+0xf8/0x15f [nfs]
>>>   [<00000000073c962c>] nfs_init_client+0x1a/0x3b [nfs]
>>>   [<00000000b03964c4>] nfs_init_server+0xc1/0x212 [nfs]
>>>   [<000000001c71f609>] nfs_create_server+0x74/0x1a4 [nfs]
>>>   [<000000004dc919a1>] nfs3_create_server+0xb/0x25 [nfsv3]
>>> 
>>> It's repeated many times.
>>> 
>>> The workload was an unremarkable software build and regression test
>>> suite on an NFSv3 mount with RDMA.
>> 
>> Also seeing one of these per NFS mount:
>> 
>> unreferenced object 0xffff888869f39b40 (size 64):
>>  comm "kworker/u28:0", pid 17569, jiffies 4299267916 (age 1592.907s)
>>  hex dump (first 32 bytes):
>>    00 80 53 6d 88 88 ff ff 00 00 00 00 00 00 00 00  ..Sm............
>>    00 48 e2 66 84 88 ff ff 00 00 00 00 00 00 00 00  .H.f............
>>  backtrace:
>>    [<0000000063e652dd>] kmem_cache_alloc_trace+0xed/0x133
>>    [<0000000083b1e912>] qedr_iw_connect+0xf9/0x3c8 [qedr]
>>    [<00000000553be951>] iw_cm_connect+0xd0/0x157 [iw_cm]
>>    [<00000000b086730c>] rdma_connect+0x54e/0x5b0 [rdma_cm]
>>    [<00000000d8af3cf2>] rpcrdma_ep_connect+0x22b/0x360 [rpcrdma]
>>    [<000000006a413c8d>] xprt_rdma_connect_worker+0x24/0x88 [rpcrdma]
>>    [<000000001c5b049a>] process_one_work+0x196/0x2c6
>>    [<000000007e3403ba>] worker_thread+0x1ad/0x261
>>    [<000000001daaa973>] kthread+0xf4/0xf9
>>    [<0000000014987b31>] ret_from_fork+0x24/0x30
>> 
>> Looks like this one is not being freed:
>> 
>> 514         ep = kzalloc(sizeof(*ep), GFP_KERNEL);
>> 515         if (!ep)
>> 516                 return -ENOMEM;
>> 
>> 
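>> Completely untested, but the shape of the fix I'd expect is something
>> like the following (issue_the_connect() and err_free_ep are stand-ins,
>> not the real qedr names): free the endpoint on any failure after the
>> allocation, and add a matching kfree() on the connection teardown
>> path, since the leak above shows up even when the connect succeeds.
>> 
>>         ep = kzalloc(sizeof(*ep), GFP_KERNEL);
>>         if (!ep)
>>                 return -ENOMEM;
>> 
>>         /* ... existing endpoint setup ... */
>> 
>>         rc = issue_the_connect(ep);     /* stand-in for the real call */
>>         if (rc)
>>                 goto err_free_ep;       /* don't leak on failure */
>> 
>>         return 0;
>> 
>> err_free_ep:
>>         kfree(ep);
>>         return rc;
>> 
>> ... plus a kfree(ep) wherever the established connection is finally
>> torn down.
>> 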
> Thanks Chuck! I'll take care of this. Is there an easy repro for getting the leak?

Nothing special is necessary. Enable kmemleak detection, then run any NFS/RDMA workload that does some I/O, unmount, and wait a few minutes for the kmemleak laundromat thread to run.
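
For reference, the whole sequence is roughly the following (the server and export are placeholders; 20049 is the usual NFS/RDMA port, and the kernel needs CONFIG_DEBUG_KMEMLEAK). Writing "scan" to the kmemleak debugfs file forces a scan instead of waiting for the background thread:

        mount -o rdma,port=20049 <server>:/export /mnt
        (run some I/O against /mnt)
        umount /mnt
        echo scan > /sys/kernel/debug/kmemleak
        cat /sys/kernel/debug/kmemleak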




