On 2/11/2015 10:17 AM, Sagi Grimberg wrote:
Hey Nic,

Our QA guys recently stepped on this bug when performing stress login-logout from a single initiator to 10 targets, each exposed over 4 portals, so 40 sessions overall (needless to say, we are talking about iSER...). So there are lots of logins in parallel with lots of logouts.

It seems that the connection termination causes iscsi_tx_thread to access the connection after it has been freed, or something along those lines: the list corruption probably comes from iscsit_handle_immediate_queue or iscsit_handle_response_queue, and the NULL dereference comes from iscsit_take_action_for_connection_exit.

Note that isert_wait_conn only waits for the session commands and the QP flush, which is normally pretty fast, while the connection termination is done in a work item that waits for the DISCONNECTED event, which might take longer (which is why we do it outside the wait_conn context, to avoid blocking it).

I haven't gotten very far with this so far. Do you have any idea what might have happened?
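To make the ordering I'm suspecting concrete, here is a minimal userspace sketch of the pattern (plain pthreads; struct conn and termination_work are made-up names, not the actual iser/target code): the connection object must not be freed until the deferred work that waits for the DISCONNECTED event, and anything playing the role of iscsi_tx_thread, is done touching it.

/*
 * Minimal userspace sketch of the suspected ordering problem; plain
 * pthreads with made-up names (struct conn, termination_work), NOT the
 * actual iser/target code.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct conn {
	pthread_mutex_t lock;
	pthread_cond_t  disconnected_cv;
	bool            disconnected;
};

/* Stand-in for the termination work that waits for DISCONNECTED. */
static void *termination_work(void *arg)
{
	struct conn *c = arg;

	pthread_mutex_lock(&c->lock);
	while (!c->disconnected)
		pthread_cond_wait(&c->disconnected_cv, &c->lock);
	pthread_mutex_unlock(&c->lock);

	/* ...teardown that still dereferences c happens here... */
	return NULL;
}

int main(void)
{
	struct conn *c = calloc(1, sizeof(*c));
	pthread_t worker;

	if (!c)
		return 1;
	pthread_mutex_init(&c->lock, NULL);
	pthread_cond_init(&c->disconnected_cv, NULL);
	pthread_create(&worker, NULL, termination_work, c);

	/* The DISCONNECTED event eventually arrives... */
	pthread_mutex_lock(&c->lock);
	c->disconnected = true;
	pthread_cond_signal(&c->disconnected_cv);
	pthread_mutex_unlock(&c->lock);

	/*
	 * Safe ordering: wait for the deferred work before freeing.
	 * Freeing c without this join is the shape of the crash above:
	 * the worker wakes up and touches memory that is already gone.
	 */
	pthread_join(worker, NULL);
	pthread_cond_destroy(&c->disconnected_cv);
	pthread_mutex_destroy(&c->lock);
	free(c);
	return 0;
}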
P.S. Running this same test over iSCSI/TCP does not seem to hit this type of list corruption (yet), but I do get a leak in se_sess_cache when removing the tpgts (a minimal sketch of what triggers this warning is appended below the trace):

kmem_cache_destroy se_sess_cache: Slab cache still has objects
CPU: 8 PID: 5807 Comm: rmmod Tainted: G E 3.19.0-rc1+ #32
Hardware name: Supermicro SYS-1027R-WRF/X9DRW, BIOS 3.0a 08/08/2013
 ffffffffa03a0620 ffff880425da7eb8 ffffffff8153805c 0000000000000001
 ffff880079856e80 ffff880425da7ed8 ffffffff8111b78c 00007fff921047f1
 0000000000000001 ffff880425da7ee8 ffffffffa03841dc ffff880425da7f08
Call Trace:
 [<ffffffff8153805c>] dump_stack+0x48/0x5c
 [<ffffffff8111b78c>] kmem_cache_destroy+0x7c/0xa0
 [<ffffffffa03841dc>] release_se_kmem_caches+0x1c/0x80 [target_core_mod]
 [<ffffffffa039018d>] target_core_exit_configfs+0x11d/0x122 [target_core_mod]
 [<ffffffff810c018a>] SyS_delete_module+0x17a/0x1c0
 [<ffffffff8112cd87>] ? SyS_munmap+0x27/0x40
 [<ffffffff8153bf92>] system_call_fastpath+0x12/0x17

Sagi.
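For reference, kmem_cache_destroy() prints the "Slab cache still has objects" warning whenever it is called while objects allocated from that cache are still live, so apparently some se_session was never released before target_core_exit_configfs() tore the caches down. A minimal, hypothetical module sketch of that pattern (demo_cache and the demo_* names are made up, not the target code):

/*
 * Illustrative module sketch (made-up names, not the target code) of how
 * the warning above gets triggered: an object allocated from a kmem_cache
 * is still live when the exit path calls kmem_cache_destroy().
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>

static struct kmem_cache *demo_cache;
static void *leaked_obj;

static int __init demo_init(void)
{
	demo_cache = kmem_cache_create("demo_cache", 128, 0, 0, NULL);
	if (!demo_cache)
		return -ENOMEM;

	/* Allocate one object and never free it: the leak. */
	leaked_obj = kmem_cache_alloc(demo_cache, GFP_KERNEL);
	if (!leaked_obj) {
		kmem_cache_destroy(demo_cache);
		return -ENOMEM;
	}
	return 0;
}

static void __exit demo_exit(void)
{
	/*
	 * No kmem_cache_free(demo_cache, leaked_obj) here, so the destroy
	 * below complains that the slab cache still has objects and dumps
	 * the stack, just like the rmmod trace above for se_sess_cache.
	 */
	kmem_cache_destroy(demo_cache);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");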