On Thu, May 9, 2024 at 6:26 AM Steve French <smfrench@xxxxxxxxx> wrote:
>
> I saw an example in 6.9-rc6 where ksmbd was failing to send an open
> response repeatedly (it showed up running test cifs/102)

Where is the cifs/102 test? I cannot find it in xfstests.

>
> On the client I see one stuck SMB3.1.1 open request (it never
> returns) with the following call stack:
>
> [root@fedora29 ~]# cat /proc/5042/stack
> [<0>] wait_for_response+0xd1/0x130 [cifs]
> [<0>] compound_send_recv+0x68e/0x10b0 [cifs]
> [<0>] cifs_send_recv+0x23/0x30 [cifs]
> [<0>] SMB2_open+0x378/0xbd0 [cifs]
> [<0>] smb2_open_file+0x171/0x560 [cifs]
> [<0>] cifs_do_create.isra.0+0x471/0xd40 [cifs]
> [<0>] cifs_atomic_open+0x382/0x780 [cifs]
> [<0>] lookup_open.isra.0+0x6b0/0x930
> [<0>] path_openat+0x491/0x10d0
> [<0>] do_filp_open+0x144/0x250
> [<0>] do_sys_openat2+0xe0/0x110
> [<0>] __x64_sys_openat+0xc1/0x120
> [<0>] do_syscall_64+0x78/0x180
> [<0>] entry_SYSCALL_64_after_hwframe+0x76/0x7e
>
> On the server I see repeated failures to send a message on the socket,
> one every five seconds:
> ...
> [Wed May 8 21:13:59 2024] ksmbd: Failed to send message: -11
> [Wed May 8 21:14:04 2024] ksmbd: Failed to send message: -11
> [Wed May 8 21:14:09 2024] ksmbd: Failed to send message: -11
> [Wed May 8 21:14:14 2024] ksmbd: Failed to send message: -11
> [Wed May 8 21:14:20 2024] ksmbd: Failed to send message: -11
> [Wed May 8 21:14:25 2024] ksmbd: Failed to send message: -11
> [Wed May 8 21:14:30 2024] ksmbd: Failed to send message: -11
> [Wed May 8 21:14:35 2024] ksmbd: Failed to send message: -11
> [Wed May 8 21:14:40 2024] ksmbd: Failed to send message: -11
> [Wed May 8 21:14:45 2024] ksmbd: Failed to send message: -11
>
> Running "ksmbd.control -s" did free that request, but the server then
> hangs (and subsequent requests from the client fail). Nothing obvious
> in dmesg on client or server, except the following (an xfs bug?),
> which was logged after the "ksmbd.control -s" on the server:
>
> [24324.546128] Call Trace:
> [24324.546130] <TASK>
> [24324.546134] dump_stack_lvl+0x76/0xa0
> [24324.546141] dump_stack+0x10/0x20
> [24324.546145] xfs_error_report+0x4a/0x70 [xfs]
> [24324.546298] ? xfs_remove+0x175/0x300 [xfs]
> [24324.546440] xfs_trans_cancel+0x14b/0x170 [xfs]
> [24324.546582] xfs_remove+0x175/0x300 [xfs]
> [24324.546724] xfs_vn_unlink+0x53/0xb0 [xfs]
> [24324.546866] vfs_unlink+0x146/0x2e0
> [24324.546872] ksmbd_vfs_unlink+0xa9/0x140 [ksmbd]
> [24324.546888] ? __pfx_session_fd_check+0x10/0x10 [ksmbd]
> [24324.546902] __ksmbd_close_fd+0x2ba/0x2d0 [ksmbd]
> [24324.546916] ? _raw_spin_unlock+0xe/0x40
> [24324.546920] ? __pfx_session_fd_check+0x10/0x10 [ksmbd]
> [24324.546936] __close_file_table_ids+0x60/0xb0 [ksmbd]
> [24324.546950] ksmbd_destroy_file_table+0x22/0x60 [ksmbd]
> [24324.546966] ksmbd_session_destroy+0x5a/0x1b0 [ksmbd]
> [24324.546984] ksmbd_sessions_deregister+0x24c/0x270 [ksmbd]
> [24324.547001] ksmbd_server_terminate_conn+0x12/0x30 [ksmbd]
> [24324.547016] ksmbd_conn_handler_loop+0x203/0x370 [ksmbd]
> [24324.547034] ? __pfx_ksmbd_conn_handler_loop+0x10/0x10 [ksmbd]
> [24324.547050] kthread+0xe4/0x110
> [24324.547054] ? __pfx_kthread+0x10/0x10
> [24324.547058] ret_from_fork+0x47/0x70
> [24324.547062] ? __pfx_kthread+0x10/0x10
> [24324.547065] ret_from_fork_asm+0x1a/0x30
> [24324.547070] </TASK>
>
> Any ideas?
> --
> Thanks,
>
> Steve
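For context on the repeated "Failed to send message: -11" lines above: on Linux, error 11 is EAGAIN (the same value as EWOULDBLOCK), i.e. a non-blocking socket send could not make progress, which fits a peer that has stopped draining the connection. A quick illustrative check of the errno value (not part of ksmbd itself):

```python
import errno

# "Failed to send message: -11" -- kernel code returns negative errno
# values, so -11 corresponds to errno 11.
code = 11

# On Linux, errno 11 is EAGAIN, and EWOULDBLOCK is the same value.
print(errno.errorcode[code])               # name of errno 11
print(errno.EAGAIN == errno.EWOULDBLOCK)   # aliased on Linux
```

This is only a sanity check of what the logged error code means; it says nothing about why the client stopped reading from the socket.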