On Wed, May 18, 2022 at 2:05 PM Xiubo Li <xiubli@xxxxxxxxxx> wrote:
>
>
> On 5/18/22 7:53 PM, Ilya Dryomov wrote:
> > On Tue, May 10, 2022 at 5:27 AM Xiubo Li <xiubli@xxxxxxxxxx> wrote:
> >> When unmounting, if there are still caps or capsnaps being flushed
> >> that have not yet received their acks, we will wait for them and
> >> dump them. But the capsnaps never initialize ->ci, so dereferencing
> >> ->ci triggers a kernel crash:
> >>
> >> kernel: ceph: dump_cap_flushes: still waiting for cap flushes through 45572:
> >> kernel: ceph: 5000000008b:fffffffffffffffe Fw 23183 0 0
> >> kernel: ceph: 5000000008a:fffffffffffffffe Fw 23184 0 0
> >> kernel: ceph: 50000000089:fffffffffffffffe Fw 23185 0 0
> >> kernel: ceph: 50000000084:fffffffffffffffe Fw 23189 0 0
> >> kernel: ceph: 5000000007a:fffffffffffffffe Fw 23199 0 0
> >> kernel: ceph: 50000000094:fffffffffffffffe Fw 23374 0 0
> >> kernel: ceph: 50000000092:fffffffffffffffe Fw 23377 0 0
> >> kernel: ceph: 50000000091:fffffffffffffffe Fw 23378 0 0
> >> kernel: ceph: 5000000008e:fffffffffffffffe Fw 23380 0 0
> >> kernel: ceph: 50000000087:fffffffffffffffe Fw 23382 0 0
> >> kernel: ceph: 50000000086:fffffffffffffffe Fw 23383 0 0
> >> kernel: ceph: 50000000083:fffffffffffffffe Fw 23384 0 0
> >> kernel: ceph: 50000000082:fffffffffffffffe Fw 23385 0 0
> >> kernel: ceph: 50000000081:fffffffffffffffe Fw 23386 0 0
> >> kernel: ceph: 50000000080:fffffffffffffffe Fw 23387 0 0
> >> kernel: ceph: 5000000007e:fffffffffffffffe Fw 23389 0 0
> >> kernel: ceph: 5000000007b:fffffffffffffffe Fw 23392 0 0
> >> kernel: BUG: kernel NULL pointer dereference, address: 0000000000000780
> >> kernel: #PF: supervisor read access in kernel mode
> >> kernel: #PF: error_code(0x0000) - not-present page
> >> kernel: PGD 0 P4D 0
> >> kernel: Oops: 0000 [#1] PREEMPT SMP PTI
> >> kernel: CPU: 3 PID: 46268 Comm: umount Tainted: G S 5.18.0-rc2-ceph-g1771083b2f18 #1
> >> kernel: Hardware name: Supermicro SYS-5018R-WR/X10SRW-F, BIOS 2.0 12/17/2015
> >> kernel: RIP: 0010:ceph_mdsc_sync.cold.64+0x77/0xc3 [ceph]
> >> kernel: RSP: 0018:ffffc90009c4fda8 EFLAGS: 00010212
> >> kernel: RAX: 0000000000000000 RBX: ffff8881abf63000 RCX: 0000000000000000
> >> kernel: RDX: 0000000000000000 RSI: ffffffff823932ad RDI: 0000000000000000
> >> kernel: RBP: ffff8881abf634f0 R08: 0000000000005dc7 R09: c0000000ffffdfff
> >> kernel: R10: 0000000000000001 R11: ffffc90009c4fbc8 R12: 0000000000000001
> >> kernel: R13: 000000000000b204 R14: ffffffffa0ab3598 R15: ffff88815d36a110
> >> kernel: FS:  00007f50eb25e080(0000) GS:ffff88885fcc0000(0000) knlGS:0000000000000000
> >> kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> >> kernel: CR2: 0000000000000780 CR3: 0000000116ea2003 CR4: 00000000003706e0
> >> kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> >> kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> >> kernel: Call Trace:
> >> kernel:  <TASK>
> >> kernel:  ? schedstat_stop+0x10/0x10
> >> kernel:  ceph_sync_fs+0x2c/0x100 [ceph]
> >> kernel:  sync_filesystem+0x6d/0x90
> >> kernel:  generic_shutdown_super+0x22/0x120
> >> kernel:  kill_anon_super+0x14/0x30
> >> kernel:  ceph_kill_sb+0x36/0x90 [ceph]
> >> kernel:  deactivate_locked_super+0x29/0x60
> >> kernel:  cleanup_mnt+0xb8/0x140
> >> kernel:  task_work_run+0x6d/0xb0
> >> kernel:  exit_to_user_mode_prepare+0x226/0x230
> >> kernel:  syscall_exit_to_user_mode+0x25/0x60
> >> kernel:  do_syscall_64+0x40/0x80
> >> kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
> >>
> >> Cc: stable@xxxxxxxxxxxxxxx
> >> https://tracker.ceph.com/issues/55332
> >> Signed-off-by: Xiubo Li <xiubli@xxxxxxxxxx>
> >> ---
> >>  fs/ceph/mds_client.c | 5 +++--
> >>  fs/ceph/snap.c       | 1 +
> >>  2 files changed, 4 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> >> index 46a13ea9d284..e8c87dea0551 100644
> >> --- a/fs/ceph/mds_client.c
> >> +++ b/fs/ceph/mds_client.c
> >> @@ -2001,10 +2001,11 @@ static void dump_cap_flushes(struct ceph_mds_client *mdsc, u64 want_tid)
> >>  	list_for_each_entry(cf, &mdsc->cap_flush_list, g_list) {
> >>  		if (cf->tid > want_tid)
> >>  			break;
> >> -		pr_info("%llx:%llx %s %llu %llu %d\n",
> >> +		pr_info("%llx:%llx %s %llu %llu %d%s\n",
> >>  			ceph_vinop(&cf->ci->vfs_inode),
> >>  			ceph_cap_string(cf->caps), cf->tid,
> >> -			cf->ci->i_last_cap_flush_ack, cf->wake);
> >> +			cf->ci->i_last_cap_flush_ack, cf->wake,
> >> +			cf->is_capsnap ? " is_capsnap" : "");
> >>  	}
> >>  	spin_unlock(&mdsc->cap_dirty_lock);
> >>  }
> >> diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
> >> index 322ee5add942..db1433ce666e 100644
> >> --- a/fs/ceph/snap.c
> >> +++ b/fs/ceph/snap.c
> >> @@ -585,6 +585,7 @@ static void ceph_queue_cap_snap(struct ceph_inode_info *ci,
> >>  	     ceph_cap_string(dirty), capsnap->need_flush ? "" : "no_flush");
> >>  	ihold(inode);
> >>
> >> +	capsnap->cap_flush.ci = ci;
> >>  	capsnap->follows = old_snapc->seq;
> >>  	capsnap->issued = __ceph_caps_issued(ci, NULL);
> >>  	capsnap->dirty = dirty;
> >> --
> >> 2.36.0.rc1
> >>
> > Hi Xiubo,
> >
> > dump_cap_flushes() is not upstream. Can this NULL dereference occur
> > elsewhere or only when printing cap flushes? In the latter case, this
> > should just be folded into "ceph: dump info about cap flushes when
> > we're waiting too long for them" in the testing branch.
>
> Okay, I checked this again: it can only occur in the dump_cap_flushes()
> case. Let's fold it into the previous one in the testing branch.

Done!

Ilya
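
For context, the crash boils down to an embedded struct ceph_cap_flush
whose ->ci back-pointer is only ever set on the regular cap-flush path,
never on the capsnap path, so the zero-initialized pointer survives until
the dump loop dereferences it. Below is a minimal userspace sketch of
that pattern; the struct layouts and values are stripped-down stand-ins
for the real fs/ceph types, so treat it as an illustration of the
back-pointer bug and its one-line fix, not the actual kernel code:

/*
 * Minimal userspace sketch (NOT the actual fs/ceph code): every
 * capsnap embeds a cap_flush, and the dump path dereferences cf->ci
 * for each entry.  Before the fix, capsnaps left that pointer NULL.
 */
#include <stdio.h>
#include <stdlib.h>

struct ceph_inode_info {
	unsigned long long ino;			/* hypothetical stand-in */
	unsigned long long i_last_cap_flush_ack;
};

struct ceph_cap_flush {
	struct ceph_inode_info *ci;	/* back-pointer to the inode */
	unsigned long long tid;
	int is_capsnap;
};

struct ceph_cap_snap {
	struct ceph_cap_flush cap_flush;	/* embedded, zeroed on alloc */
};

/* Mirrors what dump_cap_flushes() prints: crashes if cf->ci is NULL. */
static void dump_cap_flush(const struct ceph_cap_flush *cf)
{
	printf("%llx %llu %llu%s\n", cf->ci->ino, cf->tid,
	       cf->ci->i_last_cap_flush_ack,
	       cf->is_capsnap ? " is_capsnap" : "");
}

/* The one-line fix: set the back-pointer when the capsnap is queued. */
static void queue_cap_snap(struct ceph_cap_snap *capsnap,
			   struct ceph_inode_info *ci)
{
	capsnap->cap_flush.ci = ci;	/* what the patch adds */
	capsnap->cap_flush.is_capsnap = 1;
}

int main(void)
{
	struct ceph_inode_info ci = { .ino = 0x10000000001ULL };
	struct ceph_cap_snap *capsnap = calloc(1, sizeof(*capsnap));

	if (!capsnap)
		return 1;
	queue_cap_snap(capsnap, &ci);	/* skip this and cf->ci stays NULL */
	capsnap->cap_flush.tid = 23183;
	dump_cap_flush(&capsnap->cap_flush);	/* would oops in the kernel */
	free(capsnap);
	return 0;
}

In the kernel, the dereference happens while walking mdsc->cap_flush_list
under mdsc->cap_dirty_lock, which is why the oops above fires out of
ceph_mdsc_sync() during umount rather than at the point where the capsnap
was queued.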