On Wed, Feb 06, 2019 at 11:53:54AM -0800, Salman Qazi wrote:
> This patch solves the issue by removing synchronize_rcu from mq_put_mnt.
> This is done by implementing an asynchronous version of kern_unmount.
>
> Since mntput() sleeps, it needs to be deferred to a work queue.
>
> Additionally, the callers of mq_put_mnt appear to be safe having
> it behave asynchronously.  In particular, put_ipc_ns calls
> mq_clear_sbinfo, which renders the inode inaccessible for the purposes
> of mqueue_create by making s_fs_info NULL.  This appears
> to be the thing that prevents access while free_ipc_ns is taking place.
> So, the unmount should be able to proceed lazily.

Ugh...  I really doubt that it's correct.  The caller is

	mq_put_mnt(ns);
	free_ipc_ns(ns);

and we have

	static void mqueue_evict_inode(struct inode *inode)
	{
		...
		ipc_ns = get_ns_from_inode(inode);

with

	static struct ipc_namespace *get_ns_from_inode(struct inode *inode)
	{
		struct ipc_namespace *ns;

		spin_lock(&mq_lock);
		ns = __get_ns_from_inode(inode);
		spin_unlock(&mq_lock);
		return ns;
	}

and

	static inline struct ipc_namespace *__get_ns_from_inode(struct inode *inode)
	{
		return get_ipc_ns(inode->i_sb->s_fs_info);
	}

with ->s_fs_info being the ipc_namespace we are freeing after mq_put_mnt().

Are you saying that get_ipc_ns() after free_ipc_ns() is safe?  Because
->evict_inode() *IS* called on umount.  What happens to your patch if
there was a regular file left on that filesystem?

Smells like a memory corruptor...
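
[For concreteness, the deferred unmount being discussed presumably looks
something like the sketch below.  This is a hypothetical reconstruction,
not the actual patch: kern_unmount_async(), unmount_workfn() and
struct unmount_work are names made up for illustration.]

	/* sketch; would live in fs/namespace.c next to kern_unmount() */
	#include <linux/mount.h>
	#include <linux/workqueue.h>
	#include <linux/slab.h>
	#include <linux/rcupdate.h>
	#include <linux/err.h>
	#include "mount.h"	/* fs-internal, for real_mount() */

	struct unmount_work {
		struct work_struct work;
		struct vfsmount *mnt;
	};

	static void unmount_workfn(struct work_struct *work)
	{
		struct unmount_work *uw =
			container_of(work, struct unmount_work, work);

		synchronize_rcu();	/* wait out lockless path walkers */
		mntput(uw->mnt);	/* may sleep; fine in a work item */
		kfree(uw);
	}

	void kern_unmount_async(struct vfsmount *mnt)
	{
		struct unmount_work *uw;

		if (IS_ERR_OR_NULL(mnt))
			return;

		uw = kmalloc(sizeof(*uw), GFP_KERNEL);
		if (!uw) {
			/* fall back to the old synchronous path */
			real_mount(mnt)->mnt_ns = NULL;
			synchronize_rcu();
			mntput(mnt);
			return;
		}

		real_mount(mnt)->mnt_ns = NULL;
		uw->mnt = mnt;
		INIT_WORK(&uw->work, unmount_workfn);
		schedule_work(&uw->work);
	}

The objection above is about the window this opens: by the time
unmount_workfn() runs, evicting any remaining inodes goes through
mqueue_evict_inode() -> get_ns_from_inode(), but free_ipc_ns() has
already completed, so get_ipc_ns(inode->i_sb->s_fs_info) dereferences
(and bumps a refcount inside) freed memory.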