Re: BUG: ceph_inode_cachep and ceph_dentry_cachep caches are not clean when destroying

On Wed, 2020-02-19 at 19:29 +0800, Xiubo Li wrote:
> On 2020/2/19 19:27, Ilya Dryomov wrote:
> > On Wed, Feb 19, 2020 at 12:01 PM Xiubo Li <xiubli@xxxxxxxxxx> wrote:
> > > On 2020/2/19 18:53, Ilya Dryomov wrote:
> > > > On Wed, Feb 19, 2020 at 10:39 AM Xiubo Li <xiubli@xxxxxxxxxx> wrote:
> > > > > Hi Jeff, Ilya and all
> > > > > 
> > > > > I hit these call traces by running some test cases when unmounting
> > > > > the fs mount points.
> > > > > 
> > > > > It seems there are still some inodes or dentries that were not destroyed.
> > > > > 
> > > > > Will this be a problem? Any ideas?
> > > > Hi Xiubo,
> > > > 
> > > > Of course it is a problem ;)
> > > > 
> > > > These are all in ceph_inode_info and ceph_dentry_info caches, but
> > > > I see traces of rbd mappings as well.  Could you please share your
> > > > test cases?  How are you unloading modules?
> > > I am not sure exactly which one triggers it; mostly I was running the
> > > following commands.
> > > 
> > > 1, ./bin/rbd map share -o mount_timeout=30
> > > 
> > > 2, ./bin/rbd unmap share
> > > 
> > > 3, ./bin/mount.ceph :/ /mnt/cephfs/
> > > 
> > > 4, `for i in {0..1000}; do mkdir /mnt/cephfs/dir$i; done` and `for i in
> > > {0..1000}; do rm -rf /mnt/cephfs/dir$i; done`
> > > 
> > > 5, umount /mnt/cephfs/
> > > 
> > > 6, rmmod ceph; rmmod rbd; rmmod libceph
> > > 
> > > So it seems this has nothing to do with the rbd mappings.
> > Is this on more or less plain upstream or with async unlink and
> > possibly other filesystem patches applied?
> 
> Using the latest test branch: 
> https://github.com/ceph/ceph-client/tree/testing.
> 
> thanks
> 

I've run a lot of tests like this and haven't seen this at all. Did you
see any "Busy inodes after umount" messages in dmesg?

I note that your kernel is tainted -- sometimes if you're loading
modules that have subtle ABI incompatibilities, you can end up with
memory corruption like this.
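
To rule that out, it's worth checking where the taint comes from, e.g.:

  # the value is a bitmask; e.g. bit 12 (O) = out-of-tree module,
  # bit 13 (E) = unsigned module
  cat /proc/sys/kernel/tainted
  # per-module taint flags, where present
  cat /sys/module/ceph/taint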

Ideally, we'd come up with a reliable reproducer, if possible.
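
Something like this, consolidating the steps from your earlier mail,
might work as a starting point (just a sketch: the paths and the
"share" image name are taken from your report, so adjust for your
setup):

  #!/bin/sh
  set -e
  ./bin/rbd map share -o mount_timeout=30
  ./bin/rbd unmap share
  ./bin/mount.ceph :/ /mnt/cephfs/
  for i in $(seq 0 1000); do mkdir /mnt/cephfs/dir$i; done
  for i in $(seq 0 1000); do rm -rf /mnt/cephfs/dir$i; done
  umount /mnt/cephfs/
  rmmod ceph; rmmod rbd; rmmod libceph
  # any kmem_cache_destroy or busy-inode warnings should show up here
  dmesg | tail -n 50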
-- 
Jeff Layton <jlayton@xxxxxxxxxx>



