Re: Mount structures are leaked

On Thu, Jun 08, 2017 at 01:49:38PM -0700, Andrei Vagin wrote:
> Hello,
> 
> We found that mount structures are leaked on the upstream linux kernel:
> 
> [root@zdtm criu]# cat /proc/slabinfo | grep mnt
> mnt_cache          36456  36456    384   42    4 : tunables    0    0    0 : slabdata    868    868      0
> [root@zdtm criu]# python test/zdtm.py run -t zdtm/static/env00 --iter 10 -f ns
> === Run 1/1 ================ zdtm/static/env00
> 
> ========================= Run zdtm/static/env00 in ns ==========================
> Start test
> ./env00 --pidfile=env00.pid --outfile=env00.out --envname=ENV_00_TEST
> Run criu dump
> Run criu restore
> Run criu dump
> ....
> Run criu restore
> Send the 15 signal to  339
> Wait for zdtm/static/env00(339) to die for 0.100000
> Removing dump/zdtm/static/env00/31
> ========================= Test zdtm/static/env00 PASS ==========================
> [root@zdtm criu]# cat /proc/slabinfo | grep mnt
> mnt_cache          36834  36834    384   42    4 : tunables    0    0    0 : slabdata    877    877      0
> 
> [root@zdtm linux]# git describe HEAD
> v4.12-rc4-122-gb29794e
> 
> [root@zdtm ~]# uname -a
> Linux zdtm.openvz.org 4.12.0-rc4+ #2 SMP Thu Jun 8 20:49:01 CEST 2017
> x86_64 x86_64 x86_64 GNU/Linux
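
The check above amounts to comparing mnt_cache object counts in
/proc/slabinfo before and after a run.  A minimal Python sketch of that
comparison (the workload command below is a placeholder, not the actual
zdtm invocation, and reading /proc/slabinfo needs root):

import subprocess

def mnt_cache_objs():
    # /proc/slabinfo rows are: name <active_objs> <num_objs> <objsize> ...
    with open("/proc/slabinfo") as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] == "mnt_cache":
                return int(fields[1]), int(fields[2])
    raise RuntimeError("mnt_cache not found in /proc/slabinfo")

before = mnt_cache_objs()
subprocess.run(["true"], check=True)  # placeholder: run the workload here
after = mnt_cache_objs()

print("mnt_cache active/total: %d/%d -> %d/%d" % (before + after))
if after[0] > before[0]:
    # freeing of struct mount can be deferred, so growth in a single
    # sample is a hint, not proof of a leak
    print("possible growth: %d objects" % (after[0] - before[0]))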

For fsck sake...  Andrei, you *do* know better.
	1) I have no idea what setup you have - e.g. whether you have mount event
propagation set up in a way that ends up with mounts accumulating somewhere
(one way to check that is sketched after this list).
	2) I have no idea what those scripts are, and the names don't look descriptive
enough to google for in the hope of finding out (nor do you give the version of
those scripts, if there has been more than one).
	3) I have no idea which config you have.
	4) I have no idea which kernel that is about, other than "rc4 with something
on top of it".
	5) I have no idea how that behaved on other kernels (or how it was
supposed to behave in the first place).
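
As for point (1), the propagation state of every mount is visible in the
optional fields of /proc/self/mountinfo (shared:N, master:N,
propagate_from:N, unbindable; no tag means the mount is private).  A
minimal sketch that dumps it:

# Print mount point and propagation tags for the current mount namespace.
# Per Documentation/filesystems/proc.rst, the optional fields run from
# field 7 up to the standalone "-" separator.
with open("/proc/self/mountinfo") as f:
    for line in f:
        fields = line.split()
        sep = fields.index("-")              # optional fields end at "-"
        mount_point = fields[4]
        tags = fields[6:sep] or ["private"]  # no tags => private mount
        print("%-40s %s" % (mount_point, " ".join(tags)))

A mount tagged shared:N propagates mount/umount events to every peer in
group N, which is exactly the kind of setup that can leave copies of
mounts accumulating in another namespace.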

So it boils down to "we've done something, it has given a result we didn't expect,
the kernel must've been broken".  About the only thing I can suggest at that point is
	telnet bofh.jeffballard.us 666
and see if it provides any inspiration...


