Hi -

Are you sure all NFS clients are unmounted? For example, suppose you have two NFS mounts of the volume, on machines client1 and client2. If you delete a file at client1 and then umount client1, you will still see the deleted file (as an open descriptor under /proc on the brick server) until you perform an umount at client2 as well.
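A transcript of that scenario could look like this (a sketch: the host, volume, and file names are made up for illustration, with client2 holding the file open):

    client1# mount server:/vol /mnt
    client2# mount server:/vol /mnt
    client2# tail -f /mnt/foo &        # client2 still holds foo open
    client1# rm /mnt/foo
    client1# umount /mnt               # brick space is NOT freed yet
    client2# kill %1; umount /mnt      # only now can the brick release the inode

As long as any client keeps the file open, the brick-side daemon holds the descriptor and df on the brick will not shrink.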
Cheers,
Lakshmipathi.G
FOSS Programmer.

________________________________________
From: gluster-users-bounces at gluster.org [gluster-users-bounces at gluster.org] on behalf of Tomoaki Sato [tsato at valinux.co.jp]
Sent: Monday, August 15, 2011 9:24 PM
To: gluster-users at gluster.org
Subject: Re: deleted files make bricks full ?

Hi,

I've reported a trouble with 3.1.5-1 as mentioned below, and I can see the same issue with 3.1.6-1 too.
Why does glusterfsd keep open the files that NFS clients have deleted?
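As the logs below show, the stale descriptors are visible under /proc on the brick servers. A quick way to scan a brick for them is something like this (a sketch; it assumes the brick processes are named glusterfsd, as in the ps output below):

    # on a brick server: list deleted-but-still-open files per glusterfsd process
    for pid in $(pgrep glusterfsd); do
        echo "== glusterfsd pid $pid =="
        ls -l /proc/$pid/fd 2>/dev/null | grep '(deleted)'
    done

Every '(deleted)' entry keeps its blocks allocated on the brick file system until the descriptor is closed.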
Best,

(2011/08/02 8:43), Tomoaki Sato wrote:
> a simple way to reproduce the issue:
> 1) NFS mount, create 'foo', and umount.
> 2) NFS mount, delete 'foo', and umount.
> 3) repeat 1) and 2) until ENOSPC.
>
> command logs follow:
> [root at vhead-010 ~]# rpm -qa | grep gluster
> glusterfs-fuse-3.1.5-1
> glusterfs-core-3.1.5-1
> [root at vhead-010 ~]# cat /etc/issue
> CentOS release 5.6 (Final)
> Kernel \r on an \m
>
> [root at vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cde:00002cdf:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002ceb:00002cec:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cf8:00002cf9:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002d05:00002d06:8
>                      103212320    192256 103020064   1% /mnt/brick
> [root at vhead-010 ~]# mount small:/small /mnt
> [root at vhead-010 ~]# ls /mnt
> [root at vhead-010 ~]# dd if=/dev/zero of=/mnt/foo bs=1M count=1024
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 17.8419 seconds, 60.2 MB/s
> [root at vhead-010 ~]# ls -l /mnt/foo
> -rw-r--r-- 1 root root 1073741824 Aug 2 08:14 /mnt/foo
> [root at vhead-010 ~]# umount /mnt
> [root at vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cde:00002cdf:8
>                      103212320   1241864 101970456   2% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002ceb:00002cec:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cf8:00002cf9:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002d05:00002d06:8
>                      103212320    192256 103020064   1% /mnt/brick
> [root at vhead-010 ~]# mount small:/small /mnt
> [root at vhead-010 ~]# rm -f /mnt/foo
> [root at vhead-010 ~]# ls /mnt
> [root at vhead-010 ~]# umount /mnt
> [root at vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cde:00002cdf:8
>                      103212320   1241864 101970456   2% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002ceb:00002cec:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cf8:00002cf9:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002d05:00002d06:8
>                      103212320    192256 103020064   1% /mnt/brick
> [root at vhead-010 ~]# ssh small-1-4-private
> [root at localhost ~]# du /mnt/brick
> 16      /mnt/brick/lost+found
> 24      /mnt/brick
> [root at localhost ~]# ps ax | grep glusterfsd | grep -v grep
>  7246 ?        Ssl    0:03 /opt/glusterfs/3.1.5/sbin/glusterfsd --xlator-option small-server.listen-port=24009 -s localhost --volfile-id small.small-1-4-private.mnt-brick -p /etc/glusterd/vols/small/run/small-1-4-private-mnt-brick.pid --brick-name /mnt/brick --brick-port 24009 -l /var/log/glusterfs/bricks/mnt-brick.log
> [root at localhost ~]# ls -l /proc/7246/fd
> total 0
> lrwx------ 1 root root 64 Aug 2 08:18 0 -> /dev/null
> lrwx------ 1 root root 64 Aug 2 08:18 1 -> /dev/null
> lrwx------ 1 root root 64 Aug 2 08:18 10 -> socket:[153304]
> lrwx------ 1 root root 64 Aug 2 08:18 11 -> socket:[153306]
> lrwx------ 1 root root 64 Aug 2 08:18 12 -> socket:[153388]
> lrwx------ 1 root root 64 Aug 2 08:18 13 -> /mnt/brick/foo (deleted) <====
> lrwx------ 1 root root 64 Aug 2 08:18 2 -> /dev/null
> lr-x------ 1 root root 64 Aug 2 08:18 3 -> eventpoll:[153252]
> l-wx------ 1 root root 64 Aug 2 08:18 4 -> /var/log/glusterfs/bricks/mnt-brick.log
> lrwx------ 1 root root 64 Aug 2 08:18 5 -> /etc/glusterd/vols/small/run/small-1-4-private-mnt-brick.pid
> lrwx------ 1 root root 64 Aug 2 08:18 6 -> socket:[153257]
> lrwx------ 1 root root 64 Aug 2 08:18 7 -> socket:[153301]
> lrwx------ 1 root root 64 Aug 2 08:18 8 -> /tmp/tmpfpuXk7N (deleted)
> lrwx------ 1 root root 64 Aug 2 08:18 9 -> socket:[153297]
> [root at localhost ~]# exit
> [root at vhead-010 ~]# mount small:/small /mnt
> [root at vhead-010 ~]# dd if=/dev/zero of=/mnt/foo bs=1M count=1024
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 21.4717 seconds, 50.0 MB/s
> [root at vhead-010 ~]# ls -l /mnt/foo
> -rw-r--r-- 1 root root 1073741824 Aug 2 08:19 /mnt/foo
> [root at vhead-010 ~]# umount /mnt
> [root at vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cde:00002cdf:8
>                      103212320   2291472 100920848   3% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002ceb:00002cec:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cf8:00002cf9:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002d05:00002d06:8
>                      103212320    192256 103020064   1% /mnt/brick
> [root at vhead-010 ~]# mount small:/small /mnt
> [root at vhead-010 ~]# rm -f /mnt/foo
> [root at vhead-010 ~]# ls /mnt
> [root at vhead-010 ~]# umount /mnt
> [root at vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cde:00002cdf:8
>                      103212320   2291472 100920848   3% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002ceb:00002cec:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cf8:00002cf9:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002d05:00002d06:8
>                      103212320    192256 103020064   1% /mnt/brick
> [root at vhead-010 ~]# ssh small-1-4-private ls -l /proc/7246/fd
> total 0
> lrwx------ 1 root root 64 Aug 2 08:18 0 -> /dev/null
> lrwx------ 1 root root 64 Aug 2 08:18 1 -> /dev/null
> lrwx------ 1 root root 64 Aug 2 08:18 10 -> socket:[153304]
> lrwx------ 1 root root 64 Aug 2 08:18 11 -> socket:[153306]
> lrwx------ 1 root root 64 Aug 2 08:18 12 -> socket:[153388]
> lrwx------ 1 root root 64 Aug 2 08:18 13 -> /mnt/brick/foo (deleted) <====
> lrwx------ 1 root root 64 Aug 2 08:21 14 -> /mnt/brick/foo (deleted) <====
> lrwx------ 1 root root 64 Aug 2 08:18 2 -> /dev/null
> lr-x------ 1 root root 64 Aug 2 08:18 3 -> eventpoll:[153252]
> l-wx------ 1 root root 64 Aug 2 08:18 4 -> /var/log/glusterfs/bricks/mnt-brick.log
> lrwx------ 1 root root 64 Aug 2 08:18 5 -> /etc/glusterd/vols/small/run/small-1-4-private-mnt-brick.pid
> lrwx------ 1 root root 64 Aug 2 08:18 6 -> socket:[153257]
> lrwx------ 1 root root 64 Aug 2 08:18 7 -> socket:[153301]
> lrwx------ 1 root root 64 Aug 2 08:18 8 -> /tmp/tmpfpuXk7N (deleted)
> lrwx------ 1 root root 64 Aug 2 08:18 9 -> socket:[153297]
>
> Tomo Sato
>
> (2011/08/02 7:14), Tomoaki Sato wrote:
>> Hi,
>>
>> My simple test program, which repeats a create-write-read-delete cycle over 64 1 GB files on 100 GB x 4 bricks from 4 NFS clients, fails due to ENOSPC.
>> I found that some glusterfsd processes hold many file descriptors to deleted files of the same name, and these files fill the bricks.
>> Is this a known issue?
>>
>> Best,
>>
>> Tomo Sato

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
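For reference, the reproduction steps quoted above amount to a loop along these lines (a sketch: the volume name, mount point, and file size are taken from the logs; the iteration count and the ENOSPC check are assumptions):

    #!/bin/sh
    # Repeat the NFS create/delete cycle until the bricks fill up.
    for n in $(seq 1 100); do
        mount small:/small /mnt
        if ! dd if=/dev/zero of=/mnt/foo bs=1M count=1024; then
            umount /mnt                # dd failed, presumably with ENOSPC
            break
        fi
        umount /mnt
        mount small:/small /mnt
        rm -f /mnt/foo
        umount /mnt
    done

Each pass leaves another deleted-but-open 'foo' behind on a brick, so Used in the df output grows by roughly 1 GB per iteration even though the volume looks empty.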