Re: R: nfs cluster, problem with delete file in the failover case

On Wed, May 13, 2015 at 11:38:51AM +0000, Cao, Vinh wrote:
> Sounds like the process that created the file still has it open while
> you are moving it to another node.

If I understand correctly, the filesystem can still be unmounted.  If a
process held a file on the filesystem open, an unmount attempt would
return -EBUSY.
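
(For what it's worth, that behaviour is easy to check from userspace.
Below is a minimal C sketch, assuming an example mount point of
/export/nfs and root privileges; it just attempts the unmount and
reports EBUSY if something still holds a file open.)

    /* Minimal sketch: try to unmount a filesystem and report what
     * happens.  The mount point is only an example path, not anything
     * from this thread.  Needs to run as root. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mount.h>

    int main(void)
    {
            const char *mnt = "/export/nfs";   /* example mount point */

            if (umount2(mnt, 0) == 0)
                    printf("%s unmounted cleanly (no open files)\n", mnt);
            else if (errno == EBUSY)
                    printf("%s is busy: some process still has a file open\n", mnt);
            else
                    printf("umount %s: %s\n", mnt, strerror(errno));
            return 0;
    }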

--b.

> Meaning you are deleting the file and doing the failover at the same
> time.  This has nothing to do with your cluster setup.
> 
> I believe you can run the lsof command on the system where you see
> that the disk space is still not cleaned up, then grep for the
> "deleted" entries.  You may see the process number that is still
> holding the file.  Kill that process and it will clean up the file
> handle that is still open.
> 
> That is how I see your problem. I don't think it has anything to do
> with the OS cluster.
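
To illustrate the lsof suggestion quoted above: lsof flags
open-but-unlinked files as "(deleted)", and the same information can be
read directly from /proc.  Here is a rough, self-contained C sketch,
again assuming an example mount point of /export/nfs (run it as root to
see other users' file descriptors):

    /* Rough equivalent of "lsof | grep deleted" for one mount point:
     * walk /proc, readlink every open fd, and print processes that
     * still hold a deleted file open under the example mount point. */
    #include <dirent.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            const char *mnt = "/export/nfs";   /* example mount point */
            DIR *proc = opendir("/proc");
            struct dirent *pe;

            if (!proc) {
                    perror("opendir /proc");
                    return 1;
            }
            while ((pe = readdir(proc)) != NULL) {
                    char fddir[288], path[576], target[4096];
                    DIR *fds;
                    struct dirent *fe;

                    if (atoi(pe->d_name) <= 0)   /* numeric pid dirs only */
                            continue;
                    snprintf(fddir, sizeof(fddir), "/proc/%s/fd", pe->d_name);
                    fds = opendir(fddir);
                    if (!fds)                    /* needs root for other users */
                            continue;
                    while ((fe = readdir(fds)) != NULL) {
                            ssize_t n;

                            snprintf(path, sizeof(path), "%s/%s", fddir, fe->d_name);
                            n = readlink(path, target, sizeof(target) - 1);
                            if (n <= 0)
                                    continue;
                            target[n] = '\0';
                            if (strstr(target, " (deleted)") &&
                                strncmp(target, mnt, strlen(mnt)) == 0)
                                    printf("pid %s still holds %s\n",
                                           pe->d_name, target);
                    }
                    closedir(fds);
            }
            closedir(proc);
            return 0;
    }

Whatever PID it prints is the one still pinning the deleted file's
blocks; once that process exits or is killed, the space should be
released.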

-- 
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
