Raghavendra,
This error occurs in a shell script that moves files between directories on a FUSE mount when it overwrites an old file with a newer one (it's a backup script that moves an incremental backup of a file into a 'rolling full backup' directory).
As a temporary workaround, we parse the output of this shell script for move errors and handle the errors as they happen. Simply re-moving the files fails, so we stat the destination (to see if we can learn anything about the type of file that causes this behavior), delete the destination, and try the move again (success!). Typical output is as follows:
/bin/mv: cannot move `./homegfs/hpc_shared/motorsports/gmics/Raven/p11/149/data_collected4' to `../bkp00/./homegfs/hpc_shared/motorsports/gmics/Raven/p11/149/data_collected4': File exists
/bin/mv: cannot move `./homegfs/hpc_shared/motorsports/gmics/Raven/p11/149/data_collected4' to `../bkp00/./homegfs/hpc_shared/motorsports/gmics/Raven/p11/149/data_collected4': File exists
File: `../bkp00/./homegfs/hpc_shared/motorsports/gmics/Raven/p11/149/data_collected4'
Size: 1714 Blocks: 4 IO Block: 131072 regular file
Device: 13h/19d Inode: 11051758947722304158 Links: 1
Access: (0660/-rw-rw----) Uid: ( 628/pkeistler) Gid: ( 2020/ gmirl)
Access: 2016-01-20 17:20:45.000000000 -0500
Modify: 2015-11-06 15:20:41.000000000 -0500
Change: 2016-01-27 03:35:00.434712146 -0500
retry: renaming ./homegfs/hpc_shared/motorsports/gmics/Raven/p11/149/data_collected4 -> ../bkp00/./homegfs/hpc_shared/motorsports/gmics/Raven/p11/149/data_collected4
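The workaround above can be sketched as a small shell function. This is illustrative, not our production script; the function and variable names are made up, and the real script parses mv's stderr rather than wrapping each call:

```shell
#!/bin/sh
# Sketch of the workaround: try the move; on failure, stat the
# destination (to learn what kind of file blocks the rename),
# delete it, and retry.
move_with_retry() {
    src=$1 dst=$2
    if ! /bin/mv -f "$src" "$dst" 2>/dev/null; then
        stat "$dst"                              # inspect the blocking file
        rm -f "$dst"                             # delete the stale destination
        echo "retry: renaming $src -> $dst"
        /bin/mv -f "$src" "$dst"                 # second attempt succeeds
    fi
}

# Throwaway demo on a local directory
tmp=$(mktemp -d)
echo new > "$tmp/incremental"
echo old > "$tmp/full"
move_with_retry "$tmp/incremental" "$tmp/full"
cat "$tmp/full"    # prints "new"
rm -rf "$tmp"
```

On a local filesystem the first `mv -f` always wins, so the retry path only triggers on the FUSE mount where the rename fails with EEXIST.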
Not sure if that description rings any bells as to what the problem might be, but in case it doesn't, I added some code to print out the 'getattr' info for the source and destination file on all of the bricks (before we delete the destination) and will post it to this thread the next time we hit the issue.
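For reference, capturing the per-brick gfid that Raghavendra asks about below might look like this (a sketch only: the brick path is illustrative, and getfattr must be run as root on each brick host, since trusted.* xattrs are not readable by ordinary users):

```shell
# On each brick host, as root. The brick root and file path below are
# examples taken from this thread, not a command we have verified there.
getfattr -n trusted.gfid -e hex \
    /data/brick01bkp/gfsbackup/homegfs/hpc_shared/motorsports/gmics/Raven/p11/149/data_collected4
```

Matching hex values across bricks mean the entries share one gfid; a mismatch between the source and destination entries would be worth reporting.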
Thanks,
Patrick
On Fri, Apr 29, 2016 at 8:15 AM, Raghavendra G <raghavendra@xxxxxxxxxxx> wrote:
On Wed, Apr 13, 2016 at 10:00 PM, David F. Robinson <david.robinson@xxxxxxxxxxxxx> wrote:

> I am running into two problems (possibly related?).
>
> 1) Every once in a while, when I do a 'rm -rf DIRNAME', it comes back with an error:
>
>     rm: cannot remove `DIRNAME`: Directory not empty
>
> If I try the 'rm -rf' again after the error, it deletes the directory. The issue is that I have scripts that clean up directories, and they fail unless I go through the deletes a second time.

What kind of mount are you using? Is it a FUSE or NFS mount? Recently we saw a similar issue on NFS clients on RHEL6, where 'rm -rf' used to fail with ENOTEMPTY in some specific cases.

> 2) I have different scripts that move large numbers of files (5-25k) from one directory to another. Sometimes I receive an error:
>
>     /bin/mv: cannot move `xyz` to `../bkp00/xyz`: File exists

Does ../bkp00/xyz exist on the backend? If yes, what is the value of the gfid xattr (key: "trusted.gfid") for "xyz" and "../bkp00/xyz" on the backend bricks (I need the gfid from all the bricks) when this issue happens?

> The move is done using '/bin/mv -f', so it should overwrite the file if it exists. I have tested this with hundreds of files, and it works as expected. However, every few days the script that moves the files has problems with 1 or 2 files during the move.
> This is one move problem out of roughly 10,000 files being moved, and I cannot figure out any reason for the intermittent problem. Setup details for my gluster configuration are shown below.
>
> [root@gfs01bkp logs]# gluster volume info
>
> Volume Name: gfsbackup
> Type: Distribute
> Volume ID: e78d5123-d9bc-4d88-9c73-61d28abf0b41
> Status: Started
> Number of Bricks: 7
> Transport-type: tcp
> Bricks:
> Brick1: gfsib01bkp.corvidtec.com:/data/brick01bkp/gfsbackup
> Brick2: gfsib01bkp.corvidtec.com:/data/brick02bkp/gfsbackup
> Brick3: gfsib02bkp.corvidtec.com:/data/brick01bkp/gfsbackup
> Brick4: gfsib02bkp.corvidtec.com:/data/brick02bkp/gfsbackup
> Brick5: gfsib02bkp.corvidtec.com:/data/brick03bkp/gfsbackup
> Brick6: gfsib02bkp.corvidtec.com:/data/brick04bkp/gfsbackup
> Brick7: gfsib02bkp.corvidtec.com:/data/brick05bkp/gfsbackup
> Options Reconfigured:
> nfs.disable: off
> server.allow-insecure: on
> storage.owner-gid: 100
> server.manage-gids: on
> cluster.lookup-optimize: on
> server.event-threads: 8
> client.event-threads: 8
> changelog.changelog: off
> storage.build-pgfid: on
> performance.readdir-ahead: on
> diagnostics.brick-log-level: WARNING
> diagnostics.client-log-level: WARNING
> cluster.rebal-throttle: aggressive
> performance.cache-size: 1024MB
> performance.write-behind-window-size: 10MB
>
> [root@gfs01bkp logs]# rpm -qa | grep gluster
> glusterfs-server-3.7.9-1.el6.x86_64
> glusterfs-debuginfo-3.7.9-1.el6.x86_64
> glusterfs-api-3.7.9-1.el6.x86_64
> glusterfs-resource-agents-3.7.9-1.el6.noarch
> gluster-nagios-common-0.1.1-0.el6.noarch
> glusterfs-libs-3.7.9-1.el6.x86_64
> glusterfs-fuse-3.7.9-1.el6.x86_64
> glusterfs-extra-xlators-3.7.9-1.el6.x86_64
> glusterfs-geo-replication-3.7.9-1.el6.x86_64
> glusterfs-3.7.9-1.el6.x86_64
> glusterfs-cli-3.7.9-1.el6.x86_64
> glusterfs-devel-3.7.9-1.el6.x86_64
> glusterfs-rdma-3.7.9-1.el6.x86_64
> samba-vfs-glusterfs-4.1.11-2.el6.x86_64
> glusterfs-client-xlators-3.7.9-1.el6.x86_64
> glusterfs-api-devel-3.7.9-1.el6.x86_64
> python-gluster-3.7.9-1.el6.noarch
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel

--
Raghavendra G
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users