Hi,
The Gluster version is 3.5.3-1.
During the rm -rf, /var/log/gluster.log (the client log) shows the following:
[2015-02-23 14:42:50.180091] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-2: remote operation failed: Directory not empty
[2015-02-23 14:42:50.180134] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-3: remote operation failed: Directory not empty
[2015-02-23 14:42:50.180740] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-5: remote operation failed: File exists. Path: /linux/suse/12.1/KDE4.7.4/i586
[2015-02-23 14:42:50.180772] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-4: remote operation failed: File exists. Path: /linux/suse/12.1/KDE4.7.4/i586
[2015-02-23 14:42:50.181129] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-3: remote operation failed: File exists. Path: /linux/suse/12.1/KDE4.7.4/i586
[2015-02-23 14:42:50.181160] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-2: remote operation failed: File exists. Path: /linux/suse/12.1/KDE4.7.4/i586
[2015-02-23 14:42:50.319213] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-3: remote operation failed: Directory not empty
[2015-02-23 14:42:50.319762] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-2: remote operation failed: Directory not empty
[2015-02-23 14:42:50.320501] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-0: remote operation failed: File exists. Path: /linux/suse/12.1/src-oss/suse/src
[2015-02-23 14:42:50.320552] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-1: remote operation failed: File exists. Path: /linux/suse/12.1/src-oss/suse/src
[2015-02-23 14:42:50.320842] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-2: remote operation failed: File exists. Path: /linux/suse/12.1/src-oss/suse/src
[2015-02-23 14:42:50.320884] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-3: remote operation failed: File exists. Path: /linux/suse/12.1/src-oss/suse/src
[2015-02-23 14:42:50.438982] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-3: remote operation failed: Directory not empty
[2015-02-23 14:42:50.439347] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-2: remote operation failed: Directory not empty
[2015-02-23 14:42:50.440235] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-0: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/noarch
[2015-02-23 14:42:50.440344] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-1: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/noarch
[2015-02-23 14:42:50.440603] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-2: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/noarch
[2015-02-23 14:42:50.440665] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-3: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/noarch
[2015-02-23 14:42:50.680827] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-2: remote operation failed: Directory not empty
[2015-02-23 14:42:50.681721] W [client-rpc-fops.c:696:client3_3_rmdir_cbk] 0-md1-client-3: remote operation failed: Directory not empty
[2015-02-23 14:42:50.682482] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-3: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/i586
[2015-02-23 14:42:50.682517] W [client-rpc-fops.c:322:client3_3_mkdir_cbk] 0-md1-client-2: remote operation failed: File exists. Path: /linux/suse/12.1/oss/suse/i586
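If it helps, I can also list the affected directories directly on the brick backends of the servers that report the ENOTEMPTY (assuming the md1-client-N numbering follows the brick order in "gluster volume info", so client-2/3 would correspond to the tsunami3/tsunami4 pair), along these lines:

  # Run on each brick server; the path is the brick path plus the
  # directory that rm -rf could not remove.
  ls -la /data/glusterfs/md1/brick1/linux/suse/12.1/KDE4.7.4/i586
  ls -la /data/glusterfs/md1/brick1/linux/suse/12.1/oss/suse/i586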
Thanks,
A.
On Monday 23 February 2015 20:06:17, Ravishankar N wrote:
On 02/23/2015 07:04 PM, Alessandro Ipe wrote:
Hi Ravi,
gluster volume status md1 returns
Status of volume: md1
Gluster process                               Port    Online  Pid
------------------------------------------------------------------------------
Brick tsunami1:/data/glusterfs/md1/brick1     49157   Y       2260
Brick tsunami2:/data/glusterfs/md1/brick1     49152   Y       2320
Brick tsunami3:/data/glusterfs/md1/brick1     49156   Y       20715
Brick tsunami4:/data/glusterfs/md1/brick1     49156   Y       10544
Brick tsunami5:/data/glusterfs/md1/brick1     49152   Y       12588
Brick tsunami6:/data/glusterfs/md1/brick1     49152   Y       12242
Self-heal Daemon on localhost                 N/A     Y       2336
Self-heal Daemon on tsunami2                  N/A     Y       2359
Self-heal Daemon on tsunami5                  N/A     Y       27619
Self-heal Daemon on tsunami4                  N/A     Y       12318
Self-heal Daemon on tsunami3                  N/A     Y       19118
Self-heal Daemon on tsunami6                  N/A     Y       27650

Task Status of Volume md1
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 9dfee1a2-49ac-4766-bdb6-00de5e5883f6
Status               : completed
so it seems that all brick servers are up.
gluster volume heal md1 info returns
Brick tsunami1.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami2.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami3.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami4.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami5.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0

Brick tsunami6.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0
Should I run "gluster volume heal md1 full" ?
Hi Alessandro,
Looks like there are no pending self-heals, so no need to run the
heal command. Can you share the output of the client (mount) log
when you get the ENOTEMPTY during the rm -rf?
What version of gluster are you using?
Thanks,
Ravi
Thanks,
A.
On Monday 23 February 2015 18:12:43, Ravishankar N wrote:
On 02/23/2015 05:42 PM, Alessandro Ipe wrote:
Hi,
We have an "md1" volume under gluster 3.5.3, spread over 6 servers and configured as distributed-replicated. When trying, on a client through a fuse mount (which happens to also be a brick server), to recursively delete (as root) a directory with "rm -rf /home/.md1/linux/suse/12.1", I get the error messages
rm: cannot remove ‘/home/.md1/linux/suse/12.1/KDE4.7.4/i586’: Directory not empty
rm: cannot remove ‘/home/.md1/linux/suse/12.1/src-oss/suse/src’: Directory not empty
rm: cannot remove ‘/home/.md1/linux/suse/12.1/oss/suse/noarch’: Directory not empty
rm: cannot remove ‘/home/.md1/linux/suse/12.1/oss/suse/i586’: Directory not empty
(The same occurs as an unprivileged user, but with "Permission denied".)
while a "ls -Ral /home/.md1/linux/suse/12.1"
gives me
/home/.md1/linux/suse/12.1:
total 0
drwxrwxrwx 5 gerb users 151 Feb 20 16:22 .
drwxr-xr-x 6 gerb users 245 Feb 23 12:55 ..
drwxrwxrwx 3 gerb users 95 Feb 23 13:03 KDE4.7.4
drwxrwxrwx 3 gerb users 311 Feb 20 16:57 oss
drwxrwxrwx 3 gerb users 86 Feb 20 16:20 src-oss
/home/.md1/linux/suse/12.1/KDE4.7.4:
total 28
drwxrwxrwx 3 gerb users 95 Feb 23 13:03 .
drwxrwxrwx 5 gerb users 151 Feb 20 16:22 ..
d--------- 2 root root 61452 Feb 23 13:03 i586
/home/.md1/linux/suse/12.1/KDE4.7.4/i586:
total 28
d--------- 2 root root 61452 Feb 23 13:03 .
drwxrwxrwx 3 gerb users 95 Feb 23 13:03 ..
/home/.md1/linux/suse/12.1/oss:
total 0
drwxrwxrwx 3 gerb users 311 Feb 20 16:57 .
drwxrwxrwx 5 gerb users 151 Feb 20 16:22 ..
drwxrwxrwx 4 gerb users 90 Feb 23 13:03 suse
/home/.md1/linux/suse/12.1/oss/suse:
total 536
drwxrwxrwx 4 gerb users 90 Feb 23 13:03 .
drwxrwxrwx 3 gerb users 311 Feb 20 16:57 ..
d--------- 2 root root 368652 Feb 23 13:03 i586
d--------- 2 root root 196620 Feb 23 13:03 noarch
/home/.md1/linux/suse/12.1/oss/suse/i586:
total 360
d--------- 2 root root 368652 Feb 23 13:03 .
drwxrwxrwx 4 gerb users 90 Feb 23 13:03 ..
/home/.md1/linux/suse/12.1/oss/suse/noarch:
total 176
d--------- 2 root root 196620 Feb 23 13:03 .
drwxrwxrwx 4 gerb users 90 Feb 23 13:03 ..
/home/.md1/linux/suse/12.1/src-oss:
total 0
drwxrwxrwx 3 gerb users 86 Feb 20 16:20 .
drwxrwxrwx 5 gerb users 151 Feb 20 16:22 ..
drwxrwxrwx 3 gerb users 48 Feb 23 13:03 suse
/home/.md1/linux/suse/12.1/src-oss/suse:
total 220
drwxrwxrwx 3 gerb users 48 Feb 23 13:03 .
drwxrwxrwx 3 gerb users 86 Feb 20 16:20 ..
d--------- 2 root root 225292 Feb 23 13:03 src
/home/.md1/linux/suse/12.1/src-oss/suse/src:
total 220
d--------- 2 root root 225292 Feb 23 13:03 .
drwxrwxrwx 3 gerb users 48 Feb 23 13:03 ..
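The directories that refuse to be removed are exactly the ones showing up as d--------- and owned by root. If useful, I could also dump their GlusterFS extended attributes directly on each brick (a rough sketch below; the trusted.* attributes live on the brick backend rather than on the fuse mount):

  # Run on each brick server against its local copy of the directory.
  getfattr -d -m . -e hex /data/glusterfs/md1/brick1/linux/suse/12.1/KDE4.7.4/i586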
Is there a cure, such as manually forcing a heal on that directory?
Are all bricks up? Are there any pending self-heals? Does `gluster volume heal md1 info` show any output? If it does, run `gluster volume heal md1` to manually trigger a heal.
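That is, roughly the following, run from any of the servers:

  # Check that every brick process is online
  gluster volume status md1

  # Show any entries still pending self-heal
  gluster volume heal md1 info

  # If there are pending entries, trigger the heal
  gluster volume heal md1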
-Ravi
Many thanks,
Alessandro.
gluster volume info md1 outputs:
Volume Name: md1
Type: Distributed-Replicate
Volume ID: 6da4b915-1def-4df4-a41c-2f3300ebf16b
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: tsunami1:/data/glusterfs/md1/brick1
Brick2: tsunami2:/data/glusterfs/md1/brick1
Brick3: tsunami3:/data/glusterfs/md1/brick1
Brick4: tsunami4:/data/glusterfs/md1/brick1
Brick5: tsunami5:/data/glusterfs/md1/brick1
Brick6: tsunami6:/data/glusterfs/md1/brick1
Options Reconfigured:
performance.write-behind: on
performance.write-behind-window-size: 4MB
performance.flush-behind: off
performance.io-thread-count: 64
performance.cache-size: 512MB
nfs.disable: on
features.quota: off
cluster.read-hash-mode: 2
server.allow-insecure: on
cluster.lookup-unhashed: off
--
Dr. Ir. Alessandro Ipe
Department of Observations        Tel.   +32 2 373 06 31
Remote Sensing from Space         Fax.   +32 2 374 67 88
Royal Meteorological Institute
Avenue Circulaire 3               Email: Alessandro.Ipe@xxxxxxxx
B-1180 Brussels, Belgium          Web:   http://gerb.oma.be