rm -rf some_dir results in "Directory not empty"

Hi,

We have an "md1" volume under GlusterFS 3.5.3, spread over 6 servers and configured as distributed-replicated. When I try, on a client mounted through FUSE (a machine which is also one of the brick servers), to recursively delete a directory as root with "rm -rf /home/.md1/linux/suse/12.1", I get these error messages:

rm: cannot remove ‘/home/.md1/linux/suse/12.1/KDE4.7.4/i586’: Directory not empty
rm: cannot remove ‘/home/.md1/linux/suse/12.1/src-oss/suse/src’: Directory not empty
rm: cannot remove ‘/home/.md1/linux/suse/12.1/oss/suse/noarch’: Directory not empty
rm: cannot remove ‘/home/.md1/linux/suse/12.1/oss/suse/i586’: Directory not empty

(The same occurs as an unprivileged user, but with "Permission denied".)

Meanwhile, "ls -Ral /home/.md1/linux/suse/12.1" gives me:

/home/.md1/linux/suse/12.1:
total 0
drwxrwxrwx 5 gerb users 151 Feb 20 16:22 .
drwxr-xr-x 6 gerb users 245 Feb 23 12:55 ..
drwxrwxrwx 3 gerb users 95 Feb 23 13:03 KDE4.7.4
drwxrwxrwx 3 gerb users 311 Feb 20 16:57 oss
drwxrwxrwx 3 gerb users 86 Feb 20 16:20 src-oss

/home/.md1/linux/suse/12.1/KDE4.7.4:
total 28
drwxrwxrwx 3 gerb users 95 Feb 23 13:03 .
drwxrwxrwx 5 gerb users 151 Feb 20 16:22 ..
d--------- 2 root root 61452 Feb 23 13:03 i586

/home/.md1/linux/suse/12.1/KDE4.7.4/i586:
total 28
d--------- 2 root root 61452 Feb 23 13:03 .
drwxrwxrwx 3 gerb users 95 Feb 23 13:03 ..

/home/.md1/linux/suse/12.1/oss:
total 0
drwxrwxrwx 3 gerb users 311 Feb 20 16:57 .
drwxrwxrwx 5 gerb users 151 Feb 20 16:22 ..
drwxrwxrwx 4 gerb users 90 Feb 23 13:03 suse

/home/.md1/linux/suse/12.1/oss/suse:
total 536
drwxrwxrwx 4 gerb users 90 Feb 23 13:03 .
drwxrwxrwx 3 gerb users 311 Feb 20 16:57 ..
d--------- 2 root root 368652 Feb 23 13:03 i586
d--------- 2 root root 196620 Feb 23 13:03 noarch

/home/.md1/linux/suse/12.1/oss/suse/i586:
total 360
d--------- 2 root root 368652 Feb 23 13:03 .
drwxrwxrwx 4 gerb users 90 Feb 23 13:03 ..

/home/.md1/linux/suse/12.1/oss/suse/noarch:
total 176
d--------- 2 root root 196620 Feb 23 13:03 .
drwxrwxrwx 4 gerb users 90 Feb 23 13:03 ..

/home/.md1/linux/suse/12.1/src-oss:
total 0
drwxrwxrwx 3 gerb users 86 Feb 20 16:20 .
drwxrwxrwx 5 gerb users 151 Feb 20 16:22 ..
drwxrwxrwx 3 gerb users 48 Feb 23 13:03 suse

/home/.md1/linux/suse/12.1/src-oss/suse:
total 220
drwxrwxrwx 3 gerb users 48 Feb 23 13:03 .
drwxrwxrwx 3 gerb users 86 Feb 20 16:20 ..
d--------- 2 root root 225292 Feb 23 13:03 src

/home/.md1/linux/suse/12.1/src-oss/suse/src:
total 220
d--------- 2 root root 225292 Feb 23 13:03 .
drwxrwxrwx 3 gerb users 48 Feb 23 13:03 ..
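
For reference, here is a sketch of how the same directory could be checked directly on the backend (assuming /home/.md1 is the FUSE mount of md1, so that this path maps onto the brick paths listed in the volume info below):

# on each of the six brick servers, list the backend copy of one of the
# directories that rm refuses to remove
ls -la /data/glusterfs/md1/brick1/linux/suse/12.1/KDE4.7.4/i586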

 

 

Is there a cure, such as manually forcing a heal on that directory?
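
For instance, would something along these lines be the right approach (just a sketch, based on the self-heal commands I understand exist in 3.5)?

# show entries the self-heal daemon still considers pending
gluster volume heal md1 info
# trigger a full crawl-and-heal of the volume
gluster volume heal md1 full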

 

 

Many thanks,

Alessandro.

 

 

"gluster volume info md1" outputs:

Volume Name: md1
Type: Distributed-Replicate
Volume ID: 6da4b915-1def-4df4-a41c-2f3300ebf16b
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: tsunami1:/data/glusterfs/md1/brick1
Brick2: tsunami2:/data/glusterfs/md1/brick1
Brick3: tsunami3:/data/glusterfs/md1/brick1
Brick4: tsunami4:/data/glusterfs/md1/brick1
Brick5: tsunami5:/data/glusterfs/md1/brick1
Brick6: tsunami6:/data/glusterfs/md1/brick1
Options Reconfigured:
performance.write-behind: on
performance.write-behind-window-size: 4MB
performance.flush-behind: off
performance.io-thread-count: 64
performance.cache-size: 512MB
nfs.disable: on
features.quota: off
cluster.read-hash-mode: 2
server.allow-insecure: on
cluster.lookup-unhashed: off
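
If it helps, I could also dump the extended attributes of one of the stuck directories directly on a brick server, e.g. (a sketch; I am assuming the relevant trusted.* attributes and the same backend path mapping as above):

# run on a brick server, against the backend path of a stuck directory
getfattr -d -m . -e hex /data/glusterfs/md1/brick1/linux/suse/12.1/KDE4.7.4/i586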

 

