Re: Issue with removing directories after disabling quotas

On Monday 16 March 2015 05:04 PM, JF Le Fillâtre wrote:
Hello Vijay,

Mmmm sorry, I jumped the gun and deleted by hand all the files present
on the bricks, so I can't see if there's any link anywhere...

It's not the first time I have seen this. Very early in my tests I had
another case of files still present on the bricks but not visible in
Gluster, which I solved the same way. At that point I chalked it up to
my limited knowledge of GlusterFS and I assumed that I had misconfigured it.

What do you suspect it may be, and what should I look for if it
ever happens again?
We wanted to check the file attributes and xattrs on the bricks to see why some files were not deleted. Is this problem easy to reproduce? If so, could you please provide the test case?
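
For reference, this is the kind of check we would run on a leftover entry (the path under the brick is just a placeholder):

#stat /zfs/brick0/brick/path/to/leftover-file
#getfattr -d -m . -e hex /zfs/brick0/brick/path/to/leftover-file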

Thanks,
Vijay


Thanks!
JF



On 16/03/15 12:25, Vijaikumar M wrote:
Hi JF,

This may not be a quota issue. Can you please check whether any
linkto files exist on the bricks?
On nodes stor104 and stor106, can we get the output of the commands below:

#find /zfs/brick0/brick -print0 | xargs -0 ls -ld
#find /zfs/brick1/brick -print0 | xargs -0 ls -ld
#find /zfs/brick2/brick -print0 | xargs -0 ls -ld

#find /zfs/brick0/brick -print0 | xargs -0 getfattr -d -m . -e hex
#find /zfs/brick1/brick -print0 | xargs -0 getfattr -d -m . -e hex
#find /zfs/brick2/brick -print0 | xargs -0 getfattr -d -m . -e hex
(the -print0/-0 pair just keeps unusual file names from breaking the pipeline)
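
If it is only the DHT linkto files we are after, a quicker filter should also work. A sketch, assuming the usual DHT convention that linkto files are zero-byte, mode-1000 entries carrying the trusted.glusterfs.dht.linkto xattr:

#find /zfs/brick0/brick -type f -perm -1000 -size 0 -print0 | xargs -0 -r getfattr -n trusted.glusterfs.dht.linkto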


Thanks,
Vijay



On Monday 16 March 2015 04:18 PM, JF Le Fillâtre wrote:
Forgot to mention: the non-empty directories list files like this:

-?????????? ? ?    ?     ?            ? Hal8723APhyReg.h
-?????????? ? ?    ?     ?            ? Hal8723UHWImg_CE.h
-?????????? ? ?    ?     ?            ? hal_com.h
-?????????? ? ?    ?     ?            ? HalDMOutSrc8723A.h
-?????????? ? ?    ?     ?            ? HalHWImg8723A_BB.h
-?????????? ? ?    ?     ?            ? HalHWImg8723A_FW.h
-?????????? ? ?    ?     ?            ? HalHWImg8723A_MAC.h
-?????????? ? ?    ?     ?            ? HalHWImg8723A_RF.h
-?????????? ? ?    ?     ?            ? hal_intf.h
-?????????? ? ?    ?     ?            ? HalPwrSeqCmd.h
-?????????? ? ?    ?     ?            ? ieee80211.h
-?????????? ? ?    ?     ?            ? odm_debug.h
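
For what it's worth, the question marks mean that stat() failed on the
client for entries that readdir still returned. A quick way to see which
brick still holds them is to list the same directory on every brick; a
sketch, with <dir> standing for the affected path relative to the volume
root:

for h in stor104 stor106; do
    for b in 0 1 2; do
        ssh $h "ls -la /zfs/brick$b/brick/<dir>"
    done
done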

Thanks,
JF


On 16/03/15 11:45, JF Le Fillâtre wrote:
Hello all,

So, another day another issue. I was trying to play with quotas on my
volume:

================================================================================

[root@stor104 ~]# gluster volume status
Status of volume: live
Gluster process                        Port    Online    Pid
------------------------------------------------------------------------------

Brick stor104:/zfs/brick0/brick                49167    Y    13446
Brick stor104:/zfs/brick1/brick                49168    Y    13457
Brick stor104:/zfs/brick2/brick                49169    Y    13468
Brick stor106:/zfs/brick0/brick                49159    Y    14158
Brick stor106:/zfs/brick1/brick                49160    Y    14169
Brick stor106:/zfs/brick2/brick                49161    Y    14180
NFS Server on localhost                    2049    Y    13483
Quota Daemon on localhost                N/A    Y    13490
NFS Server on stor106                    2049    Y    14195
Quota Daemon on stor106                    N/A    Y    14202
Task Status of Volume live
------------------------------------------------------------------------------

Task                 : Rebalance
ID                   : 6bd03709-1f48-49a9-a215-d0a6e6f3ab1e
Status               : completed
================================================================================



Not sure if the "Quota Daemon on localhost -> N/A" is normal, but
that's another topic.

While the quotas were enabled, I ran a test, copying a whole tree
of small files (the Linux kernel sources) to the volume to see what
performance I would get, and it was really low. So I decided to
disable quotas again.
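
Sketching from memory, the disable was something like:

#gluster volume quota live disable

after which the status output no longer lists the quota daemons: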


================================================================================

[root@stor104 ~]# gluster volume status
Status of volume: live
Gluster process                        Port    Online    Pid
------------------------------------------------------------------------------

Brick stor104:/zfs/brick0/brick                49167    Y    13754
Brick stor104:/zfs/brick1/brick                49168    Y    13765
Brick stor104:/zfs/brick2/brick                49169    Y    13776
Brick stor106:/zfs/brick0/brick                49159    Y    14282
Brick stor106:/zfs/brick1/brick                49160    Y    14293
Brick stor106:/zfs/brick2/brick                49161    Y    14304
NFS Server on localhost                    2049    Y    13790
NFS Server on stor106                    2049    Y    14319
Task Status of Volume live
------------------------------------------------------------------------------

Task                 : Rebalance
ID                   : 6bd03709-1f48-49a9-a215-d0a6e6f3ab1e
Status               : completed
================================================================================



I remounted the volume from the client and tried deleting the
directory containing the sources, which gave me a very long list of
errors like this:


================================================================================

rm: cannot remove
‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/ftrace/test.d/kprobe’:
Directory not empty
rm: cannot remove
‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/ptrace’:
Directory not empty
rm: cannot remove
‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/rcutorture/configs/rcu/v0.0’:
Directory not empty
rm: cannot remove
‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/rcutorture/configs/rcu/v3.5’:
Directory not empty
rm: cannot remove
‘/glusterfs/live/linux-3.18.7/tools/testing/selftests/powerpc’:
Directory not empty
rm: cannot remove
‘/glusterfs/live/linux-3.18.7/tools/perf/scripts/python/Perf-Trace-Util’:
Directory not empty
rm: cannot remove
‘/glusterfs/live/linux-3.18.7/tools/perf/Documentation’: Directory
not empty
rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/perf/ui/tui’:
Directory not empty
rm: cannot remove
‘/glusterfs/live/linux-3.18.7/tools/perf/util/include’: Directory not
empty
rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/lib’: Directory
not empty
rm: cannot remove ‘/glusterfs/live/linux-3.18.7/tools/virtio’:
Directory not empty
rm: cannot remove ‘/glusterfs/live/linux-3.18.7/virt/kvm’: Directory
not empty
================================================================================



I did my homework on Google, but everything I found says that this
happens when the contents of the bricks have been modified locally.
That is definitely not the case here: I have *not* touched the
contents of the bricks.

So my question is: is it possible that disabling the quotas had some
side effect on Gluster's metadata? If so, what can I do to force
Gluster to rescan all the brick directories and "import" the
local files?
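
The only idea I have myself, and this is pure guesswork, is to force a
fresh lookup on every entry from a client mount in the hope that it makes
DHT heal the stale entries, along the lines of:

#find /glusterfs/live -print0 | xargs -0 stat >/dev/null

But I'd rather hear from someone who knows before running anything.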

GlusterFS version: 3.6.2

The setup of my volume:


================================================================================

[root@stor104 ~]# gluster volume info
Volume Name: live
Type: Distribute
Volume ID:
Status: Started
Number of Bricks: 6
Transport-type: tcp
Bricks:
Brick1: stor104:/zfs/brick0/brick
Brick2: stor104:/zfs/brick1/brick
Brick3: stor104:/zfs/brick2/brick
Brick4: stor106:/zfs/brick0/brick
Brick5: stor106:/zfs/brick1/brick
Brick6: stor106:/zfs/brick2/brick
Options Reconfigured:
features.quota: off
performance.readdir-ahead: on
nfs.volume-access: read-only
cluster.data-self-heal-algorithm: full
performance.strict-write-ordering: off
performance.strict-o-direct: off
performance.force-readdirp: off
performance.write-behind-window-size: 4MB
performance.io-thread-count: 32
performance.flush-behind: on
performance.client-io-threads: on
performance.cache-size: 32GB
performance.cache-refresh-timeout: 60
performance.cache-max-file-size: 4MB
nfs.disable: off
cluster.eager-lock: on
cluster.min-free-disk: 1%
server.allow-insecure: on
diagnostics.client-log-level: ERROR
diagnostics.brick-log-level: ERROR
================================================================================



It is mounted from the client with these fstab options:

================================================================================

stor104:live
defaults,backupvolfile-server=stor106,direct-io-mode=disable,noauto
================================================================================
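
That is, the full fstab entry looks something like this (mount point taken
from the error messages above):

stor104:live  /glusterfs/live  glusterfs  defaults,backupvolfile-server=stor106,direct-io-mode=disable,noauto  0 0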


Attached are the log files from stor104

Thanks a lot for any help!
JF




_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users






