On 11/25/2014 05:59 AM, Derick Turner wrote:
The Gluster version is the standard Ubuntu 14.04 LTS repo version:
glusterfs 3.4.2 built on Jan 14 2014 18:05:37
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc.
<http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the
GNU General Public License.
The gluster volume heal <volume> info command produces a lot
of output. There are a number of <gfid:hashnumber> entries
for both nodes and a few directories in the list as well.
I checked the directories on both nodes and the files appear to be
the same on each, so I resolved those issues. There are, however,
still a large number of gfid files listed by the gluster volume
heal eukleia info command. There are also a large number of gfid
files and one regular file listed by gluster volume heal eukleia
info split-brain. That file no longer exists on either of the
bricks or on the mounted filesystems.
Is there any way to clear these down or resolve this?
Could you check how many files are reported by the following
command? It needs to be executed against the .glusterfs directory
on the brick:
find /your/brick/directory/.glusterfs -links 1 -type f
All such files need to be deleted or renamed to some other place, I
guess.
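As a rough sketch of the rename approach (the backup directory is
just a placeholder, and /your/brick/directory should be the real
brick path), the matching files could be moved aside rather than
deleted outright:

mkdir -p /root/gfid-orphans
find /your/brick/directory/.glusterfs -links 1 -type f -exec mv -v {} /root/gfid-orphans/ \;

That way a copy is kept around in case any of them turn out to be
needed after all.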
Pranith
Thanks
Derick
On 24/11/14 05:32, Pranith Kumar Karampuri wrote:
On 11/21/2014 05:33 AM, Derick Turner wrote:
I have a new setup which has been
running for a few weeks. Due to a configuration issue the
self-heal wasn't working properly and I ended up with the
system in a bit of a state. I've been chasing down issues and
it should (fingers crossed) be back and stable again. One
issue which seems to keep recurring is that on one of the
client bricks I get a load of gfid files that don't exist
anywhere else. The inodes of these files only point to the
gfid file and it appears that they keep coming back.
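For example, the hard link count on one of them can be
spot-checked with something like the following (run from the
brick root; a count of 1 means only the .glusterfs entry points
at that inode):

stat -c '%h %n' .glusterfs/e5/d2/e5d25375-ecb8-47d2-833f-0586b659f98a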
The volume is set up as follows:
root@vader:/gluster/eukleiahome/intertrust/moodledata# gluster volume info eukleiaweb
Volume Name: eukleiaweb
Type: Replicate
Volume ID: d8a29f07-7f3e-46a3-9ec4-4281038267ce
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: lando:/gluster/eukleiahome
Brick2: vader:/gluster/eukleiahome
and the file systems are mounted via NFS.
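For reference, a typical NFS mount of the volume (Gluster's
built-in NFS server speaks NFS v3 over TCP) would look something
like the line below; the mount point and exact options here are
only illustrative:

mount -t nfs -o vers=3,tcp lando:/eukleiaweb /mnt/eukleiaweb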
In the logs of the host for Brick 1 I get the following, for
example:
[2014-11-20 23:53:55.910705] W [client-rpc-fops.c:471:client3_3_open_cbk] 0-eukleiaweb-client-1: remote operation failed: No such file or directory. Path: <gfid:e5d25375-ecb8-47d2-833f-0586b659f98a> (00000000-0000-0000-0000-000000000000)
[2014-11-20 23:53:55.910721] E [afr-self-heal-data.c:1270:afr_sh_data_open_cbk] 0-eukleiaweb-replicate-0: open of <gfid:e5d25375-ecb8-47d2-833f-0586b659f98a> failed on child eukleiaweb-client-1 (No such file or directory)
[2014-11-20 23:53:55.921425] W [client-rpc-fops.c:1538:client3_3_inodelk_cbk] 0-eukleiaweb-client-1: remote operation failed: No such file or directory
When I check this gfid, it exists on Brick 1 but not on
Brick 2 (which I am assuming is due to the error above).
Additionally, when I check for the file that this GFID
references, it doesn't point anywhere, i.e.:
Which version of gluster are you using? Could you check if there
are any directories that need to be healed, using "gluster
volume heal <volname> info"?
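It would also help to dump the extended attributes of one of the
problem gfid files on both bricks, since the trusted.afr changelog
xattrs are what the replicate translator uses to decide what needs
healing. Something along these lines, taking the gfid from the log
above as an example (output will differ per file):

getfattr -d -m . -e hex /gluster/eukleiahome/.glusterfs/e5/d2/e5d25375-ecb8-47d2-833f-0586b659f98a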
Pranith
root@lando:/gluster/eukleiahome# find . -samefile .glusterfs/e5/d2/e5d25375-ecb8-47d2-833f-0586b659f98a
./.glusterfs/e5/d2/e5d25375-ecb8-47d2-833f-0586b659f98a
root@lando:/gluster/eukleiahome# file .glusterfs/e5/d2/e5d25375-ecb8-47d2-833f-0586b659f98a
.glusterfs/e5/d2/e5d25375-ecb8-47d2-833f-0586b659f98a: JPEG image data, EXIF standard
I have tried removing these files using rm
.glusterfs/e5/d2/e5d25375-ecb8-47d2-833f-0586b659f98a, but
either all of the occurrences haven't been logged in
/var/log/glusterfs/glusterfsd.log (and I am not clearing out all
that there are) or they are reappearing.
Firstly, is this something to worry about? Secondly, should I
be able to simply get rid of them (and I'm mistaken about them
reappearing) and, if so, is simply removing them the best
method?
Thanks
Derick
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users