Re: Hundreds of duplicate files

Thanks Joe, I've read your blog post as well as your post regarding the .glusterfs directory.
 
I found some unneeded duplicate files which were not being read properly, so I deleted the link file from the brick. This always removes the duplicate from the directory listing, but the file does not always become readable. If I also delete the associated file in the .glusterfs directory on that brick, some more files become readable. However, this still doesn't work for all files.
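For reference, this is roughly how I'm locating the .glusterfs entry for a given brick file (paths below are just examples from my setup). The entry is a hardlink whose path is derived from the file's trusted.gfid xattr:

# example placeholders; adjust to your brick layout
BRICK=/data/brick1
FILE=path/to/duplicate

# read the gfid as hex and strip the leading "0x"
gfid=$(getfattr -n trusted.gfid -e hex --only-values "$BRICK/$FILE" | cut -c3-)

# .glusterfs stores a hardlink at <first 2 hex chars>/<next 2>/<full uuid>
echo "$BRICK/.glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid:0:8}-${gfid:8:4}-${gfid:12:4}-${gfid:16:4}-${gfid:20:12}"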
 
I know the file on the brick is not corrupt as it can be read directly from the brick directory.
 
Hopefully you have some other ideas I could try.
 
Otherwise, I may proceed by writing a script to handle the files one by one: move the actual file off the brick to a temporary location, remove all references to the file on the bricks, then copy the file back onto the mounted volume, roughly as sketched below. It's not ideal, but hopefully this is a one-time occurrence.
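A rough sketch of what that script might look like (the brick path, mount point, and file list are placeholders, and this ignores anything replication-specific):

#!/bin/bash
# rough sketch only; BRICK, MNT and files-to-fix.txt are placeholders
BRICK=/data/brick1        # brick root
MNT=/mnt/glustervol       # fuse mount of the volume
TMP=/var/tmp/gluster-rescue
mkdir -p "$TMP"

while IFS= read -r f; do
    # 1. copy the real data off the brick to a safe location
    cp -a "$BRICK/$f" "$TMP/rescued"
    # 2. remove the brick file and its .glusterfs hardlink
    #    (path derived from the gfid, as above); on a multi-brick
    #    volume, repeat this on every brick holding a copy or linkfile
    gfid=$(getfattr -n trusted.gfid -e hex --only-values "$BRICK/$f" | cut -c3-)
    rm -f "$BRICK/$f" \
          "$BRICK/.glusterfs/${gfid:0:2}/${gfid:2:2}/${gfid:0:8}-${gfid:8:4}-${gfid:12:4}-${gfid:16:4}-${gfid:20:12}"
    # 3. write the file back through the mount so gluster recreates
    #    it with a fresh gfid and a correct layout
    cp -a "$TMP/rescued" "$MNT/$f"
done < files-to-fix.txt   # list of volume-relative paths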
 
Tom
 
--------- Original Message ---------
Subject: Re: Hundreds of duplicate files
From: "Joe Julian" <joe@xxxxxxxxxxxxxxxx>
Date: 12/27/14 3:28 pm
To: tbenzvi@xxxxxxxxxxxxxxx, gluster-users@xxxxxxxxxxx

The linkfile you showed earlier was perfect.

Check this article on my blog for the details on how dht works and how to calculate hashes: http://joejulian.name/blog/dht-misses-are-expensive/
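For reference, the dht metadata the post discusses can be inspected straight off a brick (paths below are examples): the hash layout ranges live in an xattr on each directory, and linkfiles carry a linkto xattr naming the subvolume that holds the data:

# layout ranges assigned to this directory on this brick (hex)
getfattr -n trusted.glusterfs.dht -e hex /data/brick1/somedir

# where a linkfile points (a subvolume name like "myvol-client-2")
getfattr -n trusted.glusterfs.dht.linkto -e text /data/brick1/somedir/somefile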

On December 27, 2014 3:18:00 PM PST, tbenzvi@xxxxxxxxxxxxxxx wrote:
That didn't fix it, unfortunately. In fact, I've done a full rebalance after initially discovering the problem and after updating Gluster, but nothing changed.
 
I don't know too much about how Gluster works internally; is it possible to compute the hash for each duplicate filename, figure out which brick it belongs on and find where it actually resides, then recreate the link file or update the linkto attribute? Assuming broken link files are the problem.
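If they are, I imagine recreating a linkfile by hand would look something like the following (all paths and the subvolume name are made up, and my understanding is the linkfile's gfid would also need to match the data file's, which is why I'd rather have the right tool do it):

# hypothetical paths and subvolume name throughout
touch /data/brick1/path/to/file
chmod 1000 /data/brick1/path/to/file   # zero-size, mode ---------T
setfattr -n trusted.glusterfs.dht.linkto -v "myvol-client-2" \
    /data/brick1/path/to/file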
 
--------- Original Message ---------
Subject: Re: Hundreds of duplicate files
From: "Joe Julian" <joe@xxxxxxxxxxxxxxxx>
Date: 12/27/14 1:55 pm
To: tbenzvi@xxxxxxxxxxxxxxx, gluster-users@xxxxxxxxxxx

I'm wondering if this is from a failed rebalance that corrupted the layout. In a directory that has duplicates, do "setfattr -n trusted.distribute.fix.layout -v 1 ."

If that fixes it, do a rebalance...fix-layout
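For reference, that's the following, with your volume name substituted:

gluster volume rebalance myvol fix-layout start
gluster volume rebalance myvol status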

On December 27, 2014 12:38:01 PM PST, tbenzvi@xxxxxxxxxxxxxxx wrote:
Ok, I am really tearing my hair out here. I tried doing this manually for several other files just to be sure, and in these cases it removed the duplicate file from the directory listing, but the file still cannot be read. Reading directly from the brick works fine.
 
--------- Original Message ---------
Subject: Re: Hundreds of duplicate files
From: "Joe Julian" <joe@xxxxxxxxxxxxxxxx>
Date: 12/27/14 12:01 pm
To: gluster-users@xxxxxxxxxxx

Should be safe.

Here's what I've done in the past to clean up rogue dht link files (not that yours looked rogue though):

find $brick_root -type f -size 0 -perm 1000 -exec /bin/rm {} \;
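If you want to preview what that would remove first, the same match with -ls instead of -exec (assuming GNU find):

find $brick_root -type f -size 0 -perm 1000 -ls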
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
