On Mon, May 1, 2017 at 11:20 PM, Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx> wrote:
I am sorry, at the moment, with the given information, I am not able to wrap my head around the solution you are trying to suggest :-(.

> 2017-05-01 19:36 GMT+02:00 Pranith Kumar Karampuri <pkarampu@xxxxxxxxxx>:
>> To know GFID of file1 you must know where the file resides so that you can
>> do getxattr trusted.gfid on the file. So storing server/brick location on
>> gfid is not getting us much more information than what we already have.
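(For context, on the brick backend the GFID is just an xattr; a rough Python sketch of reading it, assuming root access and a made-up brick path:)

    # Sketch only: read a file's GFID straight from its brick backend path.
    # trusted.* xattrs are only readable by root; the brick path below is made up.
    import os
    import uuid

    def read_gfid(brick_path):
        raw = os.getxattr(brick_path, "trusted.gfid")   # 16 raw bytes
        return uuid.UUID(bytes=raw)

    print(read_gfid("/bricks/brick1/vol/tmp/my/file"))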
> It was an example. You can use the same xattr solution based on a hash.
> A full path is unique within a volume (obviously, you can't have two
> "/tmp/my/file" on the same volume), so hashing it with something like
> SHA1("/tmp/my/file") will give you a unique name
> (50b73d9c5dfda264d3878860ed7b1295e104e8ae).
> You can use that unique file name (stored somewhere like
> ".metadata/50b73d9c5dfda264d3878860ed7b1295e104e8ae") to store the
> xattr with the proper file locations across the cluster.
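If I try to sketch what I think you mean (the xattr name, the metadata directory and the use of the user. namespace below are all made up, just for illustration), it would look something like this:

    # Sketch of the proposal as I understand it: index files by SHA1(full path)
    # and keep their locations as an xattr on a small ".metadata" entry.
    # "user.glusterfs.locations" and METADATA_DIR are made-up names.
    import hashlib
    import os

    METADATA_DIR = "/bricks/brick1/vol/.metadata"

    def meta_path(volume_path):
        digest = hashlib.sha1(volume_path.encode()).hexdigest()
        return os.path.join(METADATA_DIR, digest)

    def record_locations(volume_path, bricks):
        entry = meta_path(volume_path)
        open(entry, "a").close()                 # ensure the entry exists
        os.setxattr(entry, "user.glusterfs.locations", ",".join(bricks).encode())

    def lookup_locations(volume_path):
        return os.getxattr(meta_path(volume_path),
                           "user.glusterfs.locations").decode().split(",")

    record_locations("/tmp/my/file", ["server1:/bricks/b1", "server2:/bricks/b2"])
    print(lookup_locations("/tmp/my/file"))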
The file can be renamed, and then we lose the link because the hash will be different. Anyway, all these kinds of problems are already solved in the distribute layer.
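To make that concrete, the lookup key changes as soon as the file is renamed, so the old ".metadata" entry is orphaned:

    # After a rename, SHA1(new path) != SHA1(old path), so the entry
    # created for the old path is never found again.
    import hashlib

    old = hashlib.sha1(b"/tmp/my/file").hexdigest()
    new = hashlib.sha1(b"/tmp/my/renamed-file").hexdigest()
    print(old == new)   # False: the lookup key changed with the rename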
> As long as you sync the ".metadata" directory across the trusted pool
> (or across all members hosting the affected volume),
> you should be able to get the proper file location by looking up the xattr.
> This is just a very basic and stupid POC; I'm just trying to explain
> my reasoning.
At the moment, brick-splitting with inversion of afr/dht has some merit in my mind, with a tilt towards any solution that avoids this inversion and still gets the desired benefits.
--
Pranith
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users