Re: Nameless lookup in meta namespace

On 08/12/2016 01:31 PM, Niels de Vos wrote:
> On Fri, Aug 12, 2016 at 09:54:30AM +0200, Niels de Vos wrote:
>> On Fri, Aug 12, 2016 at 11:29:34AM +0530, Mohammed Rafi K C wrote:
>>> Hi,
>>>
>>> As you probably know, the meta xlator provides a /proc-like virtual
>>> namespace that can be used to get metadata about the mount process.
>>> We are trying to enhance the meta xlator to support more features
>>> and to support other protocols.
>>>
>>> Currently meta generates gfids on its own and stores them in the
>>> inode. But when a graph switch happens, fuse sends a nameless lookup
>>> with a fresh inode from the new graph to resolve the gfid. meta does
>>> not recognize the gfid even though it generated it, because all the
>>> information is stored in the inode_ctx of the previous inode, so the
>>> nameless lookup fails.
>>>
>>> Basically, we need a way to resolve a gfid from within the meta
>>> xlator. Alternatively, since the meta xlator provides metadata about
>>> the process, we could restrict access on a per-graph basis: if a
>>> graph change happens, we treat the directory as deleted and
>>> recreated with a different gfid, and it has to be accessed again to
>>> get information from the new graph.
>>>
>>> Thoughts?
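To make the failure mode above concrete, here is a minimal sketch of
how per-inode state interacts with a graph switch, using the
libglusterfs inode_ctx API (the struct meta_local layout here is
hypothetical; the real meta xlator keeps more state):

#include "xlator.h"
#include "inode.h"

/* Hypothetical per-inode state. */
struct meta_local {
        unsigned char gfid[16];
};

static int
meta_remember (xlator_t *this, inode_t *inode, struct meta_local *st)
{
        /* The state is tied to THIS inode object, which belongs to
         * the current graph's inode table. */
        return inode_ctx_put (inode, this, (uint64_t)(uintptr_t)st);
}

static struct meta_local *
meta_recall (xlator_t *this, inode_t *inode)
{
        uint64_t value = 0;

        /* After a graph switch, fuse resolves the gfid with a fresh
         * inode from the new graph: this get finds nothing, and the
         * nameless lookup fails. */
        if (inode_ctx_get (inode, this, &value) < 0)
                return NULL;

        return (struct meta_local *)(uintptr_t)value;
}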
>> How about taking the path+filename in the meta xlator and generating
>> the GFID based on that? meta should not need to manage hardlinks, so
>> there won't be multiple filenames for the same GFID.
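A rough sketch of that idea: derive the gfid deterministically from the
virtual path, so that every graph (and every server, after a fail-over)
computes the same gfid for the same meta file. The hash below is just a
self-contained stand-in (two seeded FNV-1a passes, the second seed an
arbitrary constant); a real implementation would more likely use a
UUIDv5-style name-based scheme:

#include <stdint.h>
#include <string.h>

static uint64_t
fnv1a64 (const char *s, uint64_t seed)
{
        uint64_t h = seed;

        while (*s) {
                h ^= (unsigned char)*s++;
                h *= 0x100000001b3ULL;
        }
        return h;
}

static void
meta_gfid_from_path (const char *path, unsigned char gfid[16])
{
        /* Two independently seeded passes give 128 bits. */
        uint64_t hi = fnv1a64 (path, 0xcbf29ce484222325ULL);
        uint64_t lo = fnv1a64 (path, 0x9e3779b97f4a7c15ULL);

        memcpy (gfid, &hi, 8);
        memcpy (gfid + 8, &lo, 8);

        /* Stamp version/variant bits, UUIDv5-style, so these gfids
         * are distinguishable from randomly generated ones. */
        gfid[6] = (gfid[6] & 0x0f) | 0x50;
        gfid[8] = (gfid[8] & 0x3f) | 0x80;
}

Note that such a mapping is one-way: path to gfid is trivial, but gfid
back to path still needs a lookup table or a fixed, enumerable
namespace, which is exactly the GFID-to-inode problem below.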
> Well, of course this is not complete... (/me blames the lack of coffee)
>
> There needs to be a way to go from GFID to inode, even if the inode
> was never looked up in the meta xlator. The filehandle that an
> NFS-client receives from an NFS-server should stay valid across a
> server reboot, or a fail-over to another NFS-server.
I hadn't thought about server reboot and fail-over; that is a good catch.


>
> Without pushing the gfid to the bricks, I'm not sure how else it is
> possible. Maybe meta should create the files on the bricks, but only as
> empty files, and handle the read/write by itself without winding along.
We can think about the solution; I will brainstorm it.
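As a starting point for that brainstorming, a rough sketch of the
empty-files idea: the file exists on the bricks (so the gfid stays
resolvable server-side), but meta answers reads from its own generated
contents and never winds the fop down. meta_get_contents() is a
hypothetical helper, and error handling is elided:

#include <string.h>

#include "xlator.h"

/* Hypothetical helper that generates/fetches the virtual contents. */
extern const char *meta_get_contents (xlator_t *this, inode_t *inode);

int
meta_readv (call_frame_t *frame, xlator_t *this, fd_t *fd, size_t size,
            off_t offset, uint32_t flags, dict_t *xdata)
{
        struct iobuf  *iobuf  = NULL;
        struct iobref *iobref = NULL;
        struct iovec   vec    = {0,};
        struct iatt    stbuf  = {0,};
        const char    *data   = meta_get_contents (this, fd->inode);
        size_t         len    = strlen (data);

        if ((size_t)offset < len) {
                if (size > len - offset)
                        size = len - offset;

                iobuf  = iobuf_get2 (this->ctx->iobuf_pool, size);
                iobref = iobref_new ();
                iobref_add (iobref, iobuf);

                memcpy (iobuf_ptr (iobuf), data + offset, size);
                vec.iov_base = iobuf_ptr (iobuf);
                vec.iov_len  = size;
        } else {
                size = 0;  /* read past EOF */
        }

        stbuf.ia_size = len;

        /* Unwind directly; the empty file on the brick is never read. */
        STACK_UNWIND_STRICT (readv, frame, size, 0, &vec, 1, &stbuf,
                             iobref, NULL);

        if (iobuf)
                iobuf_unref (iobuf);
        if (iobref)
                iobref_unref (iobref);

        return 0;
}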

Again, many thanks for the suggestion.


Rafi KC
>
> Niels

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel


