Re: review request - Change the way client uuid is built

+Poornima

----- Original Message -----
> From: "Niels de Vos" <ndevos@xxxxxxxxxx>
> To: "Raghavendra Gowdappa" <rgowdapp@xxxxxxxxxx>
> Cc: "Gluster Devel" <gluster-devel@xxxxxxxxxxx>
> Sent: Wednesday, September 21, 2016 1:22:39 PM
> Subject: Re: review request - Change the way client uuid is built
> 
> On Wed, Sep 21, 2016 at 01:47:34AM -0400, Raghavendra Gowdappa wrote:
> > Hi all,
> > 
> > [1] might have implications across different components in the stack. Your
> > reviews are requested.
> > 
> > <commit-msg>
> > 
> > rpc : Change the way client uuid is built
> > 
> > Problem:
> > Today the main users of client uuid are the protocol layers, locks,
> > and leases. The protocol layers require each client uuid to be
> > unique, even across connects and disconnects. Locks and leases on the
> > server side also use the same client uuid, which changes across graph
> > switches and across file migrations; this makes graph switches and
> > file migrations tedious for locks and leases.
> > As of today, lock migration across a graph switch is client driven:
> > when a graph switches, the client reassociates all the locks (which
> > were associated with the old graph's client uuid) with the new
> > graph's client uuid. This means a flood of fops to get and set locks
> > for each fd. File migration across bricks becomes even more
> > difficult, as the client uuid for the same client is different on the
> > other brick.
> > 
> > The exact same set of issues exists for leases as well.
> > 
> > Hence the solution:
> > Make the migration of locks and leases during graph switches and file
> > migrations server driven instead of client driven. This can be
> > achieved by changing the format of the client uuid.
> > 
> > Client uuid currently:
> > %s(ctx uuid)-%s(protocol client name)-%d(graph id)%s(setvolume
> > count/reconnect count)
> > 
> > Proposed Client uuid:
> > "CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s-PC_NAME:%s-RECON_NO:%s"
> > -  CTX_ID: This will be constant per client.
> > -  GRAPH_ID, PID, HOST, PC_NAME (protocol client name) and RECON_NO
> >    (setvolume count) remain the same.
> > 
> > With this, the first part of the client uuid, CTX_ID+GRAPH_ID,
> > remains constant across file migrations, which makes the migration
> > easier.
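> > 
> > As a minimal sketch (the helper name and buffer handling here are
> > illustrative, not the actual patch), the proposed layout could be
> > assembled like this:
> > 
> >   /* Illustrative only: assemble the proposed client uuid layout. */
> >   #include <stdio.h>
> >   #include <unistd.h>
> > 
> >   int
> >   build_client_uid (char *buf, size_t len, const char *ctx_id,
> >                     int graph_id, const char *host,
> >                     const char *pc_name, const char *recon_no)
> >   {
> >           /* getpid() fills the PID field; every other field is
> >            * passed in by the caller. */
> >           return snprintf (buf, len,
> >                            "CTX_ID:%s-GRAPH_ID:%d-PID:%d-HOST:%s"
> >                            "-PC_NAME:%s-RECON_NO:%s",
> >                            ctx_id, graph_id, (int) getpid (), host,
> >                            pc_name, recon_no);
> >   }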
> > 
> > Locks and leases store only the first part, CTX_ID+GRAPH_ID, as
> > their client identification. This means that when the new graph
> > connects, the locks and leases xlators walk through their databases
> > and update each stored client id with the new GRAPH_ID. Thus the
> > graph switch is made server driven, saving a lot of network traffic.
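> > 
> > Roughly, the server-side walk could look like the following sketch
> > (the structure and helper are hypothetical, meant only to illustrate
> > the idea, not the code in the patch):
> > 
> >   /* Hypothetical sketch: when a new graph connects, rewrite the
> >    * GRAPH_ID part of every stored client id whose CTX_ID matches
> >    * the reconnecting client. */
> >   #include <stdio.h>
> >   #include <string.h>
> > 
> >   struct stored_lock {
> >           char                client_id[256]; /* "CTX_ID:...-GRAPH_ID:..." */
> >           struct stored_lock *next;
> >   };
> > 
> >   void
> >   relabel_locks (struct stored_lock *head, const char *ctx_id,
> >                  int new_graph_id)
> >   {
> >           size_t plen = strlen (ctx_id);
> > 
> >           for (; head != NULL; head = head->next) {
> >                   if (strncmp (head->client_id, ctx_id, plen) == 0)
> >                           snprintf (head->client_id,
> >                                     sizeof (head->client_id),
> >                                     "%s-GRAPH_ID:%d",
> >                                     ctx_id, new_graph_id);
> >           }
> >   }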
> 
> What is the plan to have the CTX_ID+GRAPH_ID shared over multiple gfapi
> applications? This would be important for NFS-Ganesha failover, where
> one NFS-Ganesha process is stopped and the NFS-clients (by virtual-ip)
> move to another NFS-Ganesha server.
> 
> Will there be a way to set CTX_ID(+GRAPH_ID?) through libgfapi? That
> would allow us to add a configuration option to NFS-Ganesha and have the
> whole NFS-Ganesha cluster use the same locking/leases.
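> 
> Purely as an illustration (glfs_set_ctx_id() below is hypothetical --
> no such call exists in libgfapi today), the configuration could look
> something like:
> 
>   /* Hypothetical usage sketch; glfs_new(), glfs_set_volfile_server()
>    * and glfs_init() are real libgfapi calls, the CTX_ID setter is
>    * not. */
>   #include <glusterfs/api/glfs.h>
> 
>   int
>   ganesha_setup (void)
>   {
>           glfs_t *fs = glfs_new ("myvolume");
>           if (!fs)
>                   return -1;
>           glfs_set_volfile_server (fs, "tcp", "server1", 24007);
>           /* Hypothetical: every NFS-Ganesha head would set the same
>            * CTX_ID before glfs_init(), so locks and leases survive
>            * failover between heads.
>            * glfs_set_ctx_id (fs, "ganesha-cluster-A"); */
>           return glfs_init (fs);
>   }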
> 
> Thanks,
> Niels
> 
> 
> > 
> > Change-Id: Ia81d57a9693207cd325d7b26aee4593fcbd6482c
> > BUG: 1369028
> > Signed-off-by: Poornima G <pgurusid@xxxxxxxxxx>
> > Signed-off-by: Susant Palai <spalai@xxxxxxxxxx>
> > 
> > </commit-msg>
> > 
> > [1] http://review.gluster.org/#/c/13901/10/
> > 
> > regards,
> > Raghavendra
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel@xxxxxxxxxxx
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> 
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel


