Hi Mohan,

I believe there is a mapping for getting an LV from just its UUID. lvm_lv_from_uuid() might help?

With regards,
Shishir

----- Original Message -----
From: "M. Mohan Kumar" <mohan@xxxxxxxxxx>
To: "Shishir Gowda" <sgowda@xxxxxxxxxx>
Cc: gluster-devel@xxxxxxxxxx
Sent: Wednesday, July 11, 2012 7:03:50 PM
Subject: Re: [RFC] Block Device Xlator Design

On Wed, 11 Jul 2012 05:45:56 -0400 (EDT), Shishir Gowda <sgowda@xxxxxxxxxx> wrote:
> Hi Mohan,
>
> For the persistent gfid issue discussed earlier, could we look into mapping
> the lv_uuid and vg_uuid to act as the gfids for glusterfs?
> Assuming these to be unique across the cluster, we could guarantee
> persistence of these attributes rather than generating them on the fly.
>
Yes, we can use the LV/VG UUID for the gfid.

> If the above scenario can be utilized, there are further issues to be
> looked into.
> 1. Can these UUIDs be set through any interface?
I think it is not possible; these UUIDs are generated by the lvm library. Do
we really need to set UUIDs?

> 2. Can just a UUID be sufficient for us to map it to a path? (to support
> nameless lookups)
I am not sure about this question; should I look into some function?

> With regards,
> Shishir
>
>
> ----- Original Message -----
> From: "M. Mohan Kumar" <mohan@xxxxxxxxxx>
> To: gluster-devel@xxxxxxxxxx
> Sent: Wednesday, July 4, 2012 9:57:24 PM
> Subject: [RFC] Block Device Xlator Design
>
>
> Hello,
>
> A couple of weeks ago I posted GlusterFS server xlator patches that enable
> exporting block devices (currently only logical volumes) as regular files
> on the client side. Here is the link to the patches:
> http://review.gluster.com/3551
>
> I would like to discuss the design of this xlator.
>
> The current code uses the lvm2-devel library to find the list of logical
> volumes in the given volume group (in the BD xlator each volume file
> exports one volume group; in the future we may extend this to export
> multiple volume groups if needed).
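[Editor's note: a minimal sketch of the UUID-to-LV lookup Shishir suggests, assuming the liblvm2app API shipped with lvm2-devel; the VG name "glustervg" and the UUID string are hypothetical placeholders, and error handling is trimmed. This is illustrative only and not part of the posted patches.]

```c
/* Sketch: look up an LV handle directly from its UUID via liblvm2app.
 * "glustervg" and the UUID below are placeholders, not real objects. */
#include <stdio.h>
#include <lvm2app.h>

int main(void)
{
    lvm_t lvm = lvm_init(NULL);                 /* use default lvm.conf */
    if (!lvm)
        return 1;

    vg_t vg = lvm_vg_open(lvm, "glustervg", "r", 0);
    if (vg) {
        /* lvm_lv_from_uuid() maps a UUID straight to an lv handle,
         * so no name-based scan of the VG is needed. */
        lv_t lv = lvm_lv_from_uuid(vg, "PLACEHOLDER-LV-UUID");
        if (lv)
            printf("found LV: %s\n", lvm_lv_get_name(lv));
        lvm_vg_close(vg);
    }
    lvm_quit(lvm);
    return 0;
}
```

If this works as expected, the same handle could then serve nameless lookups keyed by gfid.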
> The init routine of the BD xlator constructs an internal data structure
> holding the list of all logical volumes in the VG.
>
> When an open request comes in, the corresponding open interface in the BD
> xlator opens the intended LV via the path /dev/<vg-name>/<lv-name>. This
> path is actually a symbolic link to /dev/dm-<x>. Is my assumption about
> having this /dev/<vg-name>/<lv-name> path correct? Will it always work?
>
> Also, if there is a request to create a file (which in turn has to create
> an LV on the server side), the lvm2 API is used to create a logical volume
> in the given VG, but with a pre-determined size, i.e. one logical extent,
> because the create interface does not take size as one of its parameters,
> while size is required to create a logical volume.
>
> In a typical VM disk image scenario, qemu-img first creates the file and
> then uses truncate to set the required file size. So this should not be an
> issue with that kind of usage.
>
> But there are other issues in the BD xlator code as of now. The lvm2 API
> does not support resizing an LV or creating a snapshot of an LV, but there
> are tools available to do both. So the BD xlator code forks and executes
> the required binary to achieve the functionality; i.e., when truncate is
> called on a BD xlator volume, it results in running the lvresize binary
> with the required parameters. I checked with the lvm2-devel mailing list
> about their plans to support LV resizing and snapshot creation and am
> waiting for responses.
>
> Is it okay to rely on external binaries to create a snapshot of an LV and
> to resize it?
>
> Also, when an LV is created out of band, for example using the gluster CLI
> to create an LV (I am working on the gluster CLI patches to create LVs and
> to copy/snapshot LVs), the BD xlator will not be aware of these changes. I
> am looking into whether the 'notify' feature of xlators can be used to
> notify the BD xlator to create an LV or snapshot instead of doing it from
> the gluster management xlators.
> I have sent a mail to gluster-devel asking for some more information about
> this.
>
> Regards,
> M. Mohan Kumar.
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>