On 05/15/2015 07:30 PM, Avra Sengupta wrote:
> Hi,
>
> A shared storage meta-volume is currently being used by
> snapshot-scheduler, geo-replication, and nfs-ganesha. In order to
> simplify its creation and set-up, we are introducing a global volume
> set option (cluster.enable-shared-storage).
>
> On enabling this option, the system looks at the peers in the cluster
> that are currently connected and chooses three such peers (including
> the node the command is issued from). From these peers a volume
> (gluster_shared_storage) is created. Depending on the number of peers
> available, the volume is either a replica 3 volume (if there are 3
> connected peers), a replica 2 volume (if there are 2 connected
> peers), or a single-brick volume (if there is only one node in the
> cluster). "/var/run/gluster/ss_brick" serves as the brick path on
> each node for the shared storage volume. We also mount the shared
> storage at "/var/run/gluster/shared_storage" on all the nodes in the
> cluster as part of enabling this option.
>
> Once the volume is created and mounted, the maintenance of the
> volume, such as adding and removing bricks, is expected to be the
> onus of the user.
>
> On disabling the option, we provide the user a warning, and on
> affirmation from the user we stop the shared storage volume and
> unmount it from all the nodes in the cluster.
>
> These are achieved with hook scripts as part of the volume set
> option. If anyone is interested in having a look at the patch, it is
> available for review at http://review.gluster.org/#/c/10793/ . If
> there is any feedback or suggestion regarding the same, please feel
> free to share it.

Didn't expect this patch to come so fast :) I will take a look at it in
a few days.

~Atin

> Regards,
> Avra

--
~Atin

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
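
For readers who want a concrete picture of the behaviour described in the
mail above, below is a minimal, hypothetical Python sketch of the
peer-count-to-replica decision and the commands it implies. The brick path
(/var/run/gluster/ss_brick), mount point (/var/run/gluster/shared_storage),
and volume name (gluster_shared_storage) come from the mail; the function
names and the exact CLI strings are illustrative assumptions, not the
actual hook script from the patch under review. The sketch only prints the
commands it would run, and assumes the option is turned on with something
like "gluster volume set all cluster.enable-shared-storage enable".

#!/usr/bin/env python
# Hypothetical sketch of the replica-count decision described in the mail.
# It does NOT execute any gluster commands; it only prints what would run.

BRICK_PATH = "/var/run/gluster/ss_brick"          # brick path from the mail
MOUNT_POINT = "/var/run/gluster/shared_storage"   # mount point from the mail
VOLNAME = "gluster_shared_storage"                # volume name from the mail

def build_commands(connected_peers):
    """connected_peers: hostnames currently connected, including the node
    the command is issued from. At most three of them are used."""
    nodes = connected_peers[:3]
    bricks = " ".join("%s:%s" % (host, BRICK_PATH) for host in nodes)

    create = "gluster volume create %s" % VOLNAME
    if len(nodes) >= 2:
        # replica 3 with three connected peers, replica 2 with two;
        # a single peer gives a plain single-brick volume
        create += " replica %d" % len(nodes)
    create += " " + bricks

    return [
        create,
        "gluster volume start %s" % VOLNAME,
        # the mail says the volume is mounted on every node in the cluster
        "mount -t glusterfs localhost:/%s %s" % (VOLNAME, MOUNT_POINT),
    ]

if __name__ == "__main__":
    for cmd in build_commands(["node1", "node2", "node3"]):
        print(cmd)

On disable, the inverse would presumably be an unmount of the mount point
on every node followed by stopping the volume, after the warning and user
confirmation mentioned in the mail; again, that is an inference from the
description, not the actual hook-script behaviour.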