Re: Simplify creation and set-up of meta-volume (shared storage)

With this option, the volume will be created and explicitly mounted on all the nodes that are currently part of the cluster. Please note that new nodes added to the cluster will not have the meta volume mounted explicitly.

So if the console tries to use the volume from a peer that was added to the cluster after the option was set, that peer will not have the mount available to it. Hence I feel it's best that the console continues with the explicit mounting, and keeps showing an explicit warning during stop/remove-brick of the meta volume in the console while allowing the operations.
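For context, the explicit mount the console would keep performing is essentially of this form (the hostname is a placeholder; the volume name and mount point come from the feature description below):

```
# Sketch of the console's explicit mount on a newly added node;
# <any-cluster-node> stands for any node hosting the shared storage volume.
mount -t glusterfs <any-cluster-node>:/gluster_shared_storage \
      /var/run/gluster/shared_storage
```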

There is no other impact on the console, as far as this feature is concerned.

Regards,
Avra

On 05/18/2015 09:27 AM, Shubhendu Tripathi wrote:
Avra,

We are planning to provide mounting of the meta volume on the nodes of the cluster from Console, as part of volume syncing and addition of new nodes to the cluster. It looks like if this option is set, the explicit mounting of the meta volume from the console is not required, and it would be taken care of by Gluster.

Currently we show an explicit warning during stop/remove-brick of the meta volume in the console and allow the operations. I don't feel there would be any impact due to the new feature.

Kindly let us know if there is any other impact on the console, or if we need to take care of anything else as a result of this feature.

Thanks and Regards,
Shubhendu

On 05/15/2015 07:30 PM, Avra Sengupta wrote:
Hi,

A shared storage meta-volume is currently being used by snapshot-scheduler, geo-replication, and nfs-ganesha. In order to simplify the creation and set-up of the same, we are introducing a global volume set option (cluster.enable-shared-storage).
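For reference, here is a sketch of what enabling the option looks like from the CLI, assuming the usual "gluster volume set" syntax, with "all" applying the option cluster-wide rather than to a single volume:

```
gluster volume set all cluster.enable-shared-storage enable
```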

On enabling this option, the system analyzes the number of peers in the cluster which are currently connected, and chooses three such peers (including the node the command is issued from). From these peers a volume (gluster_shared_storage) is created. Depending on the number of peers available, the volume is either a replica 3 volume (if there are 3 connected peers), a replica 2 volume (if there are 2 connected peers), or a single brick volume (if there is only one node in the cluster). "/var/run/gluster/ss_brick" serves as the brick path on each node for the shared storage volume. We also mount the shared storage at "/var/run/gluster/shared_storage" on all the nodes in the cluster as part of enabling this option.
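The peer-count-to-volume-type decision above can be sketched in a few lines of shell. This is a hypothetical illustration of the rule as described, not the actual hook-script code; choose_volume_type is an invented helper name:

```shell
#!/bin/sh
# Illustrative sketch: map the number of connected peers to the
# volume type described above (not the real hook-script logic).
choose_volume_type() {
    peers=$1
    if [ "$peers" -ge 3 ]; then
        echo "replica 3"      # 3 or more connected peers: replica 3
    elif [ "$peers" -eq 2 ]; then
        echo "replica 2"      # exactly 2 connected peers: replica 2
    else
        echo "single brick"   # lone node: plain single-brick volume
    fi
}

choose_volume_type 4   # prints "replica 3"
```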

Once the volume is created and mounted, the maintenance of the volume, like adding bricks, removing bricks etc., is expected to be the onus of the user.
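As an illustration, such user-driven maintenance would use the ordinary volume commands. Hostnames and replica counts here are placeholders, and the exact remove-brick form depends on the volume's layout:

```
# Grow the shared storage volume by one brick (placeholder node name).
gluster volume add-brick gluster_shared_storage replica 3 \
        newnode:/var/run/gluster/ss_brick
# Shrink it again, dropping the replica count (placeholder node name).
gluster volume remove-brick gluster_shared_storage replica 2 \
        oldnode:/var/run/gluster/ss_brick force
```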

On disabling the option, we provide the user a warning, and on affirmation we stop the shared storage volume and unmount it from all the nodes in the cluster.
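Disabling, by symmetry with the enable command, would be a sketch along these lines:

```
gluster volume set all cluster.enable-shared-storage disable
```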

These are achieved with hook-scripts as part of the volume set option. If anyone is interested in having a look at the patch, it's available for review at http://review.gluster.org/#/c/10793/ . If there is any feedback or suggestion regarding the same, please feel free to share it.

Regards,
Avra

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel

