My proposal is for gdeploy to communicate with Heketi, glusterd, and
the system itself to service requests from the administrator.
It would communicate with Heketi for all volume allocation/deallocation,
with glusterd for any modifications on the volume, and with the node
operating system (if really necessary) for any required setup.

The following is just a brainstorm, not a spec file by any means, just
an idea of what the workflow could be like.

Here is a possible workflow:

# Admin: Create SSH keys
# Admin: Set up the Heketi service
- Heketi is configured with the private SSH key.

# Admin: Raw nodes are set up with only the gluster service and the
public SSH key.

# Admin: Create topology.json with clusters, nodes, zones, and devices.
Admin needs to create a topology.json file. See the example in
https://github.com/heketi/vagrant-heketi/blob/master/roles/heketi/files/topology_libvirt.json
(a sketch of the format is at the end of this mail).

* gdeploy topology load -json=topology.json
- Assume that the location of the Heketi server is known, either by an
environment variable, a configuration file, or a switch.
- At this point Heketi has been loaded with the configuration of the
data center.

# Display topology
* gdeploy topology show
Cluster [2345235]
|- Node [my.node.com]
   |- Device [/dev/sdb]
   |- Device [/dev/sdc]
Cluster [F54DD]
|- Node...
...

# Display node information
* gdeploy node info [hostname or uuid]

# Create a volume
* gdeploy volume create -size=100

# Create volumes from a configuration file
* gdeploy volume create -c volumes.conf

$ cat volumes.conf
[volume]
action=create
volname=Gdeploy_test    <-- optional
transport=tcp,rdma      <-- would need to be added to Heketi
replica=yes
replica_count=2

[clients]
action=mount
#volname=glustervol     (if not specified earlier in the 'volume' section)
hosts=node2.redhat.com
fstype=glusterfs
client_mount_points=/mnt/gluster

# Set volume options, snapshots, etc.
These would first talk to Heketi to determine which servers are
servicing the volume. gdeploy can then communicate with glusterd to
execute the volume modifications (sketched at the end of this mail).
* gdeploy volume options <vol name and cluster | UUID> <option=val>
* gdeploy volume options <vol name and cluster | UUID> -c options.conf

# Destroy a volume
Here gdeploy would first check for snapshots. If there are none, it
would request the work from Heketi.

These are just some possible methods of how they could interact.
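To make some of the steps above concrete, a few rough sketches follow;
everything in them (hostnames, addresses, ids, option names) is made up
unless noted. First, the topology file. The structure below follows the
Heketi example linked above:

$ cat topology.json
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": ["my.node.com"],
                            "storage": ["192.168.10.100"]
                        },
                        "zone": 1
                    },
                    "devices": ["/dev/sdb", "/dev/sdc"]
                }
            ]
        }
    ]
}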
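Loading that topology would, under the hood, be a series of Heketi REST
calls, roughly one per cluster, node, and device. The endpoint paths are
from Heketi's API documentation; the ids are placeholders and the exact
request fields may differ by version:

# Hypothetical sketch of what "gdeploy topology load" would send to
# Heketi, assuming HEKETI_SERVER holds the server location.

# Create the cluster; Heketi returns its id.
curl -s -X POST "$HEKETI_SERVER/clusters" \
     -H "Content-Type: application/json" -d '{}'

# Register each node in the cluster, with its zone and hostnames.
curl -s -X POST "$HEKETI_SERVER/nodes" \
     -H "Content-Type: application/json" \
     -d '{"cluster": "<cluster id>",
          "zone": 1,
          "hostnames": {"manage": ["my.node.com"],
                        "storage": ["192.168.10.100"]}}'

# Register each raw device on the node.
curl -s -X POST "$HEKETI_SERVER/devices" \
     -H "Content-Type: application/json" \
     -d '{"node": "<node id>", "name": "/dev/sdb"}'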
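Volume create/destroy would likewise map to Heketi's volume endpoints.
A sketch, assuming the size is in GB and the payload follows Heketi's
documented VolumeCreateRequest (field names may differ by version):

# Ask Heketi to allocate a 100 GB replica-2 volume
# ("gdeploy volume create -size=100" with replica_count=2).
curl -s -D headers.txt \
     -X POST "$HEKETI_SERVER/volumes" \
     -H "Content-Type: application/json" \
     -d '{"size": 100,
          "durability": {"type": "replicate",
                         "replicate": {"replica": 2}}}'

# Heketi queues the request and answers 202 Accepted with a Location
# header; gdeploy would poll that URL until the volume shows up.
location=$(awk 'tolower($1) == "location:" {print $2}' headers.txt | tr -d '\r')
curl -s "$HEKETI_SERVER$location"

# Destroying the volume is the reverse request (also asynchronous).
curl -s -X DELETE "$HEKETI_SERVER/volumes/<volume id>"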
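Finally, the options and destroy paths on the glusterd side. Heketi is
only asked which node serves the volume; the modification itself goes
through the gluster CLI, e.g. over SSH:

# Apply a volume option: any one node of the cluster can make the
# change cluster-wide through glusterd.
ssh root@my.node.com gluster volume set Gdeploy_test performance.cache-size 256MB

# Destroy path: check for snapshots first ...
ssh root@my.node.com gluster snapshot list Gdeploy_test

# ... and only if none exist, hand the deallocation back to Heketi
# (the DELETE call shown above).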
- Luis