Hi all,
This mail is open for discussion on gluster block store integration with heketi and its REST API interface design constraints.
                          ___ Volume Request ...
                         |
                         |
PVC claim -> Heketi ---->|
                         |                             __ BlockCreate
                         |                            |
                         |                            |__ BlockInfo
                         |                            |
                         |__ Block Request (APIs) --> |__ BlockResize
                                                      |
                                                      |__ BlockList
                                                      |
                                                      |__ BlockDelete
Heketi will have a block API and a volume API. When a user submits a Persistent Volume Claim, the Kubernetes provisioner, based on the storage class referenced by the PVC, talks to heketi for storage, and heketi in turn calls the block or volume APIs depending on the request.
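To make the request/response shape concrete, here is a minimal sketch of what the block create payloads could look like on the wire. Everything below is hypothetical for the sake of discussion: the field names (size, hacount, iqn, portals, and so on) are placeholders, not an existing heketi API.

package main

import (
	"encoding/json"
	"fmt"
)

// BlockCreateRequest is a hypothetical payload the kubernetes provisioner
// could POST to heketi when the storage class asks for block storage.
type BlockCreateRequest struct {
	Size     int      `json:"size"`               // requested size in GiB
	Name     string   `json:"name,omitempty"`     // optional block volume name
	HACount  int      `json:"hacount,omitempty"`  // number of target portals for multipathing
	Clusters []string `json:"clusters,omitempty"` // restrict placement to these clusters
}

// BlockCreateResponse is a hypothetical reply carrying the iSCSI
// connection details the initiator side needs.
type BlockCreateResponse struct {
	ID      string   `json:"id"`
	Size    int      `json:"size"`
	IQN     string   `json:"iqn"`     // target IQN to log in to
	Portals []string `json:"portals"` // host:port of the gluster nodes exporting the LUN
	LUN     int      `json:"lun"`
}

func main() {
	req := BlockCreateRequest{Size: 10, Name: "pvc-demo", HACount: 3}
	out, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(out))
}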
To my limited understanding, heketi currently creates clusters from the provided nodes, creates volumes, and hands them over to the user.
For the block-related APIs, it has to deal with files, right?
Here is what the block APIs look like, in short:
Create: heketi has to create a file in the volume, export it as an iSCSI target device, and hand it over to the user (a rough export sequence is sketched after this list).
Info: show block store information across all the clusters: connection info, size, etc.
Resize: resize the file in the volume and refresh the connections from the initiator side.
List: list the connections.
Delete: log out the connections and delete the file in the gluster volume.
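For the Create step above, the export side could come down to a handful of targetcli calls run on the gluster node (this is also what question 3 below touches on). A minimal sketch in go, assuming the backing file has already been created on the volume; the paths, the IQN, and the runTargetcli helper are made up for illustration, and portal/ACL setup is left out.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// runTargetcli is a hypothetical helper that shells out to targetcli;
// the same steps could instead be driven through gdeploy.
func runTargetcli(args ...string) error {
	out, err := exec.Command("targetcli", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("targetcli %v failed: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	// Illustrative values: the backing file on the (FUSE-mounted) gluster
	// volume and a made-up target IQN.
	backing := "/mnt/blockhostvol/block-pvc-demo"
	iqn := "iqn.2016-12.org.gluster:block-pvc-demo"

	// 1. Register the existing backing file as a fileio backstore
	//    (no size= given since the file is already sized).
	if err := runTargetcli("/backstores/fileio", "create",
		"name=block-pvc-demo", "file_or_dev="+backing); err != nil {
		log.Fatal(err)
	}
	// 2. Create the iSCSI target.
	if err := runTargetcli("/iscsi", "create", iqn); err != nil {
		log.Fatal(err)
	}
	// 3. Export the backstore as a LUN under the target's portal group.
	if err := runTargetcli("/iscsi/"+iqn+"/tpg1/luns", "create",
		"/backstores/fileio/block-pvc-demo"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("exported", backing, "as", iqn)
}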
Couple of questions:
1. Should the block API have sub-APIs such as FileCreate, FileList, FileResize, FileDelete, etc., which the block API then uses internally, since these operations mostly deal with files?
2. How do we create the actual file in the volume: via a FUSE mount (which may involve an extra process running) or via gfapi? And if gfapi, should we go with the C APIs, the python bindings, or the go bindings? (A FUSE-based sketch follows these questions.)
3. Should the targetcli-related (LUN exporting) setup be done from heketi, or should we seek help from gdeploy for this?
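On question 2, the FUSE route is the simplest one to sketch: mount the volume (the extra glusterfs client process mentioned above), then create and size the backing file with ordinary file operations. gfapi, whether via the C library, the python bindings, or the go bindings, would do the same create/truncate directly against the volume without the mount. Server, volume, and file names below are made up for illustration.

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Illustrative values only.
	const (
		server   = "node1.example.com"
		volume   = "blockhostvol"
		mountDir = "/mnt/blockhostvol"
		blkName  = "block-pvc-demo"
		sizeGiB  = 10
	)

	// FUSE mount of the gluster volume; this is the extra client process
	// mentioned in question 2. With gfapi this step goes away.
	if err := os.MkdirAll(mountDir, 0755); err != nil {
		log.Fatalf("mkdir failed: %v", err)
	}
	if err := exec.Command("mount", "-t", "glusterfs",
		server+":/"+volume, mountDir).Run(); err != nil {
		log.Fatalf("mount failed: %v", err)
	}

	// Create the backing file and size it sparsely with truncate;
	// fallocate could be used instead for full preallocation.
	path := filepath.Join(mountDir, blkName)
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_RDWR, 0600)
	if err != nil {
		log.Fatalf("create failed: %v", err)
	}
	defer f.Close()
	if err := f.Truncate(int64(sizeGiB) << 30); err != nil {
		log.Fatalf("truncate failed: %v", err)
	}
	log.Printf("created %s (%d GiB), ready to be exported as an iSCSI LUN", path, sizeGiB)
}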
Thoughts?
Note: nothing stated in this mail is fixed; it is all just part of the initial discussion.
Cheers,
--
Prasanna