Re: Release 3.10 feature proposal: Gluster Block Storage CLI Integration

On 12/21/2016 09:15 PM, Prasanna Kalever wrote:
[Top posting]

I agree with Niels and Shyam here. We are now trying to decouple the
gluster-block CLI from the gluster CLI.
Since it doesn't depend on core gluster changes anyway, I think it's
better to move it out. Also, I do not see a decent tool/util that does
these jobs, hence it's better to make it a separate project (maybe
gluster-block).
The design-side changes are still under discussion; I shall give an
update once we conclude on it.

Since we plan to maintain gluster-block as a separate project, I don't
think we still need to make it a 3.10 feature.
With gluster-block we will aim to support all possible versions of gluster.

Should this be bundled with gluster? If so, it may be a good thing to track that part against gluster releases (3.10 or otherwise). Just a thought.


Thanks,
--
Prasanna


On Mon, Dec 19, 2016 at 5:10 PM, Shyam <srangana@xxxxxxxxxx> wrote:
On 12/14/2016 01:38 PM, Niels de Vos wrote:

On Wed, Dec 14, 2016 at 12:40:53PM +0530, Prasanna Kumar Kalever wrote:

On 16-12-14 07:43:05, Niels de Vos wrote:

On Fri, Dec 09, 2016 at 11:28:52AM +0530, Prasanna Kalever wrote:

Hi all,

As we know, gluster block storage creation and maintenance is not simple
today, as it involves all the manual steps mentioned at [1].
To make these basic operations simple, we would like to integrate the
block story with the gluster CLI.
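
For reference, the manual flow today looks roughly like the sketch below
(assuming the user:glfs backstore handler shipped with tcmu-runner; the
volume, file, and IQN names here are illustrative, and portal/ACL setup
is omitted):

# mount -t glusterfs host1:/block-vol /mnt
# truncate -s 10G /mnt/block0.img
# targetcli /backstores/user:glfs create block0 10G block-vol@host1/block0.img
# targetcli /iscsi create iqn.2016-12.org.gluster:block0
# targetcli /iscsi/iqn.2016-12.org.gluster:block0/tpg1/luns create /backstores/user:glfs/block0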

As part of it, we would like to introduce the following commands:

# gluster block create <NAME>
# gluster block modify <SIZE> <AUTH> <ACCESS MODE>
# gluster block list
# gluster block delete <NAME>


I am not sure why this needs to be done through the Gluster CLI.
Creating a file on a (how to select?) volume and then exporting it as a
block device through tcmu-runner (iSCSI) seems more like a task similar
to what libvirt does with VM images.


Maybe not exactly, but similar.


Would it not be more suitable to make this part of whatever tcmu admin
tools are available? I assume tcmu needs to address this, with similar
configuration options for LVM and other backends too. Building on top of
that may give users of tcmu a better experience.


s/tcmu/tcmu-runner/

I don't think there are separate tools/utils for tcmu-runner as of now.
Also, we are currently using tcmu-runner to export the file in the
gluster volume as an iSCSI block device; in the future we may move to
qemu-tcmu (which does the same job as tcmu-runner, except that it uses
the qemu gluster driver) for benefits like snapshots?


One of the main objections I have is that the CLI is currently very
'dumb'. Integrating with it so that it generates the tcmu configuration,
and also letting the (currently management-only!) CLI create the
disk-images on a volume, seems to break the current separation of tasks.
Integrations are good to have, but they should be done at the
appropriate level.

Teaching the CLI all it needs to know about tcmu-runner, including
setting suitable permissions on the disk-image on a volume, access
permissions for the iSCSI protocol, and possibly more, seems like quite
a lot of effort to me. I prefer to keep the CLI as simple as possible,
and any integration should use the low-level tools (CLI, gfapi, ...)
that are available.


+1, I agree. This seems more like a task for a tool that uses gfapi for
the file creation and other CLI/deploy options for managing tcmu-runner.
The latter is more of a tcmu project, or gluster-block as the
abstraction if we want to gain eyeballs on the support.
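
A minimal sketch of what such a wrapper could look like, assuming it
shells out to targetcli; the helper name and arguments are hypothetical,
and a real implementation would create the backing file through gfapi
rather than a temporary FUSE mount:

#!/bin/sh
# Hypothetical gluster-block-style helper: create a backing file on the
# volume, then export it through the tcmu-runner user:glfs backstore.
create_block() {
    vol=$1; name=$2; size=$3; host=$4
    tmp=$(mktemp -d)
    # Stand-in for gfapi: allocate the backing file over a FUSE mount.
    mount -t glusterfs "$host:/$vol" "$tmp"
    truncate -s "$size" "$tmp/$name.img"
    umount "$tmp" && rmdir "$tmp"
    # Register the file as a user:glfs backstore and export it via iSCSI.
    targetcli /backstores/user:glfs create "$name" "$size" "$vol@$host/$name.img"
    targetcli /iscsi create "iqn.2016-12.org.gluster:$name"
    targetcli "/iscsi/iqn.2016-12.org.gluster:$name/tpg1/luns" create "/backstores/user:glfs/$name"
}

create_block block-vol block0 10G host1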



When we integrate tcmu-runner now, people will hopefully use it. That
means it cannot easily be replaced by another project. qemu-tcmu would
then be an addition to the tcmu integration, leaving a huge maintenance
burden.

I have a strong preference to see any integrations done at a higher
level. If there are no tcmu-runner tools (like targetcli?) to configure
iSCSI backends and other options, it may make sense to start a new
project dedicated to iSCSI access for Gluster. If no suitable projects
exist, a gluster-block-utils project can be created. Management
utilities also benefit from being written in languages other than C; a
new project offers you many options there ;-)

Also, configuring and running tcmu-runner on each node in the cluster
for multipathing is not easy (take the case where we have more than a
dozen nodes). If we could do this via the gluster CLI with one simple
command from any node, we could configure and run tcmu-runner on all
the nodes.
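
For illustration, the initiator side of such a multipath setup today
requires logging in against every portal separately (standard open-iscsi
and device-mapper-multipath commands; the addresses and IQN are
illustrative):

# iscsiadm -m discovery -t sendtargets -p host1:3260
# iscsiadm -m discovery -t sendtargets -p host2:3260
# iscsiadm -m node -T iqn.2016-12.org.gluster:block0 -l
# mpathconf --enable
# multipath -ll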


Right, sharing configurations between different servers is tricky. But
you also cannot assume that everyone can, or wants to, run the iSCSI
target on the Gluster storage servers themselves. For all other similar
integrations, users like to have the flexibility to run the additional
services (QEMU, Samba, NFS-Ganesha, ..) on separate systems.

If you can add such a consideration to the feature page, I'd appreciate
it. Maybe other approaches have been discussed earlier as well? In that
case, those approaches should probably be added too.


Sure!


We may be missing something, so beefing up the feature page would
possibly help us understand the gaps. As of now, the gluster CLI seems
like the incorrect place to integrate this capability.



Thanks! I hope I explained my opinion well, and that you take it into
consideration.

Cheers,
Niels



--
Prasanna


Thanks,
Niels




[1]
https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/


Thanks,
--
Prasanna
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel







