The Manila RFEs and why so

Dear GlusterFS community,

please let us (the Manila Team) put forward the list of
features we'd need from GlusterFS. We filed them as RFEs
in Bugzilla; let me present them in three groups:

- Directory level operations:

    Bug 1226207 – [RFE] directory level snapshot create
        https://bugzilla.redhat.com/show_bug.cgi?id=1226207

    Bug 1226210 – [RFE] directory level snapshot clone
        https://bugzilla.redhat.com/show_bug.cgi?id=1226210

    Bug 1226220 – [RFE] directory level SSL/TLS auth
        https://bugzilla.redhat.com/show_bug.cgi?id=1226220

    Bug 1226788 – [RFE] per-directory read-only access
        https://bugzilla.redhat.com/show_bug.cgi?id=1226788

- Smart volume management:

    Bug 1226772 – [RFE] GlusterFS Smart volume management
        https://bugzilla.redhat.com/show_bug.cgi?id=1226772

- Query features:

    Bug 1226225 – [RFE] volume size query support
        https://bugzilla.redhat.com/show_bug.cgi?id=1226225

    Bug 1226776 – [RFE] volume capability query
        https://bugzilla.redhat.com/show_bug.cgi?id=1226776

Let me provide a little background so that you can make
sense of these groups.

Manila is the filesystem provisioning service of
OpenStack. In Manila, a provisionable file tree is called
a share. In general, when implementing shares with a
particular backend, we have to decide what kind of entity
on the backend will be mapped to a share. With GlusterFS,
we gave two different answers to that: one we call the
"vol[ume] mapped share layout", the other the
"dir[ectory] mapped share layout". With vol mapped, a
complete volume backs a given share; with dir mapped, a
top-level directory of a given volume constitutes a share
(theoretically there is a pool of volumes within which
the share-backing directories are created; in the current
implementation, however, the pool size is one).

There could be a discussion of the overall merit and the
inherent advantages / limitations / trade-offs of the two
layouts. I wish we were in a position to have that
discussion. Instead, the truth is that we are juggling
two layouts because neither of them allows us to provide
a feature-complete Manila share driver implementation. So
we provide both and let the user trade off between
partial sets of core functionality.

To make the dir mapped share layout feature complete, we
need directories that behave more like volumes; those
behavioral aspects are collected in the "Directory level
operations" group.

To make the vol mapped share layout feature complete, we
need volumes that behave more like directories, at least
in terms of dynamic creation. To create a directory, the
only thing you have to provide is its name, and there you
go. We need a volume creation operation that creates a
volume just by being passed its name and prospective
size. The RFEs of this kind (that single one) make up the
second group, "Smart volume management".
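The contract we are asking for can be sketched as a hypothetical interface. Nothing below is an existing GlusterFS API; the name and signature are made up purely to illustrate the mkdir-like shape of the operation:

```python
# Hypothetical sketch only -- no such GlusterFS API exists today.
# The point is the mkdir-like contract: the caller supplies a name and
# a prospective size, and the backend takes care of everything else
# (brick placement, volume creation, volume start).

def create_volume(name: str, size_gb: int) -> str:
    """Create and start a volume of the given size; return its name."""
    if not name or size_gb <= 0:
        raise ValueError("a name and a positive size are required")
    # ... brick allocation and the actual volume create/start would
    # happen here, hidden from the caller ...
    return name
```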

The third group, "Query features", is share layout
agnostic: it is about high-level query features that can
make the integration between Manila (or other cloud
services) and GlusterFS less cumbersome than it is now.

(

FYI -- workarounds:

For dir mapped layout, there is no workaround for the missing
features.

For vol mapped layout, lack of smart volume management is
worked around by using a pool of pre-existing gluster volumes
for backing the shares.
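A minimal sketch of that pool workaround (the helper and the volume
names are hypothetical, not actual Manila code):

```python
# Shares are backed by volumes picked from a pre-created pool, which
# side-steps the need for dynamic volume creation.

def allocate_backing_volume(pool, in_use):
    """Return a free volume from the pre-existing pool, or fail."""
    for vol in pool:
        if vol not in in_use:
            return vol
    raise RuntimeError("pool exhausted: no free backing volume left")

pool = ["manila-gvol-01", "manila-gvol-02", "manila-gvol-03"]
in_use = {"manila-gvol-01"}
print(allocate_backing_volume(pool, in_use))  # -> manila-gvol-02
```

The obvious limitation, which motivates the "Smart volume management"
RFE, is that the pool is static: once it is exhausted, no new shares
can be provisioned without operator intervention.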

For queries:
- lack of proper capability queries is worked around in
  various ad hoc ways, like checking version numbers or
  trial and error
- lack of size query is worked around by a naming
  convention for volumes, agreed between the Manila and
  GlusterFS ends, that encodes their size. We could also
  do a service mount of the volume and run df(1), but
  that is cumbersome and does not scale.
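The naming-convention workaround can be sketched like this (the
"-<size>G" suffix shown here is a hypothetical convention, standing in
for whatever the Manila and GlusterFS ends actually agree on):

```python
import re

def size_gb_from_name(volume_name):
    """Extract the size (in GB) encoded in a volume's name."""
    m = re.search(r"-(\d+)G$", volume_name)
    if m is None:
        raise ValueError("no size encoded in %r" % volume_name)
    return int(m.group(1))

print(size_gb_from_name("manila-gvol-10G"))  # -> 10
```

A proper size query RFE (bug 1226225) would make this parsing, and the
convention behind it, unnecessary.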

)

So from a high level perspective, the quest is to bring
volumes and directories closer together -- from either
end. We are not inherently biased towards either layout /
convergence approach; our priority is to have one that
works well. So the primary question is: which could be
delivered sooner, and in what timeframe? (We would
ideally integrate the GlusterFS features for Liberty,
which means they'd need to be delivered in early August.)

(

FYI -- efforts so far and perspectives as of my
understanding:

As noted, the "Smart volume management" group is a
singleton, but that single element is tricky. We have
heard promises of a glusterd rewrite that would include
the intelligence / structure for such a feature; we also
toyed with implementing a partial version of it with
configuration management software (Ansible), but the
whole concept was too experimental to dedicate ourselves
to, so we discontinued that.

OTOH, the directory level features are many, but they can
possibly be addressed with a single well-chosen volume
variant (something like LVs for all top-level
directories?) -- plus the UI would need to be tailored to
them.

The query features are not vital but have the advantage
of being simpler (especially the size query), which would
be a reason to put them on the schedule.

)

What we would like: prioritize between "Directory level
operations" and "Smart volume management", make a plan
for the chosen one, and execute that plan.

Thanks
Csaba
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel




