Re: Volume management proposal (4.0)

> As I read this I assume this is to ease administration, and not to ease
> the code complexity mentioned above, right?
> 
> The code complexity needs to be eased, but I would assume that is a by
> product of this change.

Correct.  The goal is an easy-to-understand way for *users* to create
and administer volumes, one that addresses the complexity of multiple
storage types and workloads.  Cleaning up the volgen mess is just a
(welcome) side effect.

> > (B) Each volume has a graph representing steps 6a through 6c above (i.e.
> > up to DHT).  Only primary volumes have a (second) graph representing 6d
> > and 7 as well.
> 
> Do we intend to break this up into multiple secondary volumes, i.e. an
> admin can create a pure replicate secondary volume(s) and then create a
> further secondary volume from these adding, say DHT?

Yes, absolutely.  Once this is implemented, I expect to see multi-level
hierarchies quite often.  The most common use case would probably be for
tiering plus some sort of segregation by user/workload.  For example:

   tenant -+- tier -+- DHT + AFR/NSR on SSDs
           |        |
           |        +- tier -+- DHT + AFR/NSR on disks
           |                 |
           |                 +- DHT + EC on disks
           |
           +- tier -+- DHT + AFR/NSR
                    |
                    +- DHT + EC

Here we'd have five secondary volumes using DHT plus something else.  A
user could set options on them, add bricks to them, rebalance them, and
so on.  The three "tier" volumes are also secondary, composed directly
or indirectly from the first five.  A user would almost certainly have
to set options separately on each one to define different tiering
policies.  Finally we have the "tenant" volume, which segregates by
user/workload and is composed of the top two tier volumes.  This is the
only one that gets a full performance-translator stack pushed on top,
the only one that can be explicitly started/stopped, and the only one
that shows up in volume status by default.
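
For instance, setting different tiering policies on each of those tier
volumes might look like this (the volume names match the CLI sketch
further down; the option name is invented just to show the shape):

    # One "volume set" per tier volume, each with its own policy.
    volume set userA promote-frequency 300
    volume set userA-lower promote-frequency 3600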
   
> I ask this for 2 reasons,
> If we bunch up everything till 6c, we may not reduce admin complexity
> when creating volumes that involve multiple tiers, so we should/could
> allow creating secondary volumes and then further secondary volumes.
> 
> If we do _not_ bunch up then we would have several secondary volumes,
> then the settings (as I think about it) for each secondary volume
> becomes a bit more non-intuitive. IOW, we are dealing with a chain of
> secondary volumes and each with its own name, and would initiate admin
> operations (like rebalance) on possibly each of these. Not sure if I am
> portraying the complexity that I see well here.

Yes, there is still some complexity.  For example, a "rebalance" on a
DHT volume really does rebalance.  A "rebalance" on a "tenant" volume is
more of a reassignment/migration.  Both are valuable.  A user might wish
to do them separately, so it's important that we expose both *somehow*.
Exposing the DHT subtree as a secondary volume seems like an intuitive
way to do that, but there are others.

> Maybe a brief example of how this works would help clarify some thoughts.

Besides the above, here's a SWAG of what the CLI commands might look
like:

    # Create the three "base" secondary volumes for userA.
    volume create userA-fast replica 2 host1:brick1 ...
    volume create userA-medium replica 2 host2:brick2 ...
    volume create userA-slow disperse 8 host3:brick3 ...

    # Combine those into userA's full config.
    volume create userA-lower tier userA-medium userA-slow
    volume create userA tier userA-fast userA-lower

    # Now create user B's setup.
    volume create userB-fast replica 2 host4:brick4 ...
    volume create userB-slow disperse 8 host5:brick5 ...
    volume create userB tier userB-fast userB-slow

    # Combine them all into one volume and start the whole thing.
    volume create allusers tenant userA userB
    volume start allusers
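
Under the hood, the generated graph for "allusers" might then nest
something like this, in ordinary volfile terms.  This is only a sketch:
the "tier" and "tenant" translator type names are placeholders for
whatever we actually implement, and the leaf subvolume names are made
up.

    # Secondary volume: DHT over userB's replica pairs (its graph stops here).
    volume userB-fast
        type cluster/distribute
        subvolumes userB-fast-replica-0 userB-fast-replica-1
    end-volume

    # Secondary volume: the tier combining userB's fast and slow subtrees.
    volume userB
        type cluster/tier                # placeholder type name
        subvolumes userB-fast userB-slow
    end-volume

    # ... userA's side has the same shape, one level deeper ...

    # Primary volume: only this one gets the performance translators
    # (write-behind, io-cache, ...) stacked on top of it.
    volume allusers
        type cluster/tenant              # placeholder type name
        subvolumes userA userB
    end-volume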

So much for creation.  What about administrative actions later?

    # Add some space to user A's slow tier.
    volume add-brick userA-slow host6:brick6
    volume rebalance userA-slow

    # Reallocate space between user A and user B.
    volume set allusers quota-userA 40%
    volume set allusers quota-userB 60%
    volume rebalance allusers
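
And a rough guess at inspection, since only primary volumes are meant
to show up by default (the flag name here is purely a placeholder):

    # Shows only "allusers" by default.
    volume status
    # Hypothetical flag to list the secondary volumes underneath it too.
    volume status allusers --include-secondary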

Does that help?
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel