Re: Questions about the limitations on using Gluster Volume Tiering.

On 2 May 2017 at 01:01, Jeff Byers <jbyers.sfly@xxxxxxxxx> wrote:
Hello,

We've been thinking about giving GlusterFS Tiering a try, but
had noticed the following limitations documented in the:

    Red Hat Gluster Storage 3.2 Administration Guide

    Limitations of arbitrated replicated volumes:

        Tiering is not compatible with arbitrated replicated volumes.

    17.3. Tiering Limitations

        In this release, only Fuse and NFSv3 access is supported.
        Server Message Block (SMB) and NFSv4 access to tiered
        volume is not supported.

I don't quite understand the SMB restriction. Is the restriction
that you cannot use the GlusterFS 'gfapi' vfs interface to Samba,
but you can use Samba layered over a FUSE mount?
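
For reference, the two access paths being contrasted would look roughly like this in smb.conf; a sketch only, assuming a tiered volume named 'tiervol', the standard vfs_glusterfs module, and hypothetical share names:

```
# Path 1: Samba talks to the volume directly over libgfapi
[tiervol-gfapi]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = tiervol
    glusterfs:volfile_server = localhost
    read only = no

# Path 2: Samba layered over an ordinary FUSE mount of the same volume
#   mount -t glusterfs localhost:/tiervol /mnt/tiervol
[tiervol-fuse]
    path = /mnt/tiervol
    read only = no
```

If the restriction is about the tier xlator not behaving in the gfapi client graph, presumably only the first share would be affected.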

Is the problem here that with the 'gfapi' vfs interface, the
'tier-xlator' is not involved, or does not work properly?

BTW, my colleague did a quick test using SMB configured with
'libgfapi', and it seemed to work fine, but that doesn't mean
it was working correctly.

The same question applies to NFSv3 vs. NFSv4. My understanding
is that NFSv3 is served internally by GlusterFS, while NFSv4 is
external (NFS-Ganesha). That would lead me to expect tiering
problems with the internal NFSv3 server rather than NFSv4, yet
the documented limitation is the opposite.
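
As I understand it, the external NFSv4 path is NFS-Ganesha sitting on top of libgfapi. A sketch of what that export looks like, assuming the FSAL_GLUSTER backend and hypothetical host/volume names:

```
EXPORT {
    Export_Id = 1;
    Path = "/tiervol";
    Pseudo = "/tiervol";
    Access_Type = RW;
    Protocols = "4";
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";
        Volume = "tiervol";
    }
}
```

So, like Samba with the gfapi vfs, NFSv4 access goes through a gfapi client graph rather than through a FUSE mount.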

I guess I don't understand what's behind these limitations.

We ran into some bugs while testing this with tiering some time ago. I don't think we have tried it since, so we cannot certify it as yet, even though there have been changes in the meantime. Can the Samba and Ganesha folks update us on whether this has been tried recently?
 
Related question, the tiering operates on volume files, not
brick files, so tiering should be compatible with sharding?

Again, this has never been tried. However, tiering uses the rebalance migration code, and we have had users report issues when running rebalance on sharded volumes, so combining the two is probably not a good idea at the moment.
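
For anyone who wants to experiment despite that caveat, the combination under discussion would be set up roughly as follows; a sketch using the 3.x-era tiering CLI with hypothetical volume and brick names:

```
# Attach an SSD hot tier to an existing (cold) volume:
gluster volume tier tiervol attach replica 2 \
    ssd1:/bricks/hot1 ssd2:/bricks/hot2

# Sharding is a per-volume option; this is the untested combination:
gluster volume set tiervol features.shard on
gluster volume set tiervol features.shard-block-size 64MB

# Back out of the experiment by detaching the tier:
gluster volume tier tiervol detach start
gluster volume tier tiervol detach commit
```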

In a scale-out configuration, I assume that the heat
map/counters are shared globally, so that no matter which node
the client(s) read from or write to, accesses are counted
properly in the heat counts and clients get the correct file.

There must be some place that stores this meta-data. Is
this meta-data shared between all of the GlusterFS nodes,
or does it go on a GlusterFS meta-data volume? I didn't see
any way to specify the storage location. I suppose it could
go in a brick's .glusterfs/ directory, but isn't that
per-brick, not per-volume?

The heat count metadata is stored in a database on each brick by the Change Time Recorder (CTR) translator. Each tier process reads the information from these bricks and acts accordingly.
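
For what it's worth, the counters Nithya describes are tunable from the CLI; a sketch, assuming a tiered volume named 'tiervol' and the tiering options documented for the 3.x releases:

```
# record-counters must be on for the frequency thresholds to apply:
gluster volume set tiervol features.record-counters on
gluster volume set tiervol cluster.read-freq-threshold 2
gluster volume set tiervol cluster.write-freq-threshold 2

# How often (in seconds) the tier daemon scans each brick's local
# CTR database to promote or demote files:
gluster volume set tiervol cluster.tier-promote-frequency 120
gluster volume set tiervol cluster.tier-demote-frequency 3600
```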

- Nithya

Thanks.

~ Jeff Byers ~
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users

