Re: is it possible an active-active NFS server?

On 06/28/2010 01:28 PM, ESGLinux wrote:


2010/6/28 Gordan Bobic <gordan@xxxxxxxxxx>

    On 06/28/2010 12:26 PM, ESGLinux wrote:



        2010/6/28 Gordan Bobic <gordan@xxxxxxxxxx>


            On 06/28/2010 11:11 AM, ESGLinux wrote:

                Hi All,

                I'm going to set up an active-active file server, and my
                first idea is to configure an NFS service with luci, but
                now I am not sure whether that is possible. With luci I
                have always set up Active-Passive services, so that is my
                question.

                Any other approach to get an Active-Active file server?



            Not with NFS, since NFS has no feature for serving one share
            from multiple servers. But there is no reason you can't point
            half of the clients at one server and the other half at the
            other.
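
            For example (the hostnames nfs1/nfs2 and the export path
            below are made up), half of the clients would get an
            /etc/fstab entry like:

                nfs1:/export/data  /data  nfs  defaults  0 0

            and the other half:

                nfs2:/export/data  /data  nfs  defaults  0 0

            This assumes both servers export the same underlying data,
            e.g. via a shared cluster file system.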


        I hadn't thought of that; it could be a solution.

        One thing: I have been investigating, and I think it could be
        possible using Linux Virtual Server (administered with piranha).
        What do you think about it?
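
        I was imagining something like the following with ipvsadm (the
        virtual IP and real-server addresses below are made up),
        balancing NFS over TCP port 2049 across the two servers:

            # virtual service on the VIP, round-robin scheduling
            ipvsadm -A -t 192.168.0.100:2049 -s rr
            # two real servers behind it (direct routing)
            ipvsadm -a -t 192.168.0.100:2049 -r 10.0.0.1:2049 -g
            ipvsadm -a -t 192.168.0.100:2049 -r 10.0.0.2:2049 -g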


    I think you first need to list your requirements in a coherent
    manner, in terms of performance, features, and redundancy. The
    solution you should be looking for will then be more obvious.



Hi again,

You are right, this is a bit confusing (my customer told me: "I need a
cluster file server, it's your problem..." :-/ ). Now I'm investigating
how to do it.

What I basically need is:

I need HA access to the files, and it must be scalable. If the load
becomes a problem, I should be able to add another node to handle it
(so I thought of Active-Active, because with Active-Passive only one
node is active, so the load problem remains).

Whether it will scale depends almost exclusively on your access pattern. If you can group your cluster file system accesses so that nodes hardly ever access the same file system subtrees, then it will scale reasonably well. If nodes are going to access file system paths at random, performance will take a nosedive and get progressively worse as you add nodes.

This will scale linearly:
Node 1 accessing /my/path/1/whatever
Node 2 accessing /my/path/2/whatever

This will scale inversely (get slower):
Node 1 accessing /my/path
Node 2 accessing /my/path

Cluster file systems are generally slower at random access than standalone file systems, so you are likely to find that having a standalone failover (active-passive) solution is faster than a clustered active-active solution, especially as you add nodes.

So the question really comes down to access patterns. If you are going to have random access to lots of small files (e.g. Maildir), the performance will be poor to start with and get worse as you add nodes, unless you can engineer your solution so that access to a particular subtree always hits the same node. OTOH, for large file operations, bandwidth will dominate over lock acquisition time, so the performance will be OK and scale reasonably as you add nodes.
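
To illustrate the subtree-pinning idea, here is a rough sketch (the node names and the hashing scheme are made up, not anything GFS provides): a front end hashes each subtree path to a fixed node, so the DLM locks for that subtree stay on one machine.

import hashlib

NODES = ["node1", "node2"]  # hypothetical cluster node names

def node_for_subtree(path):
    # Hash the subtree path so the same subtree always maps to
    # the same node, keeping its lock traffic local to that node.
    digest = hashlib.md5(path.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# All access to a given Maildir is routed to one node:
print(node_for_subtree("/var/mail/user1"))  # always the same node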

Note that this isn't something specific to GFS - pretty much all cluster file systems behave this way.

Gordan

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


