Cluster with NFS


 



On Fri, Sep 27, 2013 at 10:30 AM, John Mark Walker <johnmark at gluster.org> wrote:
>
> ________________________________
>
> Thanks Marcus.
>
> I understand ... the native Gluster client has very poor performance ...
>
>
> I would be curious what your use case is and why performance is bad.
>

I would be curious too; I've seen exactly the reverse. I will admit
there seems to be some CPU overhead on the client, though. Could that
be what you are referring to?

>
> If I use NFS I could mount via an IP failover. If one of the fileservers is
> down, the IP failover switches to the second fileserver (managed not by the
> Gluster client but only by the network).
>
> Yes, you would have to implement failover. There are multiple ways to do it.

There are lots of ways. The easiest method I've used is an LVS load
balancer with a floating VIP managed by keepalived; however, there are
plenty of other tools for doing this.
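As a rough illustration, a floating VIP with keepalived boils down to one VRRP block per node. This is only a minimal sketch: the interface name, router ID, priority, and address below are all made up, and the backup node would carry the same block with `state BACKUP` and a lower priority.

```conf
# /etc/keepalived/keepalived.conf (master node) -- hypothetical values
vrrp_instance VI_1 {
    state MASTER            # the peer fileserver uses BACKUP
    interface eth0          # NIC carrying client traffic
    virtual_router_id 51    # must match on both nodes
    priority 100            # backup node uses something lower, e.g. 90
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24    # the VIP that NFS clients mount
    }
}
```

Clients mount the VIP rather than either server's real address, so when the master dies keepalived moves the VIP to the backup and clients reconnect without remounting.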

>
> Next test, i will install debian-zfs package and try to use this filesystem
> with GlusterFS.
>
>
> This seems to be getting more popular. Would love to see how it works.
>

I know ZFS is the cool new thing in software RAID filesystems, but be
warned: it is still in its infancy on Linux and has its own
performance issues, most notably with the ZIL. Right now you will
probably get better performance with XFS and either a good hardware
RAID controller or native Linux software RAID, if it's tuned properly.
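By "tuned properly" I mainly mean aligning XFS to the RAID stripe. A minimal sketch, assuming a made-up md device (`/dev/md0`) with a 64k chunk size and 8 data disks; the commands are only echoed here because mkfs.xfs would destroy the device's contents:

```shell
# Hypothetical sketch: /dev/md0 and the RAID geometry (64k chunk size,
# 8 data disks) are assumptions -- match them to your actual array.
DEV=/dev/md0

# su = stripe unit (RAID chunk size), sw = stripe width (number of data disks)
MKFS_CMD="mkfs.xfs -d su=64k,sw=8 $DEV"

# inode64 spreads inodes across the whole filesystem; noatime cuts write churn
MOUNT_OPTS="inode64,noatime"

# Print rather than run: mkfs.xfs would wipe any data on $DEV.
echo "$MKFS_CMD"
echo "mount -o $MOUNT_OPTS $DEV /export/brick1"
```

Matching su/sw to the array means allocations line up with full stripes instead of straddling them, which avoids read-modify-write cycles on large sequential writes.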
Here is another thing to consider: if you have Gluster properly
configured and good monitoring in place, such as a properly configured
Nagios instance, you could have the monitoring system automatically
remove bricks on failed or failing disks and rebalance the cluster,
especially if you are running a 3-node configuration and using the
third node for quorum. A simple script could also reformat the disk,
re-add the brick, and rebalance the cluster after the disk is
replaced. In such a configuration you wouldn't really need RAID at
all, because all your redundancy would be handled by Gluster.
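A rough sketch of that recovery flow, assuming a hypothetical volume `gv0` with a brick at `server2:/export/sdb1/brick` (both names made up). The gluster commands here are only echoed, not executed, and the exact remove-brick syntax depends on your volume type: on a replicated volume you would also pass the new replica count.

```shell
#!/bin/sh
# Hypothetical sketch of the disk-failure recovery flow described above.
# VOLUME, HOST, and BRICK are assumptions -- adjust to your layout.
VOLUME="gv0"
HOST="server2"
BRICK="/export/sdb1/brick"

# Dry run: collect and print each gluster command instead of running it.
CMDS=""
run() { CMDS="$CMDS
$*"; echo "$*"; }

# 1. Drop the brick on the failed disk. 'force' skips data migration,
#    which is pointless when the disk is already dead.
run gluster volume remove-brick "$VOLUME" "$HOST:$BRICK" force

# 2. Spread the remaining data evenly across the surviving bricks.
run gluster volume rebalance "$VOLUME" start

# --- later, after the dead disk has been replaced and reformatted ---

# 3. Re-add the brick and rebalance again so it takes on its share.
run gluster volume add-brick "$VOLUME" "$HOST:$BRICK"
run gluster volume rebalance "$VOLUME" start
```

A monitoring check (e.g. a Nagios event handler watching SMART status or brick health) could trigger the first half automatically, leaving the re-add half for after a human swaps the disk.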


> -JM
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

