Re: Latest NFS-Ganesha Gluster Integration docs

On 30 June 2020 at 3:02:32 GMT+03:00, WK <wkmail@xxxxxxxxx> wrote:
>
>On 6/28/2020 8:52 PM, Strahil Nikolov wrote:
>> Last time I did storhaug+NFS-Ganesha I used
>> https://github.com/gluster/storhaug/wiki .
>
>Well, that certainly helps, but since I have no experience with Samba, I
>guess I have to learn about CTDB.
>
>What I see are lots of layers here. Even a simple graphic would help, 
>but I guess I will just have to soldier through it.
>
>
>> I guess you can set up NFS-Ganesha without HA and check the
>> performance before proceeding further.
>
>Yes, I set up a simple NFS-Ganesha single node and have begun to play
>with that using an XFS store. Pretty straightforward.
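
For reference, a single-node export on a local XFS path needs only a
minimal EXPORT block in ganesha.conf - roughly like this (the path and
export id are just examples):

EXPORT {
    Export_Id = 1;                 # any unique id
    Path = "/export/xfs";          # local XFS directory (example path)
    Pseudo = "/xfs";               # NFSv4 pseudo path the clients mount
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = VFS;                # plain local-filesystem backend
    }
}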

You can also consider the legacy NFS server (a.k.a. gNFS), which has to be built from source (RPMs can also be built from that source). Using HAProxy (on another system) as a Pacemaker resource, you can fail over between Gluster replicas while still using NFS. Some people report the built-in gNFS to be quite performant.
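
A rough sketch of that HAProxy/Pacemaker part (addresses, node names and the VIP are made up - adjust to your environment): HAProxy proxies TCP 2049 to whichever Gluster node is healthy, and Pacemaker keeps HAProxy and a floating IP together:

# /etc/haproxy/haproxy.cfg (fragment)
frontend nfs_in
    bind *:2049
    mode tcp
    default_backend gluster_nfs

backend gluster_nfs
    mode tcp
    option tcp-check
    server gluster1 192.0.2.11:2049 check
    server gluster2 192.0.2.12:2049 check backup

# Pacemaker: floating IP + HAProxy, kept on the same node
pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=192.0.2.100 cidr_netmask=24
pcs resource create haproxy systemd:haproxy op monitor interval=10s
pcs constraint colocation add haproxy with nfs_vip INFINITY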

>Next step would be to use the Gluster Storage Driver. Then figure out 
>the HA part of Storhaug/CTDB and how well it can be run in a 
>hyperconverged scenario.
>
>Not exactly like the QuickStart on the Gluster docs though <Grin>
>
Any PRs are welcome :D
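
On the Gluster storage driver step: the export just swaps the FSAL. A minimal sketch, assuming a volume called "gv0" (the volume name and hostname are examples):

EXPORT {
    Export_Id = 2;
    Path = "/gv0";                 # the gluster volume, not a local path
    Pseudo = "/gv0";
    Access_Type = RW;
    FSAL {
        Name = GLUSTER;            # libgfapi-based backend
        Hostname = "localhost";    # any node of the trusted pool
        Volume = "gv0";
    }
}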

>>
>> Have you tuned your I/O scheduler, tuned profile, aligned your PVs,
>> etc.? There is a lot of stuff that can improve your Gluster performance.
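
For completeness, what I had in mind there - the device, profile and alignment values are only examples, check what fits your hardware:

# I/O scheduler ('none' for NVMe; 'mq-deadline' is a common SSD/HDD choice)
echo mq-deadline > /sys/block/sda/queue/scheduler

# tuned profile geared towards virtualization hosts
tuned-adm profile virtual-host

# align the PV to the RAID stripe size when creating it
pvcreate --dataalignment 256K /dev/sdb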
>
>Yes, we have been doing this a while (since Gluster 3.3) and do tuning.
>Again, our Gluster performance isn't 'bad' from our perspective. We are
>just looking to see if there are some noticeable gains to be made with
>NFS vs. the FUSE mount.
>
>I suppose if we hadn't seen so many complaints about FUSE on the mailing
>list we wouldn't have thought much about it <Grin>.
>
>Of course, with lots of small files we have always used MooseFS (since
>1.6), as that is Gluster's weakness. They make a good combination of
>tools.
>
>
>>
>> Also, you can check the settings in /var/lib/glusterd/groups/virt.
>> Those settings are used by oVirt/RHV and are the optimal settings for
>> virtualization workloads.
>
>Yes, we always enable the virt settings and they make a big difference.
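
For anyone who hasn't done it yet: applying the group is a one-liner, and the file itself is just a key=value list (the volume name is an example, and the exact contents vary between Gluster versions - this is only an excerpt):

# apply the whole profile to a volume
gluster volume set myvol group virt

# excerpt of /var/lib/glusterd/groups/virt
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
network.remote-dio=enable
cluster.eager-lock=enable
features.shard=on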
>
>
>> P.S.: Red Hat supports Hyperconverged Infrastructure with 512MB
>> shards, while the default shard size is 64MB. You can test a bigger
>> shard size on another volume.
>>
>Yes, we noticed a while back that there was a discrepancy between the
>Red Hat docs saying bigger shards are better (i.e. 512MB) and the 64MB in
>the virt group. We have played with different settings but didn't really
>notice much of a difference. You get a smaller number of heals, but they
>are bigger and take longer to sync.

As long as you use 'thick' (preallocated) disks for the VMs, it won't matter much.
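
If you want to re-test it, something like this on a fresh volume (the volume name is an example; the shard size only applies to files written after it is set, so set it before putting data on the volume):

gluster volume set testvol features.shard on
gluster volume set testvol features.shard-block-size 512MB
gluster volume get testvol features.shard-block-size   # verify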

>Does anyone know why the two values differ and the reasoning involved?

I have no idea...

>-WK


Best Regards,
Strahil Nikolov