On Wed, Jan 26, 2022, 7:02 AM <gluster-users-request@xxxxxxxxxxx> wrote:
Today's Topics:
1. Re: Is such level of performance degradation to be expected?
(Strahil Nikolov)
----------------------------------------------------------------------
Message: 1
Date: Wed, 26 Jan 2022 06:01:03 +0000 (UTC)
From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
To: mygluster22@xxxxxx, gluster-users@xxxxxxxxxxx
Subject: Re: Is such level of performance degradation
to be expected?
Message-ID: <1298200402.1061411.1643176863994@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset="utf-8"
Well,
the idea of mentioning all of this is to point out that you missed the basics of setting up GlusterFS, and you should start from scratch.
Synthetic benchmarks are of limited use, as you need to test with your real workload. When you don't get the performance you need, the next step is profiling and tuning, and profiling a synthetic workload tells you little.
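A rough sketch of the profiling step mentioned above, using Gluster's built-in volume profiler (the volume name "myvol" is a placeholder for your own):

```shell
# Enable per-brick latency and FOP statistics on the volume
gluster volume profile myvol start

# ... run the real application workload for a representative period ...

# Dump cumulative and interval stats, then turn profiling back off
gluster volume profile myvol info
gluster volume profile myvol stop
```

The `info` output breaks down latency per file operation (LOOKUP, WRITE, FSYNC, ...) per brick, which is what makes tuning decisions possible.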
I guess you didn't notice that you can control the number of server and client threads. Too few and performance will be low; too many and lock contention occurs. Sharing the volume options would have helped ;)
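For reference, the thread counts referred to here are the event-thread volume options; a minimal sketch (the volume name and the value 4 are placeholders, not recommendations):

```shell
# Inspect the current settings
gluster volume get myvol server.event-threads
gluster volume get myvol client.event-threads

# Raise them, e.g. to match the workload's concurrency
gluster volume set myvol server.event-threads 4
gluster volume set myvol client.event-threads 4
```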
The sysctl dirty settings are important, as all writes go "dirty" before they are flushed to disk. Setting a lower threshold for starting to flush will reduce potential issues.
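A minimal sketch of lowering those thresholds with the byte-based variants (the values are purely illustrative; the right ones depend on RAM size and workload):

```shell
# Start background writeback once 64 MiB of pages are dirty
sysctl -w vm.dirty_background_bytes=67108864

# Throttle writers once 256 MiB of pages are dirty
sysctl -w vm.dirty_bytes=268435456
```

Persist the chosen values in /etc/sysctl.d/ once they prove out; note that setting the `_bytes` variants zeroes the corresponding `_ratio` ones.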
Also, don't expect miracles while you use the FUSE client. If you seek higher performance (after tuning everything else), you can use libgfapi (NFS-Ganesha, for example, does).
Best Regards,
Strahil Nikolov
On Mon, Jan 24, 2022 at 15:30, Sam <mygluster22@xxxxxx> wrote:

Thanks for your response, Strahil.
> Usually synthetic benchmarks do not show anything, because gluster has to be tuned to your real workload and not to a synth.
I understand that they do not paint the real picture. But running the same benchmark against a set of file systems on the same server should yield results that can be compared?
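A sketch of that kind of like-for-like comparison with fio, running the identical job against the brick filesystem and the FUSE mount so only the target directory differs (the paths, sizes, and job parameters are placeholders):

```shell
# Same fio job, two targets: XFS brick directly vs. the Gluster FUSE mount
for dir in /data/brick1/fiotest /mnt/glustervol/fiotest; do
  echo "=== $dir ==="
  fio --name=randrw --directory="$dir" --rw=randrw --direct=1 \
      --bs=4k --size=1g --numjobs=4 --iodepth=16 --ioengine=libaio \
      --group_reporting
done
```

Comparing the IOPS lines of the two runs isolates the overhead Gluster adds on top of the local filesystem, though it still says nothing about behavior under the real workload.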
> Also, RH recommends disks of 3-4TB each in a HW raid of 10-12 disks with a stripe size between 1M and 2M.
> Next, you need to ensure that hardware alignment is properly done.

Gluster isn't interacting with the underlying RAID device here, so that shouldn't matter. If the XFS layer just below Gluster is giving me 3.5 GB/s random reads and writes (--rw=randrw --direct=1), why is Gluster above it struggling at 130 MB/s on the same RAID setup? That is 27 times slower.

[Manhong replied inline:]
If it is a hard-drive RAID, random IO cannot be this fast.
BTW, for random IO you might want to look at IOPS, as IO size doesn't matter much.
Best,
Manhong
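As a quick sanity check on the units, converting throughput to IOPS is simple arithmetic; a sketch assuming a 4 KiB block size (MB taken as 10^6 bytes):

```shell
# 130 MB/s of 4 KiB random IO expressed as IOPS:
echo $((130 * 1000 * 1000 / 4096))   # prints 31738
```

So 130 MB/s at 4 KiB is on the order of 32k IOPS, which is why Manhong notes that a plain hard-drive RAID (typically a few hundred IOPS per spindle) could not deliver the 3.5 GB/s figure for truly random IO; some of it was likely served from cache.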
I understand that a Gluster volume may perform better when its bricks are distributed across different nodes, but the fact that its performance penalty, compared with the file system it resides on, is so high doesn't inspire much confidence.
I may be wrong here, but system settings, cache settings, RAID cache, etc. shouldn't come into play, as the parent file system performs perfectly well with the default settings.
- Sam
------------------------------
End of Gluster-users Digest, Vol 165, Issue 13
**********************************************