Re: replica performance and brick size best practice


According to RH, the optimal setup would be:
- Disk size: 3-4TB (faster resync after a failure)
- Disk count: 10-12
- HW RAID: RAID10 is the optimal level for writes, as you can also see in the picture at https://community.hpe.com/t5/servers-systems-the-right/what-are-raid-levels-and-which-are-best-for-you/ba-p/7041151

The full stripe size should be between 1MB and 2MB (preferably staying closer to 1MB).
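For example (my numbers, not a rule): with 12 disks in RAID10 you have 6 data spindles, so a 256KB stripe unit gives a 256KB x 6 = 1.5MB full stripe, which falls in that window. You would then align XFS to the same geometry at mkfs time (sdX is a placeholder for your RAID LUN):

# 256KB stripe unit x 6 data spindles = 1.5MB full stripe
mkfs.xfs -i size=512 -d su=256k,sw=6 /dev/sdX

The '-i size=512' is the inode size Red Hat used to recommend for Gluster bricks, so the extended attributes fit inside the inode.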

I'm not sure of the HW RAID controller's capabilities, but I would also switch the I/O scheduler to 'none' (first-in, first-out while still merging requests). Ensure that you have a battery-backed cache and that the controller's cache ratio leans towards writes (something like 25% read / 75% write).
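Roughly like this (sdX is a placeholder for your RAID LUN; on older kernels without blk-mq the equivalent scheduler is 'noop'):

cat /sys/block/sdX/queue/scheduler           # the active scheduler is shown in []
echo none > /sys/block/sdX/queue/scheduler   # immediate, but lost on reboot

# To make it persistent, something like /etc/udev/rules.d/60-io-scheduler.rules:
ACTION=="add|change", KERNEL=="sdX", ATTR{queue/scheduler}="none"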

Jumbo frames are recommended but not mandatory. Still, they will reduce the number of packets processed by your infrastructure, which is always beneficial.
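Something like this, assuming eth0 is your storage NIC and <peer-ip> is another node (every switch port and host in the path must also allow MTU 9000):

ip link set dev eth0 mtu 9000    # temporary; persist it in your network configuration
ping -M do -s 8972 <peer-ip>     # 8972 bytes of payload + 28 bytes of headers = 9000, verifies the whole path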

Tuned Profile:
You can find the tuned profiles that were usually shipped with Red Hat's Gluster Storage at https://ftp.redhat.com/redhat/linux/enterprise/7Server/en/RHS/SRPMS/redhat-storage-server-3.5.0.0-8.el7rhgs.src.rpm

I will type the contents of the random-io profile here, so please double check it for typos.

# /etc/tuned/rhgs-random-io/tuned.conf:
[main]
include=throughput-performance

[sysctl]
# Flush dirty pages sooner to even out the write bursts
vm.dirty_ratio = 5
vm.dirty_background_ratio = 2

Don't forget to install tuned before that.
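Roughly (the rhgs-* profiles come from the redhat-storage-server package in the SRPM above, or you can recreate the profile from the snippet):

yum install tuned                  # dnf on newer releases
systemctl enable --now tuned
tuned-adm profile rhgs-random-io   # activate the profile
tuned-adm active                   # verify which profile is in use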

For small files, follow the guidelines from https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/small_file_performance_enhancements

Note: Do not use Gluster v9, and update to the latest minor release of your major version (for example, if you use v10 -> update to 10.3). Gluster v10 brought a major improvement for small files, and v9 is out of support now.

For XFS: mount the bricks with 'noatime'. If you use SELinux, use the following mount options:
noatime,context="system_u:object_r:glusterd_brick_t:s0"
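
As an /etc/fstab example (device and mount point are placeholders):

/dev/mapper/vg_bricks-brick1  /gluster/brick1  xfs  noatime,context="system_u:object_r:glusterd_brick_t:s0"  0 0
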
Also, consider setting Gluster's 'cluster.min-free-disk' option to something that makes sense for you (for details, run 'gluster volume set help').
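For example (VOLNAME and the 10% value are just placeholders):

gluster volume set VOLNAME cluster.min-free-disk 10%   # accepts a percentage or an absolute size
gluster volume get VOLNAME cluster.min-free-disk       # verify the value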


Of course, benchmark with the application itself, both before and after every change you make.
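If you also want a synthetic number next to the application results, a small fio job on the mounted volume, repeated before and after each change, is usually enough (path, size and job count below are just placeholders):

fio --name=randrw --directory=/mnt/glustervol --rw=randrw --bs=4k \
    --size=1G --numjobs=4 --runtime=60 --time_based --group_reporting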

Best Regards,
Strahil Nikolov 



On Mon, Nov 14, 2022 at 13:33, beer Ll
<llcfhllml@xxxxxxxxx> wrote:



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
