Re: GlusterFS performance for big files...


 



There is a 'virt' group optimized for virtual workloads.

Usually I recommend starting from the ground up, optimizing at all levels:

- I/O scheduler of the bricks (either (mq-)deadline or noop/none)
- CPU C-states
- Tuned profile (swappiness, dirty settings)
- MTU of the gluster network, the bigger the better
- Gluster tunables (the virt group is a good start)
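
For example, a minimal sketch of applying some of these to the setup from this thread (volume 'VMS', bricks on sdc); the interface name eth1 and the chosen tuned profile are only placeholders, so adjust them to your environment:

# apply the predefined 'virt' option group to the volume
gluster volume set VMS group virt

# switch the brick disk to a simpler scheduler (not persistent across reboots)
echo mq-deadline > /sys/block/sdc/queue/scheduler

# pick a throughput-oriented tuned profile
tuned-adm profile throughput-performance

# raise the MTU on the dedicated gluster link (only if both NICs support it)
ip link set dev eth1 mtu 9000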


If your gluster nodes are actually in the cloud, it is recommended (at least on AWS) to use a stripe across 8 virtual disks for each brick.
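
A rough sketch of such a stripe, assuming eight virtual disks attached as /dev/xvdb through /dev/xvdi (device names and mount point are placeholders for your instances):

# RAID0 stripe across the 8 virtual disks
mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/xvd[b-i]

# XFS with 512-byte inodes is the usual choice for gluster bricks
mkfs.xfs -i size=512 /dev/md0
mount /dev/md0 /DATA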

Keep in mind that the default shard size on RH Gluster Storage is 512MB, while the default on the community edition is 64MB.
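
You can check the current value and, ideally before any VM images are created on the volume, set it explicitly; changing features.shard-block-size only affects newly created files:

# check the current shard block size
gluster volume get VMS features.shard-block-size

# enable sharding and set a 64MB block size for new files
gluster volume set VMS features.shard on
gluster volume set VMS features.shard-block-size 64MB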

Best Regards,
Strahil Nikolov

On 18 August 2020 at 16:47:01 GMT+03:00, Gilberto Nunes <gilberto.nunes32@xxxxxxxxx> wrote:
>>> What's your workload?
>I have 6 KVM VMs which have Windows and Linux installed on them.
>
>>> Read?
>>> Write?
>iostat (I am using sdc as the main storage)
>avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           9.15    0.00    1.25    1.38    0.00   88.22
>
>Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s   %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
>sdc              0.00    1.00      0.00      1.50     0.00     0.00    0.00   0.00    0.00    0.00   0.00     0.00     1.50
>
>
>>> sequential? random?
>sequential
>>> many files?
>6 files, sized 500G, 200G, 200G, 250G, 200G and 100G.
>>> With more bricks and nodes, you should probably use sharding.
>For now I have only two bricks/nodes... a plan for more is out of the
>question right now!
>
>>> What are your expectations, btw?
>
>I have run many environments with Proxmox Virtual Environment, which uses
>QEMU (not virt) and LXC, but I mostly use KVM (QEMU) virtual machines.
>My goal is to use GlusterFS since I think it is less demanding on resources
>such as memory, CPU and NIC, compared to ZFS or Ceph.
>
>
>---
>Gilberto Nunes Ferreira
>
>(47) 3025-5907
>(47) 99676-7530 - Whatsapp / Telegram
>
>Skype: gilberto.nunes36
>
>
>
>
>
>On Tue, 18 Aug 2020 at 10:29, sankarshan <
>sankarshan.mukhopadhyay@xxxxxxxxx> wrote:
>
>> On Tue, 18 Aug 2020 at 18:50, Yaniv Kaul <ykaul@xxxxxxxxxx> wrote:
>> >
>> >
>> >
>> > On Tue, Aug 18, 2020 at 3:57 PM Gilberto Nunes <
>> gilberto.nunes32@xxxxxxxxx> wrote:
>> >>
>> >> Hi friends...
>> >>
>> >> I have a 2-node GlusterFS setup, which has the following configuration:
>> >> gluster vol info
>> >>
>>
>> I'd be interested in the chosen configuration for this deployment,
>> the two-node setup. Was there a specific requirement which led to this?
>>
>> >> Volume Name: VMS
>> >> Type: Replicate
>> >> Volume ID: a4ec9cfb-1bba-405c-b249-8bd5467e0b91
>> >> Status: Started
>> >> Snapshot Count: 0
>> >> Number of Bricks: 1 x 2 = 2
>> >> Transport-type: tcp
>> >> Bricks:
>> >> Brick1: server02:/DATA/vms
>> >> Brick2: server01:/DATA/vms
>> >> Options Reconfigured:
>> >> performance.read-ahead: off
>> >> performance.io-cache: on
>> >> performance.cache-refresh-timeout: 1
>> >> performance.cache-size: 1073741824
>> >> performance.io-thread-count: 64
>> >> performance.write-behind-window-size: 64MB
>> >> cluster.granular-entry-heal: enable
>> >> cluster.self-heal-daemon: enable
>> >> performance.client-io-threads: on
>> >> cluster.data-self-heal-algorithm: full
>> >> cluster.favorite-child-policy: mtime
>> >> network.ping-timeout: 2
>> >> cluster.quorum-count: 1
>> >> cluster.quorum-reads: false
>> >> cluster.heal-timeout: 20
>> >> storage.fips-mode-rchecksum: on
>> >> transport.address-family: inet
>> >> nfs.disable: on
>> >>
>> >> The drives are SSD and SAS.
>> >> The network connection between the servers is a dedicated 1Gb link
>> >> (no switch!).
>> >
>> >
>> > You can't get good performance on 1Gb.
>> >>
>> >> The files are 500G, 200G, 200G, 250G, 200G and 100G in size.
>> >>
>> >> Performance so far is OK...
>> >
>> >
>> > What's your workload? Read? Write? sequential? random? many files?
>> > With more bricks and nodes, you should probably use sharding.
>> >
>> > What are your expectations, btw?
>> > Y.
>> >
>> >>
>> >> Any other advice which could point me, let me know!
>> >>
>> >> Thanks
>> >>
>> >>
>> >>
>> >> ---
>> >> Gilberto Nunes Ferreira
>> >>
>>
>>
>>
>> --
>> sankarshan mukhopadhyay
>> <https://about.me/sankarshan.mukhopadhyay>
>>
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users



