Re: Poor performance with shard


From my understanding of gluster, that is to be expected: instead of
having to stat a single file without sharding, you now have to stat
multiple files when you shard. Remember that gluster is not great at
dealing with "lots" of files, so if you have a single 100GB file/image
stored in gluster and it gets sharded into 512MB pieces, you now have
to stat ~195 files instead of a single file. The more files you have
to stat, the slower gluster is, especially in replica setups, since my
understanding is that each file has to be stat'ed on every brick.
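
To put a number on that, the shard count is just the file size divided
by the shard size, rounded up (sizes below match the example above, with
100 GB taken as 100000 MB in decimal units):

```shell
# Ceiling division: number of 512 MB shards covering a 100 GB file.
file_mb=100000   # 100 GB in MB (decimal), as in the example above
shard_mb=512
echo $(( (file_mb + shard_mb - 1) / shard_mb ))   # prints 196 (~195.3 rounded up)
```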

On the other hand, if you have a single unsharded file and one of your
nodes is rebooted, I/O to the whole 100GB file is blocked while it
heals, so without sharding you have to wait for the entire 100GB to be
re-synced. It is a trade-off you will have to evaluate: single-file
throughput versus faster healing with shards.
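
If you want to measure the trade-off yourself, a rough comparison is to
run the same dd write on a test volume with features.shard toggled on
and off; the mount point and file name below are just placeholders:

```shell
# Hypothetical gluster mount point; adjust to your environment.
MOUNT=/mnt/glusterfs
# oflag=direct bypasses the page cache so the number reflects the
# volume, not local RAM; repeat with features.shard on and off.
dd if=/dev/zero of="$MOUNT/ddtest.bin" bs=1M count=1024 oflag=direct
rm -f "$MOUNT/ddtest.bin"
```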

If you are storing VM images, you may want to look into applying the
gluster settings for VMs:

https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example
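
Those settings can be applied in one shot as an option group (the volume
name "data" is taken from your output below; if your build does not ship
the group file, copy group-virt.example to /var/lib/glusterd/groups/virt
first):

```shell
# Apply the whole virt option group to the volume in one command.
gluster volume set data group virt
# Confirm which options were actually changed:
gluster volume info data
```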

This may help improve performance, but I think you will still see
higher throughput with a single file than with shards, whereas healing
will be faster with sharding, since only the modified shards are
healed. Also be careful: once sharding is enabled and you have sharded
files, disabling it will corrupt your sharded VMs.
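
Given that caveat, it is worth checking the current state before
touching the option (again assuming volume name "data"):

```shell
# Confirm whether sharding is currently enabled; do NOT set it to "off"
# while sharded files exist, or those files will be corrupted.
gluster volume get data features.shard
```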

Diego

On Sun, Sep 3, 2017 at 12:22 PM, Roei G <ganor.roei98@xxxxxxxxx> wrote:
> Hey everyone!
> I have deployed gluster on 3 nodes with 4 SSDs each and 10Gb Ethernet
> connection.
>
> The storage is configured with 3 gluster volumes, every volume has 12 bricks
> (4 bricks on every server, 1 per ssd in the server).
>
> With the 'features.shard' option off, my write speed (using the 'dd'
> command) is approximately 250 MB/s, and with the feature on the write
> speed is around 130 MB/s.
>
> --------- gluster version 3.8.13 --------
>
> Volume name: data
> Number of Bricks: 4 x 3 = 12
> Bricks:
> Brick1: server1:/brick/data1
> Brick2: server1:/brick/data2
> Brick3: server1:/brick/data3
> Brick4: server1:/brick/data4
> Brick5: server2:/brick/data1
> .
> .
> .
> Options Reconfigured:
> performance.strict-o-direct: off
> cluster.nufa: off
> features.shard-block-size: 512MB
> features.shard: on
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: on
> performance.readdir-ahead: on
>
> Any idea on how to improve my performance?
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://lists.gluster.org/mailman/listinfo/gluster-users


