Re: Settings for VM hosting

Hi,

I set up the storage for my nodes just a few weeks ago (also replica 3,
but as a distributed-replicated volume with some more nodes), based on
the "virt group" as recommended ... and here is mine:

cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: diff
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
cluster.granular-entry-heal: enable

I only changed cluster.data-self-heal-algorithm: CPU is not much of a
limiting factor on my nodes, so I chose to spend CPU (diff) rather than
bandwidth (full), based on my understanding of the docs.
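
In case it helps, this is roughly how such a profile can be applied and
then tweaked (just a sketch; "myvol" is a placeholder volume name, and
the "virt" group file ships with the packages, usually under
/var/lib/glusterd/groups/virt):

  # apply the whole "virt" profile in one go
  gluster volume set myvol group virt
  # override the single option I changed
  gluster volume set myvol cluster.data-self-heal-algorithm diff
  # check what ended up configured
  gluster volume info myvol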

I have some more nodes, so sharding will distribute the data better
between them.
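
To illustrate (rough sketch, paths are just examples): with
features.shard on, everything beyond the first shard-block-size of an
image is stored as separate <gfid>.<n> files under the hidden .shard
directory on the bricks, so with more distribute subvolumes those
pieces can land on different replica sets instead of one node holding
the whole image:

  # on any brick of the volume (brick path is an example)
  ls /gluster/brick1/.shard | wc -l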

Ingo


On 18.04.19 at 15:13, Martin Toth wrote:
> Hi,
> 
> I am curious about your setup and settings too. I have exactly the same setup and use case.
> 
> - Why do you use sharding on replica 3? Do you have various sizes of bricks (disks) per node?
> 
> Wonder if someone will share settings for this setup.
> 
> BR!
> 
>> On 18 Apr 2019, at 09:27, lemonnierk@xxxxxxxxx wrote:
>>
>> Hi,
>>
>> We've been using the same settings, found in an old email here, since
>> v3.7 of Gluster for our VM hosting volumes. They've been working fine,
>> but since we've just installed a v6 for testing, I figured there might
>> be new settings I should be aware of.
>>
>> So for access through libgfapi (qemu), for VM hard drives, are these
>> settings still optimal and recommended?
>>
>> Volume Name: glusterfs
>> Type: Replicate
>> Volume ID: b28347ff-2c27-44e0-bc7d-c1c017df7cd1
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ips1adm.X:/mnt/glusterfs/brick
>> Brick2: ips2adm.X:/mnt/glusterfs/brick
>> Brick3: ips3adm.X:/mnt/glusterfs/brick
>> Options Reconfigured:
>> performance.readdir-ahead: on
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> network.remote-dio: enable
>> cluster.eager-lock: enable
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> features.shard: on
>> features.shard-block-size: 64MB
>> cluster.data-self-heal-algorithm: full
>> network.ping-timeout: 30
>> diagnostics.count-fop-hits: on
>> diagnostics.latency-measurement: on
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>>
>> Thanks!
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


