On 10/01/2016 1:44 AM, Kyle Harris wrote:
I can make the change to sharding and then export/import the VMs to
give it a try. So just to be clear, I am using v3.7.6-1. Is that
sufficient? I would rather not have to compile from source and would
probably wait for the next rpms if that is needed.
Speaking not as a dev (I'm not), but as a tester/user: 3.7.6 will do for
testing and usage; that's what I am on. Write performance could be better,
and I believe there are some fixes for that due in 3.7.7.
Also, given the output below, what would you recommend I use for the
shard block size, and furthermore, how do you determine this?
features.shard: on
features.shard-block-size: <size>
Where <size> takes standard unit suffixes, e.g. 64M, 1G, etc. The default
is 4M. Shard size also has some interesting implications for the upcoming
SSD tier volumes (3.8). They are available in 3.7, if you are ok with
regular breakages :)
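
For reference, a minimal sketch of how one might apply this to an existing
volume (the volume name datastore1 is just taken from my setup below, and
64MB is only an example value, not a recommendation):

# enable sharding on the volume
gluster volume set datastore1 features.shard on
# set the shard block size (standard unit suffixes accepted)
gluster volume set datastore1 features.shard-block-size 64MB

As far as I know, a block-size change only affects files created after the
change; existing files keep the shard size they were written with.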
I bench tested a lot and didn't find all that much difference in results
across shard sizes. These are my current settings:
Volume Name: datastore1
Type: Replicate
Volume ID: 1261175d-64e1-48b1-9158-c32802cc09f0
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vnb.proxmox.softlog:/vmdata/datastore1
Brick2: vng.proxmox.softlog:/vmdata/datastore1
Brick3: vna.proxmox.softlog:/vmdata/datastore1
Options Reconfigured:
features.shard: on
features.shard-block-size: 64MB
cluster.self-heal-window-size: 256
server.event-threads: 4
client.event-threads: 4
cluster.quorum-type: auto
cluster.server-quorum-type: server
performance.io-thread-count: 32
performance.cache-refresh-timeout: 4
nfs.disable: on
nfs.addr-namelookup: off
nfs.enable-ino32: off
performance.write-behind: on
performance.strict-write-ordering: on
performance.stat-prefetch: off
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable
network.remote-dio: enable
performance.readdir-ahead: on
performance.write-behind-window-size: 256MB
performance.cache-size: 256MB
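
If it helps, here is a sketch of how the key options above could be
reapplied to another volume with gluster volume set (myvol is just a
placeholder name; the values simply mirror the output above):

# placeholder volume name
VOL=myvol
# sharding with a 64MB block size, as in the output above
gluster volume set $VOL features.shard on
gluster volume set $VOL features.shard-block-size 64MB
# quorum and VM-store tuning from the list above
gluster volume set $VOL cluster.quorum-type auto
gluster volume set $VOL cluster.server-quorum-type server
gluster volume set $VOL network.remote-dio enable
gluster volume set $VOL cluster.eager-lock enable
gluster volume set $VOL performance.stat-prefetch off
gluster volume set $VOL performance.quick-read off
gluster volume set $VOL performance.read-ahead off
gluster volume set $VOL performance.io-cache off
# confirm the reconfigured options
gluster volume info $VOL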
--
Lindsay Mathieson
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users