On 18 October 2015 at 00:17, Vijay Bellur <vbellur@xxxxxxxxxx> wrote:
Krutika has been working on several performance improvements for sharding and the results have been encouraging for virtual machine workloads.
Testing feedback would be very welcome!
I've managed to set up a replica 3 shard test volume on 3.7.5, hosted on virtualised Debian 8.2 servers, so performance is a bit crap :) Rough setup commands are sketched after the node list below.
3 Nodes: gn1, gn2 & gn3
Each node has:
- 1GB RAM
- 1Gb Ethernet
- 512GB disk, hosted on ZFS on an external USB drive :)
- Datastore is shared out via NFS to the main cluster for running VMs
- I have the datastore mounted using glusterfs inside each test node so I can examine the data directly.
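For anyone wanting to reproduce the setup, it was roughly along these lines - the volume name "datastore" and the brick path /tank/brick are just placeholders for whatever you actually use:

    # 3-way replicated volume across the test nodes
    gluster volume create datastore replica 3 \
        gn1:/tank/brick gn2:/tank/brick gn3:/tank/brick
    # turn on sharding; 4MB is the 3.7.x default shard size
    gluster volume set datastore features.shard on
    gluster volume set datastore features.shard-block-size 4MB
    gluster volume start datastore
    # fuse-mount it locally on each node to examine the data directly
    mount -t glusterfs gn1:/datastore /mnt/datastore
    # the main cluster mounts the same volume via gluster's built-in NFS (v3 only)
    mount -t nfs -o vers=3 gn1:/datastore /mnt/datastore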
I've got two VMs running off it, one of them a 65GB (25GB sparse) Windows 7 image. I've been running benchmarks and testing node failures by killing the gluster processes and by killing actual nodes (the failure-test commands are sketched below the results).
- Heal speed is immensely faster than with whole-file VM images: a matter of minutes rather than hours.
- Read performance is quite good. I'll be upgrading my main cluster to jessie soon, which will let me test with real hardware and bonded connections, plus gfapi direct access; then I'll be able to do real benchmarks.
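The failure testing itself is nothing fancy - roughly this on one node at a time, again assuming the volume is named datastore as above:

    # simulate a brick failure by killing the brick process(es) on one node
    pkill glusterfsd
    # run benchmarks in the VMs while degraded, then bring the brick back
    gluster volume start datastore force
    # watch the pending heals drain
    gluster volume heal datastore info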
One Bug:
After the heals completed I shut down the VMs and ran md5sum on the VM image (via the glusterfs mount) on each node. They all matched except for one run on gn3; once I unmounted/remounted the datastore on gn3, the md5sums matched.
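For the record, the check was just this on each node, with the VMs shut down (image path is illustrative):

    md5sum /mnt/datastore/images/win7.img
    # the gn3 mismatch cleared after a remount:
    umount /mnt/datastore
    mount -t glusterfs gn3:/datastore /mnt/datastore
    md5sum /mnt/datastore/images/win7.img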
Questions:
- I'd be interested to know how the shards are organised and accessed. It looks like thousands of 4MB files in the .shard directory, and I'm concerned access times will go in the toilet once many large VM images are stored on the volume.
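For what it's worth, from poking at the bricks the shards appear to be named after the base file's GFID plus a shard index (the first block stays with the file itself). Assuming the glusterfs.gfid.string virtual xattr works the same on a sharded volume, you can count a single image's shards like this (brick path is my placeholder again):

    # get the base file's GFID via the fuse mount
    getfattr -n glusterfs.gfid.string /mnt/datastore/images/win7.img
    # on a brick, shards live flat in .shard as <gfid>.1, <gfid>.2, ...
    ls /tank/brick/.shard | grep -c '^<gfid-from-above>'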
--
Lindsay
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users