Shard Volume testing (3.7.5)

On 18 October 2015 at 00:17, Vijay Bellur <vbellur@xxxxxxxxxx> wrote:
Krutika has been working on several performance improvements for sharding and the results have been encouraging for virtual machine workloads.

Testing feedback would be very welcome!

I've managed to set up a replica 3 shard test volume on 3.7.5, hosted on virtualised Debian 8.2 servers, so performance is a bit crap :)

3 nodes: gn1, hn2 & gn3
Each node has:
- 1GB RAM
- 1GB Ethernet
- 512GB disk hosted on an external USB drive running ZFS :)

- Datastore is shared out via NFS to the main cluster for running a VM
- I have the datastore mounted using glusterfs inside each test node so I can examine the data directly.
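
For reference, the mounts look roughly like this (the mount points and NFS options are just what I'd pick, not gospel):

    # On each test node, fuse-mount the volume locally to inspect the data:
    mount -t glusterfs localhost:/datastore /mnt/datastore

    # On the main cluster, mount the volume via Gluster's built-in NFS (v3):
    mount -t nfs -o vers=3 gn1:/datastore /mnt/vm-store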



I've got two VMs running off it, one of them a 65GB (25GB sparse) Windows 7 image. I've been running benchmarks and testing node failures by killing the cluster processes and by killing actual nodes.
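
The node failure testing is nothing fancy - roughly this, one node at a time (the brick PID is a placeholder, it varies per node):

    # Find and kill the brick process for this node:
    gluster volume status datastore        # lists the brick PIDs
    kill -9 <brick PID>                    # or pkill glusterfsd to take out all local bricks

    # Bring the brick back and let self-heal do its thing:
    gluster volume start datastore force
    gluster volume heal datastore info     # watch the pending-heal entries drain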

- Heal speed is immensely faster, a matter of minutes rather than hours.
- Read performance is quite good
- Write performance is atrocious, but that's not unexpected given the limited resources.
- I'll be upgrading my main cluster to jessie soon and will be able to test with real hardware and bonded connections, plus using gfapi directly (rough qemu invocation below). Then I'll be able to do real benchmarks.
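
For the gfapi testing I'm planning something along these lines (the image name is just an example, and qemu needs to be built with gluster support):

    # Point qemu straight at the image over libgfapi instead of fuse/NFS:
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=gluster://gn1/datastore/win7.img,format=raw,if=virtio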

One Bug:
After heals completed I shut down the VMs and ran an md5sum on the VM image (via the glusterfs mount) on each node. They all matched except for one occasion on gn3. Once I unmounted/remounted the datastore on gn3, the md5sum matched.
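
The check was basically this on each node (the image path is just from my layout):

    md5sum /mnt/datastore/images/win7.img

    # On gn3, where the sum differed, remounting cleared it:
    umount /mnt/datastore
    mount -t glusterfs localhost:/datastore /mnt/datastore
    md5sum /mnt/datastore/images/win7.img   # matches the other nodes again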

One Oddity:
'gluster volume heal datastore info' *always* shows a split-brain on the directory, but it always heals without intervention. Dunno if this is normal or not.
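
For reference, these are the commands I'm checking with - the split-brain-specific listing is probably the better cross-check here:

    gluster volume heal datastore info               # entries pending heal
    gluster volume heal datastore info split-brain   # only genuine split-brain entries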

Questions:
- I'd be interested to know how the shards are organised and accessed - it looks like thousands of 4MB files in the .shard directory, and I'm concerned access times will go in the toilet once many large VM images are stored on the volume.

- Is it worth experimenting with different shard sizes? (See the note below these questions.)

- Anything you'd like me to test?
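
Re the shard-size question: as far as I can tell the tunable is features.shard-block-size, and my understanding is that changing it only affects newly created files, so each size would need a freshly copied VM image:

    # The 4MB files I'm seeing are presumably the default size; try larger shards:
    gluster volume set datastore features.shard-block-size 64MB
    gluster volume info datastore    # confirm it under "Options Reconfigured"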

Thanks,


--
Lindsay
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
