Re: self healing with sharding

On 8/07/2016 9:40 PM, Gandalf Corvotempesta wrote:
How did you measure the performance? I would like to test in the same
way, so that results are comparable.

Not particularly scientific. I have four main tests I run:

1. CrystalDiskMark in a Windows VM. This lets me see IOPS as experienced by the VM. I'm suspicious of standard disk benchmarks though; they don't really reflect day-to-day usage. A rough command-line equivalent is sketched after this list.

2. The build server for our enterprise product, a fairly large command-line build. Real-world usage that exercises random reads/writes fairly well.

3. Starting up and running standard applications - Eclipse, Office 365, Outlook, etc. More subjective, but that does matter.

4. Multiple simultaneous VM starts, a good stress test.
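
For anyone wanting to reproduce the random-I/O side of this without a Windows VM, an fio run along these lines should give comparable IOPS numbers. The job parameters below are just my assumption of a reasonable starting point, not what CrystalDiskMark issues internally:

  # 70/30 random read/write, 4K blocks, queue depth 32
  fio --name=randrw-test --ioengine=libaio --direct=1 \
      --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 \
      --size=1G --runtime=60 --time_based --group_reporting

Run it against the gluster mount (or inside a VM) and compare the reported read/write IOPS across runs.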


Which network/hardware/server topology are you using?

3 compute servers - combined VM hosts and gluster nodes, serving a replica 3 gluster volume.
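
For reference, a replica 3 volume across those three nodes would be created along these lines. The volume name and brick paths are made up for illustration; features.shard is the sharding option this thread is about:

  gluster volume create datastore replica 3 \
      vna:/tank/bricks/datastore \
      vnb:/tank/bricks/datastore \
      vng:/tank/bricks/datastore
  gluster volume set datastore features.shard on
  gluster volume start datastore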

VNA:
- Dual Xeon E5-2660 2.2GHz
- 64GB ECC RAM
- 2 x 1Gb bond
- 4 x 3TB WD Red in ZFS RAID10

VNB, VNG:
- Xeon E5-2620 2.0GHz
- 64GB RAM
- 3 x 1Gb bond
- 4 x 3TB WD Red in ZFS RAID10
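
The ZFS RAID10 on each node is just two striped mirror vdevs; roughly, with hypothetical pool and device names:

  # two mirrored pairs, striped together (ZFS equivalent of RAID10)
  zpool create tank \
      mirror /dev/sda /dev/sdb \
      mirror /dev/sdc /dev/sdd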

All bonds are LACP balance-tcp with a dedicated switch. VNA is supposed to have 3 x 1Gb as well, but we had driver problems with the third card and I haven't got around to fixing it :(

Internal & external traffic share the bond. External traffic is minimal.
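
balance-tcp is the Open vSwitch bond mode, so each node's bond is set up along these lines (bridge, bond, and NIC names here are placeholders):

  # three-NIC LACP bond hashed per TCP flow
  ovs-vsctl add-bond vmbr0 bond0 eth0 eth1 eth2 \
      lacp=active bond_mode=balance-tcp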


--
Lindsay Mathieson

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users


