We did some load testing on our setup, and the glusterfs
configuration we had (four nodes, distribute/replicate) couldn't
handle it. We didn't spend much time trying to optimize the
setup, because we aren't planning to use glusterfs at this time
due to its inability to handle writes reliably when nodes are
down or restarted. If 20% of your I/O operations are writes, you will
almost certainly get file corruption/split-brain the first time you
reboot one of your servers for maintenance.
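For reference, the kind of layout we tested looks roughly like the
client-side volfile sketch below (GlusterFS 2.x translator syntax:
two replicate pairs aggregated by distribute). The hostnames node1-node4
and the export name "brick" are illustrative, not our actual config:

```
# Hypothetical GlusterFS 2.x client volfile: four nodes,
# two replicated pairs, distributed across the pairs.
volume node1
  type protocol/client
  option transport-type tcp
  option remote-host node1        # illustrative hostname
  option remote-subvolume brick   # illustrative export name
end-volume

volume node2
  type protocol/client
  option transport-type tcp
  option remote-host node2
  option remote-subvolume brick
end-volume

volume node3
  type protocol/client
  option transport-type tcp
  option remote-host node3
  option remote-subvolume brick
end-volume

volume node4
  type protocol/client
  option transport-type tcp
  option remote-host node4
  option remote-subvolume brick
end-volume

# Each pair mirrors its files.
volume repl1
  type cluster/replicate
  subvolumes node1 node2
end-volume

volume repl2
  type cluster/replicate
  subvolumes node3 node4
end-volume

# Files are distributed across the two mirrored pairs.
volume dist
  type cluster/distribute
  subvolumes repl1 repl2
end-volume
```

The failure mode described above shows up in this layout when one member
of a replicate pair goes down while files are open for writing.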
I would recommend waiting on the sidelines until 2.1 comes out.
Release 2.1 is when the devs say they are implementing replication of
open files, which should hopefully solve a lot of the problems people
are having. Oddly enough, this change isn't listed on the roadmap.
We will be re-evaluating glusterfs at that point and hope that it will
be a more viable solution then.
If you can set up your system in a read-only mode, where the only time
a file is ever opened for writing is when it is being copied to the
cluster, then glusterfs might be something that is ready for you now.
Regards,
Brian
On Sep 17, 2009, at 2:43 AM, shellcode wrote:
Greetings,
My target:
Storage: one storage server for now (in the future, two in stripe
plus mirroring onto another two in stripe, a "glusterfs raid 10" I
mean), running Solaris 10 (ZFS);
Nodes: 4 webservers (Linux), mounting one single volume (/webapp for
example) read/write from storage, apache+php;
Type of load:
Massive reads, ~80%;
and ~20% writes, generally to _unique_ files (user sessions for
example, pictures, etc.), plus some concurrent writes from php scripts,
but I think only ~5-10%.
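The future "glusterfs raid 10" layout I have in mind would, I think,
look roughly like the sketch below (GlusterFS 2.x translator syntax):
stripe on top of two mirrored pairs. The names e1-e4, mirror1/mirror2
and stripe0 are only illustrative:

```
# Hypothetical "raid 10" volfile fragment: e1-e4 would be
# protocol/client volumes, one per storage server (defined elsewhere).
volume mirror1
  type cluster/replicate
  subvolumes e1 e2
end-volume

volume mirror2
  type cluster/replicate
  subvolumes e3 e4
end-volume

# Stripe file contents across the two mirrored pairs.
volume stripe0
  type cluster/stripe
  subvolumes mirror1 mirror2
end-volume
```

Would such a layout be sane for this workload?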
My question: is glusterfs ready for this (this type of load, I mean)?
I have deployed test servers and everything works, but right now I
can't generate heavy load for testing. Does anyone have positive
experience with a similar config?
WBR
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel