Hello Lindsay,
From personal experience: a two-node volume can get you into trouble when one of the nodes goes down unexpectedly or crashes. At the very least, you should have an arbiter volume (introduced in Gluster 3.7) with the arbiter brick on a separate physical node.
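As a minimal sketch (the hostnames node1/node2/arbiter1 and the brick path /data/brick are placeholders for your own setup), creating such a volume on Gluster >= 3.7 looks like this:

    gluster volume create gv0 replica 3 arbiter 1 \
        node1:/data/brick node2:/data/brick arbiter1:/data/brick

The arbiter brick stores only metadata, not file data, so it can live on a small, slow disk; the point is having a third vote so quorum holds when one data node dies.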
We are running oVirt VMs on top of a two-node Gluster cluster. A few months ago, after Gluster crashed on one of the nodes, I ended up transferring several terabytes from one node to the other because that was the fastest way to resolve the split-brain issues. In effect, the second node gave us no redundancy at all, because the VM images in split-brain were not available for writes.
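If it happens again: since 3.7 you can at least inspect and resolve split-brain from the Gluster CLI instead of copying everything over. A rough sketch, with gv0, the brick and the image path as placeholders:

    gluster volume heal gv0 info split-brain
    gluster volume heal gv0 split-brain source-brick \
        node1:/data/brick /images/vm-disk-01.qcow2

The second command marks node1's copy of that file as the good one and heals the other replica from it; make sure you pick the brick that actually holds the sane copy.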
I don't think 4 GB is enough RAM, especially if you have a large L2ARC: every L2ARC entry needs a header in the ARC as well, which always lives in RAM. RAM is relatively cheap nowadays, so go for at least 16 or 32 GB.
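On ZFS on Linux you can check what the L2ARC headers currently cost you, and do a back-of-envelope estimate; the ~70 bytes per header below is an assumption that differs between ZFS versions (older releases used quite a bit more):

    # ARC size and the share eaten by L2ARC headers
    grep -E '^(size|l2_size|l2_hdr_size)' /proc/spl/kstat/zfs/arcstats

    # rough estimate: 200 GiB L2ARC / 16 KiB avg block * 70 B/header
    echo $(( 200 * 1024 * 1024 * 1024 / (16 * 1024) * 70 / 1024 / 1024 ))  # ~875 MiB

With small VM-image blocks the average block size drops and the header overhead grows accordingly.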
You should also count the number of spindles you have and make sure the number of VMs you're running doesn't exceed it by much, or disk IO performance will suffer.
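Back-of-envelope, with assumed numbers for illustration: a 7200 rpm disk delivers on the order of 100 random IOPS, so a 4-disk RAID 10 gives you roughly 400 read / 200 write IOPS in total. Spread over 10 VMs that is about 20 write IOPS each, which busy guests will definitely notice.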
On 30 September 2015 at 07:00, Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx> wrote:
I'm revisiting Gluster for the purpose of hosting Virtual Machine images (KVM). I was considering the following configuration:

2 Nodes
- 2 * 1GB Eth, LACP Bonded
- Bricks hosted on ZFS
- VM Images accessed via Block driver (gfapi)
- 1 Brick per node (replication = 2)

ZFS Config:
- Raid 10
- SSD SLOG and L2ARC
- Compression (lz4)
- 4 GB RAM

Does that seem like a sane layout?

Question: With the gfapi driver, does the vm image appear as a file on the host (zfs) file system?

Background: I currently have our VM's hosted on Ceph using a similar config as above, minus zfs. I've found that the performance for such a small setup is terrible, the maintenance headache is high and when a drive drops out, the performance gets *really* bad. Last time I checked, gluster was much slower at healing large files than ceph; I'm hoping that has improved :)
--
Lindsay
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
Tiemen Ruiten
Systems Engineer
R&D Media