Re: Gluster on ZFS with Compression

Hello Lindsay,

From personal experience: a two-node volume can get you into trouble when one of the nodes crashes or goes down unexpectedly. At the very least, you should have an arbiter brick (arbiter volumes were introduced in Gluster 3.7) on a separate physical node, so there is a third vote to prevent split-brain.
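For reference, a replica-2 volume with an arbiter is created with "replica 3 arbiter 1": two full data bricks plus one metadata-only brick. A sketch, with hypothetical hostnames and brick paths:

```shell
# node1 and node2 hold full copies of the data; arbiter1 stores only
# file metadata, enough to break ties during self-heal (Gluster >= 3.7).
gluster volume create vmstore replica 3 arbiter 1 \
    node1:/data/brick1/vmstore \
    node2:/data/brick1/vmstore \
    arbiter1:/data/brick1/vmstore
gluster volume start vmstore
```

The arbiter brick needs very little disk space since it holds no file data, so a small machine is fine for that role.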

We are running oVirt VMs on top of a two-node Gluster cluster, and a few months ago I ended up transferring several terabytes from one node to the other because that was the fastest way to resolve the split-brain issues after Gluster crashed on one of the nodes. In effect, the second node gave us no redundancy: the VM images in split-brain were not available for writes.

I don't think 4 GB of RAM is enough, especially if you have a large L2ARC: every L2ARC entry needs a header in ARC as well, which always lives in RAM. RAM is relatively cheap nowadays, so go for at least 16 or 32 GB.
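A back-of-envelope sketch of that overhead. The per-record header size here is an assumption (roughly 70 bytes per record in current OpenZFS; older releases were closer to 180), as is the 8 KiB average record size, but the shape of the calculation holds:

```python
def l2arc_ram_overhead_gb(l2arc_bytes, avg_record_bytes=8 * 1024,
                          header_bytes=70):
    """Rough RAM consumed by ARC headers that reference L2ARC entries.

    header_bytes is an assumption (~70 B/record in current OpenZFS,
    ~180 B in older releases). Smaller records mean more headers, so a
    metadata-heavy L2ARC costs disproportionately more RAM.
    """
    records = l2arc_bytes / avg_record_bytes
    return records * header_bytes / 1e9

# A 400 GB L2ARC of 8 KiB records needs roughly 3.4 GB of RAM just for
# headers -- most of a 4 GB system before the ARC caches anything itself.
print(round(l2arc_ram_overhead_gb(400e9), 1))
```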

You should also count your spindles: to get decent disk I/O, the number of VMs you run should not greatly exceed the number of spindles.

On 30 September 2015 at 07:00, Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx> wrote:
I'm revisiting Gluster for the purpose of hosting virtual machine images (KVM). I was considering the following configuration:

2 Nodes
- 1 Brick per node (replication = 2)
- 2 × 1 GbE, LACP-bonded
- Bricks hosted on ZFS
- VM Images accessed via Block driver (gfapi)

ZFS Config:
- RAID 10 (striped mirrors)
- SSD SLOG and L2ARC
- 4 GB RAM
- Compression (lz4)
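A sketch of that pool layout, with hypothetical device names:

```shell
# Striped mirrors ("RAID 10"), a mirrored SSD SLOG, and an L2ARC cache
# device; device names are placeholders for this example.
zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    log mirror /dev/nvme0n1p1 /dev/nvme1n1p1 \
    cache /dev/nvme0n1p2
zfs set compression=lz4 tank
```

Note the SLOG is mirrored here because losing an unmirrored log device during a crash can lose recent synchronous writes, while the L2ARC is safe to leave unmirrored since it only holds cached copies.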

Does that seem like a sane layout?

Question: With the gfapi driver, does the VM image appear as a file on the host (ZFS) file system?


Background: I currently have our VMs hosted on Ceph with a similar config to the above, minus ZFS. I've found that performance for such a small setup is terrible, the maintenance overhead is high, and when a drive drops out, performance gets *really* bad. Last time I checked, Gluster was much slower than Ceph at healing large files; I'm hoping that has improved :)

--
Lindsay

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



--
Tiemen Ruiten
Systems Engineer
R&D Media
