On Wednesday, January 14, 2015, Nick Fisk <nick@xxxxxxxxxx> wrote:
Hi Giuseppe,
I am working on something very similar at the moment. I currently have it running on some test hardware and it seems to be working reasonably well.
I say reasonably as I have had a few instabilities, but these are on the HA side; the LIO and RBD side of things has been rock solid so far. The main problems I have had seem to be around recovering from failure, with resources ending up in an unmanaged state. I'm not currently using fencing, so this may be part of the cause.
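When that happens, clearing the resource's failure history and telling Pacemaker to manage it again usually gets things moving. Roughly, in crm shell (the resource name here is illustrative, not from my actual config):

    # Clear the failure history so Pacemaker retries the resource
    crm resource cleanup p_rbd_lun1
    # Put an unmanaged resource back under cluster control
    crm resource manage p_rbd_lun1
    # Fencing is currently disabled in my cluster; with a STONITH
    # device configured it would be turned on with:
    crm configure property stonith-enabled=true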
A brief description of my configuration:
4 hosts, each with 2 OSDs, also running the monitor role.
3 additional hosts in an HA cluster, which act as iSCSI proxy nodes.
I’m using the IP, RBD, iSCSITarget and iSCSILUN resource agents to provide an HA iSCSI LUN which maps back to an RBD. All the agents for each RBD are in a group, so they follow each other between hosts.
I’m using 1 LUN per target, as I read somewhere that there are stability problems using more than 1 LUN per target.
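To give a concrete idea, the group for one LUN looks roughly like the sketch below in crm shell syntax. All the names, the IP, the IQN and the image path are illustrative rather than my actual config; note the LUN agent's real name is iSCSILogicalUnit, and the ocf:ceph:rbd agent's parameters may differ between versions:

    # Illustrative sketch of one HA iSCSI LUN (one LUN per target)
    primitive p_vip_lun1 ocf:heartbeat:IPaddr2 \
        params ip=192.168.0.100 cidr_netmask=24 \
        op monitor interval=10s
    primitive p_rbd_lun1 ocf:ceph:rbd \
        params pool=rbd name=lun1 user=admin \
        op monitor interval=10s
    primitive p_target_lun1 ocf:heartbeat:iSCSITarget \
        params implementation=lio iqn=iqn.2015-01.com.example:lun1 \
        op monitor interval=10s
    primitive p_lu_lun1 ocf:heartbeat:iSCSILogicalUnit \
        params implementation=lio target_iqn=iqn.2015-01.com.example:lun1 \
            lun=0 path=/dev/rbd/rbd/lun1 \
        op monitor interval=10s
    # The group keeps the IP, RBD mapping, target and LUN on the same
    # node, started in this order and stopped in reverse.
    group g_lun1 p_vip_lun1 p_rbd_lun1 p_target_lun1 p_lu_lun1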
Performance seems OK; I can get about 1.2k random IOPS out of the iSCSI LUN. This seems to be about right for the Ceph cluster size, so I don’t think the LIO part is adding any significant overhead.
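If you want to measure the same thing on your setup, a 4k random-read run with fio against the exported LUN would look something like this (the device path is illustrative; point it at whatever block device the initiator presents):

    # 4k random reads, 60 seconds, queue depth 32, direct I/O,
    # against the iSCSI block device as seen by the initiator
    fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k \
        --ioengine=libaio --iodepth=32 --direct=1 \
        --runtime=60 --time_based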
We should be getting our production hardware shortly, which will have 40 OSDs with journals and an SSD caching tier, so within the next month or so I will have a better idea of running it in a production environment and of the performance of the system.
Hope that helps, if you have any questions, please let me know.
Nick
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Giuseppe Civitella
Sent: 13 January 2015 11:23
To: ceph-users
Subject: [ceph-users] Ceph, LIO, VMWARE anyone?
Hi all,
I'm working on a lab setup regarding Ceph serving RBD images as iSCSI datastores to VMware via a LIO box. Is there anyone who has already done something similar and wants to share some knowledge? Any production deployments? What about LIO's HA and the LUNs' performance?
Thanks
Giuseppe