I don't have many of the details (our engineering group handled most of the testing), but we currently have 10 Dell PowerEdge R720xd systems, each with 24 600GB 10k SAS OSDs (each system has a RAID controller with 2GB NVRAM; in testing, performance was better with this than with 6 SSD drives for journals). The cluster is configured with public/private networks, both 10GbE. The NAS systems (there are 2, in active/passive mode) are connected to the 10GbE public network along with the VMware hypervisor nodes.
Performance is acceptable (nothing earth-shattering, though latency can be a concern during peak I/O periods, particularly backups), but we have a relatively small VMware environment, primarily for legacy application systems that either aren't supported on, or we're afraid to move to, our larger private cloud infrastructure (which also uses Ceph, but with direct access via QEMU+KVM).
The iSCSI testing was about 2 years ago; I believe it was done against Cuttlefish, and we were using tgtd for the target. I'm sure there have been enhancements in both stability and performance since then; we've just not gotten around to evaluating or changing it, as what we have is working well for us. We have mixed workloads, but generally hover around 500-800 active IOPS during the day, with peaks to 2-3k during off-hours maintenance windows. We've been running this setup for about 1.5 years with no major issues.
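For anyone wanting to try the same approach, the RBD-over-NFS gateway boils down to something like the sketch below. This is a minimal outline, not our actual configuration; the pool, image, mount point, and export options are placeholders you would adjust for your environment.

# Create an RBD image to back a VMware datastore (pool/image names are placeholders)
rbd create vmware-ds1 --pool rbd --size 2048000

# On the NFS gateway: map the image, put a filesystem on it, and mount it
rbd map rbd/vmware-ds1
mkfs.xfs /dev/rbd/rbd/vmware-ds1
mkdir -p /exports/vmware-ds1
mount /dev/rbd/rbd/vmware-ds1 /exports/vmware-ds1

# /etc/exports -- ESXi mounts NFS as root, so no_root_squash is generally needed, e.g.:
#   /exports/vmware-ds1  192.168.10.0/24(rw,sync,no_root_squash)
exportfs -ra

ESXi then mounts the export as an NFS datastore; with an active/passive pair like ours, the NFS service, its IP, and the mapped RBD just need to fail over together.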
From: "Nikhil Mitra (nikmitra)" <nikmitra@xxxxxxxxx>
To: "Bill Campbell" <bcampbell@xxxxxxxxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Sent: Monday, July 20, 2015 3:05:25 PM
Subject: Re: CEPH RBD with ESXi
Hi Bill,
Would you be kind enough to share what your setup looks like today, as we are planning to back ESXi with CEPH storage? When you tested iSCSI, what issues did you notice? What version of CEPH were you running then? What iSCSI software did you use for the setup?
Regards,
Nikhil Mitra
From: "Campbell, Bill" <bcampbell@xxxxxxxxxxxxxxxxxxxx>
Reply-To: "Campbell, Bill" <bcampbell@xxxxxxxxxxxxxxxxxxxx>
Date: Monday, July 20, 2015 at 11:52 AM
To: Nikhil Mitra <nikmitra@xxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: CEPH RBD with ESXi
Reply-To: "Campbell, Bill" <bcampbell@xxxxxxxxxxxxxxxxxxxx>
Date: Monday, July 20, 2015 at 11:52 AM
To: Nikhil Mitra <nikmitra@xxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: CEPH RBD with ESXi
We use VMware with Ceph, but we don't use RBD directly (we have an NFS server which has RBD volumes exported as datastores to VMware). We did attempt iSCSI with RBD to connect to VMware but ran into stability issues (which could have been the target software we were using); we have found NFS to be pretty reliable.
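For reference, that testing used tgt as the target (as mentioned above); a rough outline of the kind of setup we tried is below. This is from memory, so the exact flags, and the rbd backing-store type in particular (which requires tgt built with Ceph/RBD support), should be treated as assumptions to verify against your tgt version.

# Rough outline of an RBD-backed tgt target (IQN, pool/image, and LUN numbering are illustrative)
tgtadm --lld iscsi --op new --mode target --tid 1 \
       --targetname iqn.2015-07.com.example:rbd-vmware
# bs_rbd is only available if tgt was built with Ceph/RBD support
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       --bstype rbd --backing-store rbd/vmware-lun0
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL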
From: "Nikhil Mitra (nikmitra)" <nikmitra@xxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Monday, July 20, 2015 2:07:13 PM
Subject: CEPH RBD with ESXi
Hi,
Has anyone implemented CEPH RBD with the VMware ESXi hypervisor? Just looking to use it as a native VMFS datastore to host VMDKs. Please let me know if there are any documents out there that might point me in the right direction to get started on this.
Thank you.
Regards,
Nikhil Mitra
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com