Re: ceph + vmware


 




Hi Mike,

Thanks for the update on the RHCS iSCSI target.

Will the RHCS 2.1 iSCSI target be compatible with VMware ESXi clients? (Or is it too early to say/announce?)

Knowing that an HA iSCSI target was on the roadmap, we chose iSCSI over NFS, so we'll just have to remap the RBDs to RHCS targets when they become available.

So we're currently running:

- 2 LIO iSCSI targets exporting the same RBD images. Each iSCSI target has all VAAI primitives enabled and runs the same configuration.
- RBD images are mapped on each target using the kernel client (so no RBD cache).
- 6 ESXi hosts. Each ESXi host can access the same LUNs through both targets, but in a failover manner, so each ESXi host always accesses a given LUN through only one target at a time.
- LUNs are VMFS datastores, and VAAI primitives are enabled client-side (except UNMAP, as per the default).
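For reference, a setup like the one above can be sketched with standard rbd and targetcli commands. This is only a minimal illustration of the approach described, not our exact configuration; the pool, image, and IQN names are made up:

```shell
# On each target node, map the image with the kernel RBD client
# (krbd, so no RBD cache); prints the mapped device, e.g. /dev/rbd0.
# Pool "rbd" and image "vmware-lun0" are hypothetical names.
rbd map rbd/vmware-lun0

# Export the mapped block device through LIO with targetcli.
# The target and initiator IQNs below are placeholders.
targetcli /backstores/block create name=vmware-lun0 dev=/dev/rbd0
targetcli /iscsi create iqn.2016-07.com.example:target1
targetcli /iscsi/iqn.2016-07.com.example:target1/tpg1/luns \
    create /backstores/block/vmware-lun0
targetcli /iscsi/iqn.2016-07.com.example:target1/tpg1/acls \
    create iqn.1998-01.com.vmware:esxi-host1
targetcli saveconfig
```

The same commands are run on the second target node against the same RBD image, which is what gives each ESXi host two paths to the same LUN.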

Do you see anything risky in this configuration?

Would you recommend the LIO or STGT (with the rbd bs-type) target for ESXi clients?

Best regards,

Frederic.

--

Frédéric Nass

Sous-direction Infrastructures
Direction du Numérique
Université de Lorraine

Tel: +33 3 72 74 11 35



On 11/07/2016 17:45, Mike Christie wrote:
On 07/08/2016 02:22 PM, Oliver Dzombic wrote:
> Hi,
>
> does anyone have experience with a smart way to connect VMware with Ceph?
>
> iSCSI multipath did not really work well.

Are you trying to export rbd images from multiple iscsi targets at the same time, or just one target?

For the HA/multiple-target setup, I am working on this for Red Hat. We plan to release it in RHEL 7.3/RHCS 2.1. SUSE already ships something, as someone mentioned.

We just got a large chunk of code into the upstream kernel (it is in the block layer maintainer's tree for the next kernel), so it should be simple to add COMPARE_AND_WRITE support now. We should be posting krbd exclusive lock support in the next couple of weeks.

> NFS could work, but I think that's just too many layers in between to get
> usable performance.
>
> Systems like ScaleIO have developed a VMware addon to talk to it.
>
> Is there something similar out there for Ceph?
>
> What are you using?
>
> Thank you!

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




