Re: ceph + vmware

On 07/20/2016 03:50 AM, Frédéric Nass wrote:
> 
> Hi Mike,
> 
> Thanks for the update on the RHCS iSCSI target.
> 
> Will the RHCS 2.1 iSCSI target be compatible with the VMware ESXi
> client? (Or is it too early to say / announce?)

There will be no HA support, for sure. We are looking into non-HA support, though.

> 
> Knowing that HA iSCSI target was on the roadmap, we chose iSCSI over NFS
> so we'll just have to remap RBDs to RHCS targets when it's available.
> 
> So we're currently running :
> 
> - 2 LIO iSCSI targets exporting the same RBD images. Each iSCSI target
> has all VAAI primitives enabled and runs the same configuration.
> - RBD images are mapped on each target using the kernel client (so no
> RBD cache).
> - 6 ESXi hosts. Each ESXi host can access the same LUNs through both
> targets, but in a failover manner, so that each ESXi host always
> accesses a given LUN through one target at a time.
> - LUNs are VMFS datastores and VAAI primitives are enabled client side
> (except UNMAP, as per the default).
> 
> Do you see anything risky regarding this configuration?

If you use an application that uses SCSI persistent reservations, you
could run into trouble, because some apps expect the reservation info
to be on the failover nodes as well as the active ones.

Depending on how you do failover and the issue that caused the
failover, IO could be stuck on the old active node and cause data
corruption. If the initial active node loses its network connectivity
and you fail over, you have to make sure that the initial active node is
fenced off and that IO stuck on that node will never be executed. So do
something like add it to the Ceph monitor blacklist, and make sure IO on
that node is flushed and failed before unblacklisting it.
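To make the fencing step above concrete, here is a rough sketch using the
`ceph osd blacklist` command (the node IP is a made-up example, and the
exact failover orchestration around these commands depends on your setup):

```shell
# Sketch of fencing a failed iSCSI target node before failing over.
# 192.168.1.10 is a hypothetical address of the old active node.

FAILED_NODE_IP=192.168.1.10

# Blacklist the failed node's client address so the Ceph cluster
# rejects any in-flight IO that node might still try to execute:
ceph osd blacklist add ${FAILED_NODE_IP}:0/0

# ... switch the ESXi initiators over to the standby target here ...

# Only after the old node is back up and its outstanding IO has been
# flushed and failed should it be removed from the blacklist:
ceph osd blacklist rm ${FAILED_NODE_IP}:0/0
```

The key point is ordering: the blacklist entry must be in place before the
standby target starts serving the LUNs, otherwise stale writes from the old
node could still land and corrupt data.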


> 
> Would you recommend the LIO or STGT (with the rbd bs-type) target for
> ESXi clients?

I can't say, because I have not used stgt with rbd bs-type support enough.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



