Re: Ceph + VMWare

Hi,

maybe a clean iSCSI implementation would in fact be better, because it
would be more usable in general.

That way the MS Hyper-V people could use it too.

----

For me, when it comes to iSCSI (so far we have tested the tgtd module), the
problem is mostly reliability: resilience whenever the Ceph cluster
changes from OK to any other state.

So the iSCSI implementation could use some work so that, even while PGs
are moving through the backfilling/degraded/... states, things just
continue to work. That's currently not the case.
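
For reference, the state transitions that trigger this can be watched
live with the stock Ceph CLI:

    # stream cluster events / PG state changes as they happen
    ceph -w
    # show which PGs are degraded/backfilling and why
    ceph health detail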

Even more evil: the tgtd module currently does not seem to support having
ONE iSCSI target mounted on MULTIPLE VMware ESXi nodes.

So in fact you can't use it as shared storage, because you very quickly
run into read locks which are never released, preventing other nodes from
using the same LUN.
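
For anyone who wants to reproduce this, a minimal sketch of the kind of
tgtd target we mean (the IQN, pool and image names are placeholders, and
it assumes tgt was built with the rbd backing store):

    # /etc/tgt/conf.d/rbd.conf -- example only, names are placeholders
    <target iqn.2016-10.example.com:rbd-lun0>
        driver iscsi
        bs-type rbd
        backing-store rbd/vmware-lun0    # <pool>/<image>
    </target>

One way to confirm the stuck locks, e.g. from a Linux test initiator
attached to the same LUN, is sg_persist --in --read-keys /dev/sdX
(from sg3_utils), which lists the registered reservation keys.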

-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 at the District Court (Amtsgericht) of Hanau
Managing Director: Oliver Dzombic

Tax No.: 35 236 3622 1
VAT ID: DE274086107


On 06.10.2016 at 08:13, Daniel Schwager wrote:
> Hi all,
> 
> we are using Ceph (Jewel 10.2.2, 10 GBit Ceph frontend/backend, 3 nodes, each with 8 OSDs and 2 journal SSDs)
> in our VMware environment, especially for test environments and templates - but currently
> not for productive machines (because of missing FC redundancy & performance).
> 
> On our Linux-based SCST 4 GBit Fibre Channel proxy, 16 ceph-rbd devices (non-caching, 10 TB in total)
> make up a striped LVM volume, which is published as an FC target to our VMware cluster.
> Looks fine, works stably. But currently the proxy is not redundant (only one head).
> Performance is OK (a), but not as good as our IBM Storwize 3700 SAN (16 HDDs).
> Especially for small IOs (4k), the IBM is twice as fast as Ceph.
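>
> (To make the striped-LVM part concrete, a minimal sketch; image, VG and
> LV names are examples, not our actual config:)
>
>     # map the 16 RBD images as block devices (repeat for lun0..lun15)
>     rbd map vmware/lun0
>     # build one volume group from all mapped devices
>     pvcreate /dev/rbd{0..15}
>     vgcreate vg_vmware /dev/rbd{0..15}
>     # stripe one logical volume across all 16 physical volumes
>     lvcreate -i 16 -I 4M -l 100%FREE -n lv_vmware vg_vmware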
> 
> Native Ceph integration with VMware would be great (-:
> 
> Best regards
> Daniel
> 
> (a) ATTO benchmark screenshots - IBM Storwize 3700 vs. Ceph
> https://dtnet.storage.dtnetcloud.com/d/684b330eea/
> 
> -------------------------------------------------------------------
> DT Netsolution GmbH   -   Taläckerstr. 30    -    D-70437 Stuttgart
> Managing Directors: Daniel Schwager, Stefan Hörz - HRB Stuttgart 19870
> Tel: +49-711-849910-32, Fax: -932 - Mailto:daniel.schwager@xxxxxxxx
> 
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Patrick McGarry
>> Sent: Wednesday, October 05, 2016 8:33 PM
>> To: Ceph-User; Ceph Devel
>> Subject:  Ceph + VMWare
>>
>> Hey guys,
>>
>> Starting to buckle down a bit in looking at how we can better set up
>> Ceph for VMWare integration, but I need a little info/help from you
>> folks.
>>
>> If you are currently using Ceph+VMWare, or are exploring the option,
>> I'd like some simple info from you:
>>
>> 1) Company
>> 2) Current deployment size
>> 3) Expected deployment growth
>> 4) Integration method (or desired method) ex: iscsi, native, etc
>>
>> Just casting the net so we know who is interested and might want to
>> help us shape and/or test things in the future if we can make it
>> better. Thanks.
>>
>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



