Re: ceph + vmware

Hi Mike,

I was trying the approach from:

https://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/

exporting ONE target, from the different OSD servers directly, to multiple
VMware ESXi servers.

The config looked like this:

# cat iqn.ceph-cluster_netzlaboranten-storage.conf

<target iqn.ceph-cluster:vmware-storage>
driver iscsi
bs-type rbd
backing-store rbd/vmware-storage
initiator-address 10.0.0.9
initiator-address 10.0.0.10
incominguser vmwaren-storage RPb18P0xAqkAw4M1
</target>
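As a side note, a minimal sketch of how such a stanza can be sanity-checked before reloading tgtd on each OSD node; the temporary filename and the CHAP secret below are placeholders, not the real values from our setup:

```shell
# Recreate the stanza with a placeholder secret and check it before
# reloading tgtd (e.g. with tgt-admin --update ALL) on each OSD node.
cat > /tmp/vmware-storage.conf <<'EOF'
<target iqn.ceph-cluster:vmware-storage>
driver iscsi
bs-type rbd
backing-store rbd/vmware-storage
initiator-address 10.0.0.9
initiator-address 10.0.0.10
incominguser vmware-storage CHAP_SECRET_PLACEHOLDER
</target>
EOF

# Expect exactly two initiator ACL lines (one per ESXi host).
grep -c '^initiator-address' /tmp/vmware-storage.conf   # prints 2
```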


We had 4 OSD servers, each running this config, and 2 VMware (ESXi) servers.

So we had 4 paths to this vmware-storage RBD image.

In the end, each ESXi host saw 8 paths: the 4 paths connected directly to
that host, plus the 4 paths that host saw via the other ESXi server.
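For clarity, the path arithmetic can be sketched like this (illustration only; the numbers are from our setup, not from any VMware API):

```shell
# 4 OSD-side iSCSI targets and 2 ESXi hosts, as in our setup.
osd_targets=4
esxi_hosts=2

# Paths seen by one ESXi host: its own direct paths to each target,
# plus the same targets seen again via each other ESXi host.
paths_per_host=$(( osd_targets + osd_targets * (esxi_hosts - 1) ))
echo "$paths_per_host"   # prints 8
```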

Performance was very poor, we are talking about < 10 MB/s, so the customer
was not able to use it; good old NFS is serving instead.

At that time we used Ceph Hammer, and I think the customer was running
ESXi 5.5, or maybe ESXi 6; the testing was sometime last year.

--------------------

We will now make a new attempt with Ceph Jewel and ESXi 6, and this time
we will manage the VMware servers ourselves.

As soon as the issue

"ceph mon Segmentation fault after set crush_ruleset ceph 10.2.2"

which I already mailed to this list, is fixed, we can start the testing.


-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Anschrift:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 beim Amtsgericht Hanau
Geschäftsführung: Oliver Dzombic

Steuer Nr.: 35 236 3622 1
UST ID: DE274086107


Am 11.07.2016 um 17:45 schrieb Mike Christie:
> On 07/08/2016 02:22 PM, Oliver Dzombic wrote:
>> Hi,
>>
>> does anyone have experience how to connect vmware with ceph smart ?
>>
>> iSCSI multipath did not really work well.
> 
> Are you trying to export rbd images from multiple iscsi targets at the
> same time or just one target?
> 
> For the HA/multiple target setup, I am working on this for Red Hat. We
> plan to release it in RHEL 7.3/RHCS 2.1. SUSE ships something already as
> someone mentioned.
> 
> We just got a large chunk of code in the upstream kernel (it is in the
> block layer maintainer's tree for the next kernel) so it should be
> simple to add COMPARE_AND_WRITE support now. We should be posting krbd
> exclusive lock support in the next couple weeks.
> 
> 
>> NFS could be an option, but I think that's too many layers in between to
>> have some usable performance.
>>
>> Systems like ScaleIO have developed a VMware addon to talk to them.
>>
>> Is there something similar out there for ceph ?
>>
>> What are you using ?
>>
>> Thank you !
>>
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



