Re: iSCSI over RBD is a good idea?

I am not sure of its status -- it looks like it was part of oVirt 3.6 planning but was recently moved to 4.0 on the wiki.  There is a video walkthrough of the running integration from this past August [1].  You would only need to deploy Cinder and Keystone -- none of the other OpenStack components are required.  oVirt also appears to have some work underway to containerize a small Cinder/Glance OpenStack setup [2].

[1] https://www.youtube.com/watch?v=elEkGfjLITs
[2] http://www.ovirt.org/CinderGlance_Docker_Integration
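
In case it is useful: once a standalone Cinder + Keystone pair is up, nothing else from OpenStack is needed to talk to it. Below is a rough sketch using the python-cinderclient bindings; the credentials, auth URL, and volume name are placeholders, not from a real deployment.

    # Hypothetical example: create an RBD-backed volume against a standalone
    # Cinder + Keystone deployment (no other OpenStack services involved).
    from cinderclient import client as cinder_client

    cinder = cinder_client.Client(
        '2',                                       # volume API v2
        'admin',                                   # placeholder Keystone user
        'secret',                                  # placeholder password
        'admin',                                   # placeholder tenant/project
        'http://keystone.example.com:5000/v2.0')   # placeholder auth URL

    vol = cinder.volumes.create(size=10, name='ovirt-test-volume')
    print(vol.id, vol.status)   # lands in whichever Ceph pool Cinder's RBD backend points at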

-- 

Jason Dillaman 
Red Hat Ceph Storage Engineering 
dillaman@xxxxxxxxxx 
http://www.redhat.com 


----- Original Message ----- 

> From: "Gaetan SLONGO" <gslongo@xxxxxxxxxxxxx>
> To: "Hugo Slabbert" <hugo@xxxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx, "Somnath Roy" <Somnath.Roy@xxxxxxxxxxx>,
> "Jason Dillaman" <dillaman@xxxxxxxxxx>
> Sent: Thursday, November 5, 2015 2:37:16 AM
> Subject: Re:  iSCSI over RBD is a good idea?

> Thank you everybody for your interesting answers.

> I saw the Cinder integration in oVirt. Has anyone already done that? I
> don't know OpenStack (yet). Is it possible to deploy only the Cinder
> component, without the complete OpenStack setup?

> Thanks !

> ----- Original Message -----

> De: "Hugo Slabbert" <hugo@xxxxxxxxxxx>
> À: "Somnath Roy" <Somnath.Roy@xxxxxxxxxxx>, "Jason Dillaman"
> <dillaman@xxxxxxxxxx>, "Gaetan SLONGO" <gslongo@xxxxxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Sent: Wednesday, November 4, 2015 11:30:56 PM
> Subject: Re: RE:  iSCSI over RBD is a good idea?

> > We are using SCST over RBD and are not seeing much of a degradation... You
> > need to make sure you tune SCST properly and use multiple sessions.

> Sure. My post was not intended to say that iSCSI over RBD is *slow*, just
> that it scales differently than native RBD client access.

> If I have 10 OSD hosts with a 10G link each facing clients, provided the OSDs
> can saturate the 10G links, I have 100G of aggregate nominal throughput
> under ideal conditions. If I put an iSCSI target (or an active/passive pair
> of targets) in front of that to connect iSCSI initiators to RBD devices, my
> aggregate nominal throughput for iSCSI clients under ideal conditions is
> 10G.
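
To put rough numbers on that, here is a back-of-the-envelope sketch in Python using the hypothetical figures from Hugo's example above -- nominal best-case link math, not a benchmark:

    # Nominal, ideal-case figures from the example above.
    osd_hosts = 10
    client_facing_gbps_per_host = 10

    native_rbd_aggregate = osd_hosts * client_facing_gbps_per_host    # 100 Gbit/s
    iscsi_gateways = 1    # a single target (or an active/passive pair)
    iscsi_aggregate = iscsi_gateways * client_facing_gbps_per_host    # 10 Gbit/s

    print("native RBD clients, aggregate nominal: %d Gbit/s" % native_rbd_aggregate)
    print("iSCSI clients via gateway(s), nominal: %d Gbit/s" % iscsi_aggregate)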

> If you don't saturate that link, then it should perform just fine and the only
> hit should be the slight (possibly insignificant, depending on hardware and
> layout) latency bump from the extra hop.

> Don't get me wrong: I'm not trying to knock iSCSI over RBD at all. It's a
> perfectly legitimate and solid setup for connecting RBD-unaware clients into
> RBD storage. My intention was just to point out the difference in
> architecture and that sizing of the target hosts is a consideration that's
> different from a pure RBD environment.

> Though, I suppose if network utilization at the targets becomes an issue at
> any point, you could scale out with additional targets and balance the iSCSI
> clients across them.
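
Agreed -- and spreading initiators across gateways can start out as a simple static assignment. A toy Python illustration (the target IQNs and client names below are made up):

    # Toy illustration only: statically spread iSCSI initiators across several
    # gateway targets so no single gateway link carries all the traffic.
    targets = [
        "iqn.2015-11.com.example:gw1",
        "iqn.2015-11.com.example:gw2",
        "iqn.2015-11.com.example:gw3",
    ]
    initiators = ["client%02d" % i for i in range(12)]

    assignment = {c: targets[i % len(targets)] for i, c in enumerate(initiators)}
    for client, target in sorted(assignment.items()):
        print("%s -> %s" % (client, target))
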
> --
> Hugo
> hugo@xxxxxxxxxxx: email, xmpp/jabber
> also on Signal

> ---- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx> -- Sent: 2015-11-04 13:48 ----

> > We are using SCST over RBD and are not seeing much of a degradation... You
> > need to make sure you tune SCST properly and use multiple sessions.
> >
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> > Hugo Slabbert
> > Sent: Wednesday, November 04, 2015 1:44 PM
> > To: Jason Dillaman; Gaetan SLONGO
> > Cc: ceph-users@xxxxxxxxxxxxxx
> > Subject: Re:  iSCSI over RBD is a good idea?
> >
> >> The disadvantage of the iSCSI design is that it adds an extra hop between
> >> your VMs and the backing Ceph cluster.
> >
> > ...and introduces a bottleneck. iSCSI initiators are "dumb" in comparison
> > to native ceph/rbd clients. Whereas native clients will talk to all the
> > relevant OSDs directly, iSCSI initiators will just talk to the target
> > (unless there is some awesome magic in the RBD/tgt integration that I'm
> > unaware of). So the targets and their connectivity are a bottleneck.
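
For comparison, this is roughly what the native path looks like from a client: a minimal python-rbd sketch (it assumes the default 'rbd' pool and an existing image named 'test-image'). librados pulls the cluster maps and then reads and writes go straight to the relevant OSDs, with no gateway in the data path:

    import rados
    import rbd

    # Minimal native-client sketch: librados fetches the cluster/CRUSH maps and
    # then talks to the relevant OSDs directly -- no gateway in the data path.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')           # assumes the default 'rbd' pool
    try:
        image = rbd.Image(ioctx, 'test-image')  # assumes this image already exists
        try:
            data = image.read(0, 4096)          # read goes to the primary OSD for that object
        finally:
            image.close()
    finally:
        ioctx.close()
        cluster.shutdown()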
> >
> > --
> > Hugo
> > hugo@xxxxxxxxxxx: email, xmpp/jabber
> > also on Signal
> >
> >

> --

> www.it-optics.com
> 
> Gaëtan SLONGO | IT & Project Manager
> Boulevard Initialis, 28 - 7000 Mons, BELGIUM
> Company : 	+32 (0)65 84 23 85
> Direct : 	+32 (0)65 32 85 88
> Fax : 	+32 (0)65 84 66 76
> GPG Key : 	gslongo-gpg_key.asc
> 

> - Please consider your environmental responsibility before printing this
> e-mail -
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



