Re: iSCSI over RBD is a good idea?

> We are using SCST over RBD and not seeing much of a degradation... Need to make sure you tune SCST properly and use multiple sessions.

Sure. My post was not intended to say that iSCSI over RBD is *slow*, just that it scales differently than native RBD client access.

If I have 10 OSD hosts with a 10G link each facing clients, provided the OSDs can saturate the 10G links, I have 100G of aggregate nominal throughput under ideal conditions. If I put an iSCSI target (or an active/passive pair of targets) in front of that to connect iSCSI initiators to RBD devices, my aggregate nominal throughput for iSCSI clients under ideal conditions is 10G.
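
To make the back-of-the-envelope arithmetic explicit, here's a minimal sketch. The host count, link speeds, and target count are just the hypothetical numbers from the paragraph above, not measurements from any real cluster:

# Nominal-throughput comparison for the hypothetical layout above.
osd_hosts = 10
link_gbps = 10  # client-facing link per host

# Native RBD clients talk to all OSDs directly, so the ceiling is the
# sum of the OSD hosts' client-facing links.
native_ceiling_gbps = osd_hosts * link_gbps        # 100 Gb/s

# iSCSI initiators talk only to the target, so the ceiling is the
# target's own link, regardless of how many OSDs sit behind it.
# (An active/passive pair still forwards through one active head.)
active_targets = 1
iscsi_ceiling_gbps = active_targets * link_gbps    # 10 Gb/s

print(f"native RBD ceiling:   {native_ceiling_gbps} Gb/s")
print(f"iSCSI gateway ceiling: {iscsi_ceiling_gbps} Gb/s")
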

If you don't flat-top that 10G link, it should perform just fine, and the only hit should be the slight (possibly insignificant, depending on hardware and layout) latency bump from the extra hop.

Don't get me wrong: I'm not trying to knock iSCSI over RBD at all. It's a perfectly legitimate and solid setup for connecting RBD-unaware clients to RBD storage. My intention was just to point out the difference in architecture, and that sizing the target hosts is a consideration that differs from a pure RBD environment.

Though I suppose that if network utilization at the targets becomes an issue at any point, you could scale out with additional targets and balance the iSCSI clients across them, as sketched below.
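
As a minimal sketch of one way that balancing could look, this hashes each initiator IQN to a target portal so every client consistently lands on one gateway. The portal addresses and IQNs are made up, and a static hash assignment is just one simple policy (round-robin DNS or multipath across gateways are others):

import hashlib

# Hypothetical gateway portals; in practice, your iSCSI target hosts.
target_portals = ["10.0.0.11:3260", "10.0.0.12:3260", "10.0.0.13:3260"]

def portal_for(initiator_iqn: str) -> str:
    """Deterministically spread initiators across targets by hashing the IQN."""
    digest = hashlib.sha1(initiator_iqn.encode()).digest()
    return target_portals[digest[0] % len(target_portals)]

# Example: each initiator consistently maps to one portal.
for iqn in ("iqn.2015-11.com.example:host-a",
            "iqn.2015-11.com.example:host-b",
            "iqn.2015-11.com.example:host-c"):
    print(iqn, "->", portal_for(iqn))

Running multipath to several targets on top of a spread like this would add failover to the load distribution, at the cost of more moving parts.
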
--
Hugo
hugo@xxxxxxxxxxx: email, xmpp/jabber
also on Signal

---- From: Somnath Roy <Somnath.Roy@xxxxxxxxxxx> -- Sent: 2015-11-04 - 13:48 ----

> We are using SCST over RBD and not seeing much of a degradation... Need to make sure you tune SCST properly and use multiple sessions.
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Hugo Slabbert
> Sent: Wednesday, November 04, 2015 1:44 PM
> To: Jason Dillaman; Gaetan SLONGO
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: iSCSI over RBD is a good idea?
>
>> The disadvantage of the iSCSI design is that it adds an extra hop between your VMs and the backing Ceph cluster.
>
> ...and introduces a bottleneck. iSCSI initiators are "dumb" in comparison to native ceph/rbd clients. Whereas native clients talk to all the relevant OSDs directly, iSCSI initiators talk only to the target (unless there is some awesome magic in the RBD/tgt integration that I'm unaware of). So the targets and their connectivity become the bottleneck.
>
> --
> Hugo
> hugo@xxxxxxxxxxx: email, xmpp/jabber
> also on Signal
>
>



