On 2015-11-04T14:30:56, Hugo Slabbert <hugo@xxxxxxxxxxx> wrote:

> Sure. My post was not intended to say that iSCSI over RBD is *slow*,
> just that it scales differently than native RBD client access.
>
> If I have 10 OSD hosts with a 10G link each facing clients, provided
> the OSDs can saturate the 10G links, I have 100G of aggregate nominal
> throughput under ideal conditions. If I put an iSCSI target (or an
> active/passive pair of targets) in front of that to connect iSCSI
> initiators to RBD devices, my aggregate nominal throughput for iSCSI
> clients under ideal conditions is 10G.

It's worth noting that you can deploy multiple iSCSI target gateways and
use MPIO across them, which lets you scale performance and availability
horizontally. That doesn't remove the additional network/gateway hop, but
it does mean aggregate bandwidth is no longer capped by a single gateway.

And that works today.
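Roughly, the initiator side looks something like the sketch below. The
portal addresses and the IQN are placeholders, and the multipath settings
depend on your target stack; treat it as an illustration rather than a
tested recipe:

  # Log in to the same target through both gateway portals
  # (addresses and IQN below are examples)
  iscsiadm -m discovery -t sendtargets -p 192.168.0.11:3260
  iscsiadm -m discovery -t sendtargets -p 192.168.0.12:3260
  iscsiadm -m node -T iqn.2003-01.org.example:rbd-gw \
      -p 192.168.0.11:3260 --login
  iscsiadm -m node -T iqn.2003-01.org.example:rbd-gw \
      -p 192.168.0.12:3260 --login

  # /etc/multipath.conf -- one device, I/O spread across both paths.
  # LIO-based targets report vendor "LIO-ORG"; adjust to match yours.
  devices {
      device {
          vendor                "LIO-ORG"
          product               ".*"
          path_grouping_policy  "multibus"
          path_selector         "round-robin 0"
          failback              "immediate"
          no_path_retry         12
      }
  }

  # Both paths should show up as active under a single dm device
  multipath -ll

dm-multipath then presents one block device to the initiator, and
throughput is spread across the gateways. (If your target stack cannot
safely serve the same RBD image active/active, use a failover
path_grouping_policy instead of multibus.)

Regards,
    Lars

--
Architect Storage/HA
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com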