Re: Expected performance with Ceph iSCSI gateway

Since the iSCSI protocol adds an extra network hop and another layer of
complexity, it should be expected to perform slightly worse than a
direct-path solution like krbd.  The RBD iSCSI interface is really a
workaround for environments that cannot access the Ceph cluster directly
via krbd / librbd.
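
If you want an apples-to-apples comparison, one approach is to run the
identical fio workload against both data paths. A rough sketch (the
pool/image name and the /dev/rbd0 / /dev/sdX device paths are
illustrative, not taken from your setup):

  # direct krbd path
  rbd map mypool/myimage
  fio --name=krbd --filename=/dev/rbd0 --rw=randwrite --bs=4k \
      --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based

  # iSCSI path, after logging in to the gateway with iscsiadm
  fio --name=iscsi --filename=/dev/sdX --rw=randwrite --bs=4k \
      --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based

Keeping the block size, queue depth, and I/O engine identical on both
sides leaves the protocol overhead as the main visible variable.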

On Mon, May 28, 2018 at 8:29 AM, Frank (lists) <lists@xxxxxxxxxxx> wrote:
> Hi,
>
> In a test cluster (3 nodes, 24 OSDs) I'm testing the Ceph iSCSI gateway
> (following http://docs.ceph.com/docs/master/rbd/iscsi-targets/). As a client I
> used a separate server; everything runs CentOS 7.5. The iSCSI gateways are
> located on 2 of the existing nodes in the cluster.
>
> How does iSCSI perform compared to krbd? I've already done some benchmarking,
> but it didn't perform anywhere near what krbd is doing. krbd easily saturates
> the public network, while iSCSI reaches about 75% of it. During a benchmark,
> tcmu-runner runs at a load of 50 to 75% on the (owning) target.
>
>
> Regards,
>
> Frank de Bot



-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


