Re: Ceph iSCSI Performance

Hi Dominic,


If you can't use kernel rbd, I think you'll probably have to live with the higher overhead and lower performance of the tcmu solution.  It's possible there are some things you can tweak at the tcmu layer that will improve things, but when I looked at it there simply seemed to be a lot of extra work being done to do the translation. YMMV.
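
For reference, the non-tcmu route is to map the image with the kernel
client yourself and export the resulting block device through LIO's
standard iblock backstore.  A rough, untested sketch (the image and
IQN names are placeholders, and note you lose the multi-gateway
failover management that ceph-iscsi/tcmu handles for you):

    # map the image with krbd; prints the device node, e.g. /dev/rbd0
    rbd map rbd/disk_1

    # export the mapped device via the standard LIO block backstore
    targetcli /backstores/block create name=disk_1 dev=/dev/rbd0
    targetcli /iscsi create iqn.2020-10.com.example:gw1
    targetcli /iscsi/iqn.2020-10.com.example:gw1/tpg1/luns \
        create /backstores/block/disk_1
    targetcli /iscsi/iqn.2020-10.com.example:gw1/tpg1/acls \
        create iqn.2020-10.com.example:client1

If you stay on tcmu, the first knobs I'd look at are the tcmu data
area size and the per-client command queue depth.  Something like the
following (syntax from memory, so double-check against gwcli's help
for your ceph-iscsi version):

    gwcli /disks reconfigure rbd/disk_1 max_data_area_mb 128
    gwcli /iscsi-targets/<target_iqn>/hosts/<client_iqn> \
        reconfigure cmdsn_depth 128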


Mark


On 10/6/20 12:49 PM, DHilsbos@xxxxxxxxxxxxxx wrote:
Mark;

Are you suggesting some other means to configure iSCSI targets with Ceph?

If so, how do I configure for non-tcmu?

The iSCSI clients are not RBD aware, and I can't really make them RBD aware.

Thank you,

Dominic L. Hilsbos, MBA
Director – Information Technology
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx
www.PerformAir.com



-----Original Message-----
From: Mark Nelson [mailto:mnelson@xxxxxxxxxx]
Sent: Monday, October 5, 2020 3:40 PM
To: ceph-users@xxxxxxx
Subject:  Re: Ceph iSCSI Performance

I don't have super recent results, but we do have some test data from
last year looking at kernel rbd, rbd-nbd, rbd+tcmu, fuse, etc:


https://docs.google.com/spreadsheets/d/1oJZ036QDbJQgv2gXts1oKKhMOKXrOI2XLTkvlsl9bUs/edit?usp=sharing


Generally speaking, going through the tcmu layer was slower than
kernel rbd or librbd directly (sometimes by quite a bit!).  There was
also more client-side CPU usage per unit of performance (which makes
sense, since there's additional work being done).  You may be able to
get some of that performance back with more clients, as I do remember
there being some issues with iodepth and tcmu.  The only setup I
remember being slower at the time, though, was rbd-fuse, which I
don't think is even really maintained.
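
If you want to see where the gap is on your own hardware, one quick
check is to run the same fio job against the iSCSI-attached device on
a client and against the image via librbd, sweeping iodepth.  A
sketch (pool, image, and device names are placeholders; the
raw-device run is destructive, and the librbd run needs fio built
with rbd support):

    # on the iSCSI client, against the attached LUN (destroys data!)
    fio --name=iscsi-test --ioengine=libaio --direct=1 \
        --filename=/dev/sdX --rw=randwrite --bs=4k \
        --iodepth=32 --runtime=60 --time_based

    # from a host with ceph access, same image via librbd
    fio --name=librbd-test --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=disk_1 --rw=randwrite --bs=4k \
        --iodepth=32 --runtime=60 --time_based

If the tcmu path is queue-depth limited, you may see the iSCSI
numbers flatten out at a lower iodepth than the librbd numbers do.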


Mark


On 10/5/20 4:43 PM, DHilsbos@xxxxxxxxxxxxxx wrote:
All;

I've finally gotten around to setting up iSCSI gateways on my primary production cluster, and performance is terrible.

We're talking 1/4 to 1/3 the performance of our current solution.

I see no evidence of network congestion on any involved network link.  I see no evidence of CPU or memory being a problem on any involved server (MON / OSD / gateway / client).

What can I look at to tune this, preferably on the iSCSI gateways?

Thank you,

Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International, Inc.
DHilsbos@xxxxxxxxxxxxxx
www.PerformAir.com

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
