Re: Ceph iSCSI Performance

Hi Anthony,


Not my area of expertise I'm afraid.  I did most of this testing when I was adding the "client endpoints" support to CBT so we could use the same fio benchmark code across the whole range of Ceph block/fs clients.  One of the RBD guys might be able to answer your questions, though!  If it were me, I think I would stick with kernel rbd if possible.  It generally performs well with lower overhead (though in our lab we see a ~3 GB/s limit per kernel client that we haven't yet been able to explain; librbd was able to operate at 6-8 GB/s+).
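
If you want a very rough way to compare those two paths without pulling in CBT, below is a minimal Python sketch (not CBT's client-endpoint code) that times the same sequential-write workload through librbd (python3-rbd) and through an already-mapped kernel device.  The pool/image names and the /dev/rbd0 path are placeholders, the image is assumed to already exist and be at least 1 GiB, and fio against both targets is still the better tool for real numbers.

#!/usr/bin/env python3
# Toy comparison of librbd vs. kernel rbd for the same sequential-write
# workload.  Placeholders: pool "rbd", image "bench-img" (assumed to exist
# and be >= 1 GiB), and /dev/rbd0 (map the same image first with `rbd map`).
# Buffered writes + fsync, single threaded -- treat the numbers as rough.
import os
import time

import rados
import rbd

BLOCK = 4 * 1024 * 1024        # 4 MiB per write
TOTAL = 1024 * 1024 * 1024     # 1 GiB per run
DATA = os.urandom(BLOCK)

def bench_librbd(pool="rbd", image="bench-img"):
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)
        try:
            with rbd.Image(ioctx, image) as img:
                start = time.monotonic()
                for off in range(0, TOTAL, BLOCK):
                    img.write(DATA, off)
                img.flush()
                return TOTAL / (time.monotonic() - start)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

def bench_krbd(device="/dev/rbd0"):
    fd = os.open(device, os.O_WRONLY)
    try:
        start = time.monotonic()
        for off in range(0, TOTAL, BLOCK):
            os.pwrite(fd, DATA, off)
        os.fsync(fd)
        return TOTAL / (time.monotonic() - start)
    finally:
        os.close(fd)

if __name__ == "__main__":
    print("librbd:     %.0f MB/s" % (bench_librbd() / 1e6))
    print("kernel rbd: %.0f MB/s" % (bench_krbd() / 1e6))

A single synchronous writer understates both paths, of course; the point is only that the same workload can be pointed at either client.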

Mark


On 10/5/20 5:55 PM, Anthony D'Atri wrote:
Thanks, Mark.

I’m interested as well, wanting to provide block service to bare-metal hosts; iSCSI seems to be the classic way to do that.

I know there’s some work on MS Windows RBD code, but I’m uncertain whether it’s production-worthy, whether RBD namespaces suffice for tenant isolation, and whether the namespaces themselves are mature.
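
Mechanically, what I have in mind for namespaces is something like the rough sketch below (untested, assuming a Nautilus-or-later cluster with the python3-rbd bindings that expose the namespace_* calls; the pool, tenant, and image names are made up):

#!/usr/bin/env python3
# Rough sketch of per-tenant RBD namespaces (assumes Nautilus+ and the
# python3-rbd bindings that ship the namespace_* calls).  Pool, tenant,
# and image names are placeholders.
import rados
import rbd

POOL = "rbd"
TENANT = "tenant-a"

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    try:
        r = rbd.RBD()
        if not r.namespace_exists(ioctx, TENANT):
            r.namespace_create(ioctx, TENANT)

        # librbd scopes everything by the ioctx's namespace, so the image
        # created below lands in, and listing only sees, tenant-a's namespace.
        ioctx.set_namespace(TENANT)
        r.create(ioctx, "vol1", 10 * 1024**3)   # 10 GiB image
        print(r.list(ioctx))

        # The matching cephx cap would pin a client to that namespace,
        # something along the lines of (check the docs for exact syntax):
        #   ceph auth get-or-create client.tenant-a mon 'profile rbd' \
        #       osd 'profile rbd pool=rbd namespace=tenant-a'
    finally:
        ioctx.close()
finally:
    cluster.shutdown()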

Thoughts anyone?


I don't have super recent results, but we do have some test data from last year looking at kernel rbd, rbd-nbd, rbd+tcmu, fuse, etc:


https://docs.google.com/spreadsheets/d/1oJZ036QDbJQgv2gXts1oKKhMOKXrOI2XLTkvlsl9bUs/edit?usp=sharing


Generally speaking, going through the tcmu layer was slower than kernel rbd or librbd directly (sometimes by quite a bit!).  There was also more client-side CPU usage per unit of performance (which makes sense, since there's additional work being done).  You may be able to get some of that performance back with more clients, as I do remember there being some issues with iodepth and tcmu.  The only setup I remember being slower at the time, though, was rbd-fuse, which I don't think is even really maintained.
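
If you want to put a number on the client-side-CPU-per-unit-of-performance part yourself, one crude way is to sample /proc/stat around a run and divide busy CPU-seconds by bytes moved.  A minimal sketch (system-wide, so run it on an otherwise idle client; run_benchmark is just a placeholder for whatever workload you're timing):

#!/usr/bin/env python3
# Crude "busy CPU-seconds per GiB moved" measurement: sample system-wide
# CPU from /proc/stat before and after a run.  run_benchmark is a
# placeholder for whatever I/O workload you're timing; run it on an
# otherwise idle client so the numbers mean something.
import os

HZ = os.sysconf("SC_CLK_TCK")

def busy_jiffies():
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    # fields: user nice system idle iowait irq softirq steal ...
    return sum(fields) - fields[3] - fields[4]

def cpu_seconds_per_gib(run_benchmark, bytes_moved):
    before = busy_jiffies()
    run_benchmark()
    after = busy_jiffies()
    return ((after - before) / HZ) / (bytes_moved / 2**30)

Comparing that ratio across the librbd, krbd, and tcmu paths at similar throughput gives you the client-side overhead picture I'm describing.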


Mark


On 10/5/20 4:43 PM, DHilsbos@xxxxxxxxxxxxxx wrote:
All;

I've finally gotten around to setting up iSCSI gateways on my primary production cluster, and performance is terrible.

We're talking 1/4 to 1/3 of the performance of our current solution.

I see no evidence of network congestion on any involved network link.  I see no evidence of CPU or memory being a problem on any involved server (MON / OSD / gateway / client).

What can I look at to tune this, preferably on the iSCSI gateways?

Thank you,

Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International, Inc.
DHilsbos@xxxxxxxxxxxxxx
www.PerformAir.com

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx