Re: Ceph iSCSI Performance


 



To be honest, I don't really remember; those tests were from a while ago. :)  I'm guessing I was getting higher throughput with 32 than with 16 in some of the test cases, but didn't need to go up to 64 at the time.  This was all before the various BlueStore work we've done over the past year, which has improved performance quite a bit in our RBD tests.
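For reference, an iodepth sweep like the one Mark describes can be reproduced with a fio job file using the librbd engine (requires fio built with rbd support). This is only a sketch, not Mark's actual test config: the pool name, image name, and client name below are placeholders that need to be adapted to your cluster.

```ini
; Hypothetical fio job comparing queue depths against an RBD image.
; Pool "rbd", image "bench-img", and client "admin" are placeholders.
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=bench-img
direct=1
runtime=60
time_based=1
rw=randwrite
bs=4k

[qd16]
iodepth=16

[qd32]
stonewall
iodepth=32

[qd64]
stonewall
iodepth=64
```

`stonewall` forces each job to wait for the previous one, so all three queue depths can be compared from a single `fio` invocation.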


Mark


On 10/5/20 6:08 PM, Tecnología CHARNE.NET wrote:
Mark, why do you use iodepth=32 in your fio parameters?

Is there any reason not to choose 16 or 64?

Thanks in advance!


I don't have super recent results, but we do have some test data from last year looking at kernel rbd, rbd-nbd, rbd+tcmu, fuse, etc:


https://docs.google.com/spreadsheets/d/1oJZ036QDbJQgv2gXts1oKKhMOKXrOI2XLTkvlsl9bUs/edit?usp=sharing


Generally speaking, going through the tcmu layer was slower than kernel rbd or librbd directly (sometimes by quite a bit!).  There was also more client-side CPU usage per unit of performance (which makes sense, since additional work is being done).  You may be able to get some of that performance back with more clients, as I do remember there being some issues with iodepth and tcmu.  The only setup I remember being slower at the time, though, was rbd-fuse, which I don't think is really maintained anymore.
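One way to quantify the tcmu overhead on a given cluster is to run the same fio workload against the same image twice: once via kernel rbd and once via the iSCSI gateway. A rough sketch, assuming the image has already been exported through the gateway; the pool/image name, portal address, target IQN, and resulting `/dev/sdX` device below are all placeholders:

```
# Map the image directly via kernel rbd and benchmark it.
# "rbd/bench-img" is a placeholder pool/image.
DEV=$(rbd map rbd/bench-img)
fio --name=krbd --filename="$DEV" --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based
rbd unmap "$DEV"

# Log in to the iSCSI gateway (placeholder portal/IQN) and repeat
# the identical workload against the resulting /dev/sdX device.
iscsiadm -m discovery -t sendtargets -p 192.168.0.10
iscsiadm -m node -T iqn.2003-01.com.example:target -p 192.168.0.10 --login
fio --name=iscsi --filename=/dev/sdX --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based
```

Keeping the fio parameters identical across both runs isolates the gateway/tcmu path as the only variable.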


Mark


On 10/5/20 4:43 PM, DHilsbos@xxxxxxxxxxxxxx wrote:
All;

I've finally gotten around to setting up iSCSI gateways on my primary production cluster, and performance is terrible.

We're talking 1/4 to 1/3 of our current solution.

I see no evidence of network congestion on any involved network link.  I see no evidence of CPU or memory being a problem on any involved server (MON / OSD / gateway / client).

What can I look at to tune this, preferably on the iSCSI gateways?

Thank you,

Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International, Inc.
DHilsbos@xxxxxxxxxxxxxx
www.PerformAir.com

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




