Re: ceph/rados performance sync vs async

 Hi All,

Did more tests: just one client with big and small objects, then several clients with big and small objects - and it seems like I'm getting absolutely reasonable numbers. Big objects are saturating the network; small objects are saturating IOPS on the disks. Overall I have a better understanding and I'm happy with the results.

Thanks to everybody for the help.

On 18/07/2020 00:05, Daniel Mezentsev wrote:
Hi All,

 I started a small project related to metrics collection and processing, and Ceph was chosen as the storage backend. I decided to use rados directly, to avoid any additional layers. I have a very simple client - it works fine, but performance is very low: I can't get more than 30-35 MB/sec, while rados bench shows 200 MB/sec for the same test pool. One thing should be mentioned about the client - I'm using sbcl (yep, Lisp), so each call to the rados API is just a cffi call.
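 For reference, the synchronous path boils down to something like the following against the librados C API (the cffi calls wrap these same functions). This is only a minimal sketch - the "admin" user, the pool name "testpool", and the object ids are made up for illustration:

    #include <rados/librados.h>
    #include <stdio.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        const char buf[] = "metric sample";

        /* connect as client.admin using ceph.conf from the default locations */
        if (rados_create(&cluster, "admin") < 0) return 1;
        rados_conf_read_file(cluster, NULL);
        if (rados_connect(cluster) < 0) return 1;
        if (rados_ioctx_create(cluster, "testpool", &io) < 0) return 1;

        /* synchronous writes: each rados_write() blocks until the OSDs
           acknowledge it, so only one operation is ever in flight */
        for (int i = 0; i < 1000; i++) {
            char oid[32];
            snprintf(oid, sizeof(oid), "obj-%d", i);
            if (rados_write(io, oid, buf, sizeof(buf), 0) < 0) break;
        }

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }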

 I did try async mode. Wow! It saturated network bandwidth for large objects (4 MB and bigger); for small objects it saturated OSD IOPS - ~2.4 KIOPS across 8 SAS disks, so ~300 IOPS per disk - which sounds pretty reasonable. Bottom line - the issue is not with the Lisp client, I'm getting close to C performance; the difference is sync vs async IO.
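 The async path queues many writes before waiting on any of them, roughly like this - again just a sketch with the same made-up names, using librados aio completions:

    #include <rados/librados.h>
    #include <stdio.h>

    #define QDEPTH 16

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        rados_completion_t comps[QDEPTH];
        const char buf[] = "metric sample";

        if (rados_create(&cluster, "admin") < 0) return 1;
        rados_conf_read_file(cluster, NULL);
        if (rados_connect(cluster) < 0) return 1;
        if (rados_ioctx_create(cluster, "testpool", &io) < 0) return 1;

        /* queue QDEPTH writes without waiting; each gets its own completion */
        for (int i = 0; i < QDEPTH; i++) {
            char oid[32];
            snprintf(oid, sizeof(oid), "obj-%d", i);
            rados_aio_create_completion(NULL, NULL, NULL, &comps[i]);
            rados_aio_write(io, oid, comps[i], buf, sizeof(buf), 0);
        }

        /* only now block: all QDEPTH operations were in flight in parallel */
        for (int i = 0; i < QDEPTH; i++) {
            rados_aio_wait_for_complete(comps[i]);
            rados_aio_release(comps[i]);
        }

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }

 With a queue depth like this the per-operation round trips overlap instead of accumulating one after another, which is exactly the sync-vs-async gap described below.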

 Why is the difference so big? Sync operations are approx. 2-3 times slower than async: a synchronous client keeps only one operation in flight and pays the full round-trip latency for every object, so throughput is roughly object size divided by per-operation latency, while async overlaps those round trips.
 Daniel Mezentsev, founder
(+1) 604 313 8592.
Soleks Data Group.
Shaping the clouds.

What do you get in your rados bench result if you add -t 1 to indicate 1 thread / a queue depth of 1, to make it similar to your simple client? The default is 16. You should also either use a 4M block size in your client or adjust the rados bench block size with -b, for example:
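(a sketch - "testpool" is a placeholder pool name; 4194304 bytes = 4M, which is also the rados bench default):

    rados -p testpool bench 60 write -t 1 -b 4194304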

/Maged

 Daniel Mezentsev, founder
(+1) 604 313 8592.
Soleks Data Group.
Shaping the clouds.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
