poor rbd performance

I'm seeing some disappointing performance numbers using the bs_rbd backend
in stgt with a Ceph RBD pool over a 10Gb Ethernet link.

Read operations appear to max out at about 100 MB/s, regardless of block
size or the amount of data being read, and write operations fare much
worse, topping out somewhere in the 40 MB/s range, well below what a 10Gb
link can sustain.  Any ideas why this would be so limited?

I've tested using 'fio' as well as some other performance testing
utilities.  On the same link, talking to the same Ceph pool/image, using
librados directly (through either the C or Python bindings), read
performance is 5-8x faster and write performance is 2-3x faster.
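
For reference, the fio runs against the iSCSI-attached device were along
these lines (device path, sizes, and queue depth here are illustrative
placeholders, not the exact job files I used):

    # sequential read against the block device the initiator mapped
    fio --name=seqread --filename=/dev/sdX --direct=1 \
        --ioengine=libaio --rw=read --bs=4M --size=4G --iodepth=16

    # sequential write, same device
    fio --name=seqwrite --filename=/dev/sdX --direct=1 \
        --ioengine=libaio --rw=write --bs=4M --size=4G --iodepth=16

And the direct librados/librbd comparison was roughly this Python sketch
(pool and image names are placeholders):

    import rados, rbd

    # connect to the cluster and open the same pool/image that the
    # iSCSI target exports
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    image = rbd.Image(ioctx, 'testimage')

    # read the image sequentially in 4MB chunks and time the loop
    offset = 0
    while offset < image.size():
        image.read(offset, 4 * 1024 * 1024)
        offset += 4 * 1024 * 1024

    image.close()
    ioctx.close()
    cluster.shutdown()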

Any suggestions on how to tune the iSCSI target or the bs_rbd backend to
perform better?
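
In case it helps, the target definition is essentially the stock bs_rbd
setup; something like the following, where the IQN and image name are
placeholders (and I may be misremembering the exact directive names):

    <target iqn.2008-09.com.example:rbd-test>
        driver iscsi
        bs-type rbd
        backing-store rbd/testimage
    </target>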

thanks,
  Wyllys Ingersoll
--



