CephFS and single threaded RBD read performance


 



Hi,

I see CephFS read performance somewhat lower than single-threaded RBD
sequential read performance.
Is this expected behaviour?
Is file access in CephFS single-threaded by design?

fio shows 70 MB/s sequential read with 4M blocks, libaio, 1 thread, direct I/O.
fio sequential write: 200 MB/s.
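For reference, an fio invocation matching the parameters above might look like
this (the test file path and size are placeholders, not from the original run):

```shell
# Single-threaded 4M sequential read with libaio and direct I/O.
# --filename and --size are placeholders; adjust for your CephFS mount.
fio --name=seqread --rw=read --bs=4M --ioengine=libaio --iodepth=1 \
    --numjobs=1 --direct=1 --size=4G --filename=/mnt/cephfs/testfile
```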

# rados bench -t 1 -p test 60 write --no-cleanup
...
Bandwidth (MB/sec):     164.247
Stddev Bandwidth:       28.9474
Average Latency:        0.0243512
Stddev Latency:         0.0144412

# <drop_caches on OSD nodes>
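(The cache drop above is typically the standard Linux page-cache flush, run as
root on each OSD node:)

```shell
# Flush dirty pages first, then drop page cache, dentries and inodes
# so the following read benchmark hits the disks rather than RAM.
sync; echo 3 > /proc/sys/vm/drop_caches
```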

# rados bench -t 1 -p test 60 seq
...
Bandwidth (MB/sec):    88.174
Average Latency:       0.0453621

On the other hand, 'rados bench -t 128 60 seq' on the same pool shows about 1700 MB/s.

Is there anything that can be tuned?

Hardware: 3 nodes x 24 HDDs each, with journals on SSDs; 2x10GbE networking;
triple replication.
CephFS clients: kernel client and ceph-fuse on Fedora 23.
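For single-threaded sequential reads, client readahead is the usual knob worth
checking; a sketch of the options, where the monitor address, credentials and
the 64 MiB value are placeholders, not tested settings:

```shell
# Kernel client: raise the readahead window with the rasize mount option (bytes).
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,rasize=67108864

# ceph-fuse: raise the readahead cap in the [client] section of ceph.conf:
#   client_readahead_max_bytes = 67108864
```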




