rbd-fuse performance

Hi,

As mentioned in my previous emails, I'm extremely new to Ceph, so please forgive my lack of knowledge.

I'm trying to find a good way to mount Ceph RBD images for export by LIO/targetcli.

rbd-nbd isn't a good fit, since it stops at 16 block devices (/dev/nbd0-15).
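
From what I can tell, that 16-device ceiling comes from the nbd kernel module's default rather than from rbd-nbd itself, though I'd prefer not to depend on module parameters. A minimal sketch of the workaround, assuming the module's nbds_max parameter and that no /dev/nbd* device is currently in use:

# Sketch only: reload the nbd module with more devices before mapping.
modprobe -r nbd                    # fails if any nbd device is still mapped
modprobe nbd nbds_max=64           # default is 16, i.e. /dev/nbd0-15
rbd-nbd map rbd_storage/rbd_25g1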

Kernel rbd mapping doesn't support the newer image features.
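
The usual workaround there, as I understand it, is to strip the newer features so the kernel client can map the image, which rather defeats the point of having them. A rough sketch anyway, against the image created further down (feature names assumed from the post-Jewel defaults, so check "rbd info" for what the image actually has):

# Sketch only: drop the features krbd can't handle, then map with the kernel client.
rbd feature disable rbd_storage/rbd_25g1 object-map fast-diff deep-flatten exclusive-lock
rbd map rbd_storage/rbd_25g1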

I thought rbd-fuse looked like a good option, except that its write performance is abysmal.

rados bench gives me ~250 MB/s of write throughput, while an image mounted with rbd-fuse gives me ~2 MB/s. CephFS write speeds are good as well.

Is something wrong with my testing method or configuration?


root@stor-vm1:/# ceph osd pool create rbd_storage 128 128
root@stor-vm1:/# rbd create --pool=rbd_storage --size=25G rbd_25g1
root@stor-vm1:/# mkdir /mnt/rbd
root@stor-vm1:/# cd /mnt
root@stor-vm1:/mnt# rbd-fuse rbd -p rbd_storage
root@stor-vm1:/mnt# cd rbd
root@stor-vm1:/mnt/rbd# dd if=/dev/zero of=rbd_25g1 bs=4M count=2 status=progress
8388608 bytes (8.4 MB, 8.0 MiB) copied, 4.37754 s, 1.9 MB/s
2+0 records in
2+0 records out
8388608 bytes (8.4 MB, 8.0 MiB) copied, 4.3776 s, 1.9 MB/s
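
For what it's worth, rados bench below keeps 16 concurrent 4 MB writes in flight, while this dd is a single short buffered stream pushed through the FUSE layer, so the two numbers aren't directly comparable. A sketch of what I understand would be a closer apples-to-apples test, driving librbd directly (no FUSE) with fio's rbd engine against the same image, assuming a fio build with rbd support and the client.admin keyring in place:

# Sketch only: same 4M writes and queue depth 16 as rados bench, but via librbd.
fio --name=librbd-write --ioengine=rbd --clientname=admin \
    --pool=rbd_storage --rbdname=rbd_25g1 \
    --rw=write --bs=4M --iodepth=16 --direct=1 --size=8G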


rados bench:

root@stor-vm1:/mnt/rbd# rados bench -p rbd_storage 10 write
2017-06-27 18:56:59.505647 7fb9c24a7e00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
2017-06-27 18:56:59.505768 7fb9c24a7e00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
2017-06-27 18:56:59.507385 7fb9c24a7e00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_stor-vm1_8786
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        63        47   187.989       188    0.620617    0.285428
    2      16       134       118   235.976       284    0.195319    0.250789
    3      16       209       193   257.306       300    0.198448    0.239798
    4      16       282       266   265.972       292    0.232927    0.233386
    5      16       362       346   276.771       320    0.222398    0.226373
    6      16       429       413   275.303       268    0.193111    0.226703
    7      16       490       474   270.828       244   0.0879974    0.228776
    8      16       562       546    272.97       288    0.125843    0.230455
    9      16       625       609   270.637       252    0.145847    0.232388
   10      16       701       685    273.97       304    0.411055    0.230831
Total time run:         10.161789
Total writes made:      702
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     276.329
Stddev Bandwidth:       38.1925
Max bandwidth (MB/sec): 320
Min bandwidth (MB/sec): 188
Average IOPS:           69
Stddev IOPS:            9
Max IOPS:               80
Min IOPS:               47
Average Latency(s):     0.231391
Stddev Latency(s):      0.107305
Max latency(s):         0.774406
Min latency(s):         0.0828756
Cleaning up (deleting benchmark objects)
Removed 702 objects
Clean up completed and total clean up time :1.190687




Thanks,

Dan


