Dear Loic,
I'm sorry to bother you, but I have a question about Ceph.
I am using PCIe SSDs as OSD disks, but the performance is very poor.
I have two hosts, each with one PCIe SSD, so I created two OSDs, one on each PCIe SSD:
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.35999 root default
-2 0.17999     host tds_node03
 0 0.17999         osd.0        up      1.00000  1.00000
-3 0.17999     host tds_node04
 1 0.17999         osd.1        up      1.00000  1.00000
I created a pool and an RBD device.
I ran an fio 8K random read/write (70% read) test against the RBD device, and the result was only about 10,000 IOPS. I have tried tuning many OSD thread parameters, but it had no effect.
However, the same 8K randrw (70% read) test directly on a single PCIe SSD gives about 100,000 IOPS.
Is there any way to improve the performance of OSDs on PCIe SSDs?
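For reference, the fio workload I ran is roughly the following job file (the device path and queue-depth settings here are illustrative, not my exact values):

```ini
; 8K random read/write, 70% reads, against the mapped RBD device
; /dev/rbd0 is an example path; adjust to your own mapping
[global]
ioengine=libaio
direct=1
rw=randrw
rwmixread=70
bs=8k
iodepth=32
numjobs=4
runtime=60
group_reporting

[rbd-test]
filename=/dev/rbd0
```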
scott_tang86@xxxxxxxxx
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com