PCIe SSD OSD poor performance issue

Dear Loic:
    Sorry to bother you, but I have a question about Ceph.
    I am using PCIe SSDs as OSD disks, but the performance is very poor.
    I have two hosts, each with one PCIe SSD, so I created two OSDs, one per PCIe SSD.
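
The OSDs were created roughly like this (an illustrative sketch only; ceph-deploy and the /dev/nvme0n1 device path are placeholders for the actual tooling and device names):

    # From the admin node: one OSD per host, journal colocated on the PCIe SSD
    ceph-deploy osd create tds_node03:/dev/nvme0n1    # placeholder device path
    ceph-deploy osd create tds_node04:/dev/nvme0n1

The resulting ceph osd tree output: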

ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.35999 root default
-2 0.17999     host tds_node03
 0 0.17999         osd.0            up  1.00000          1.00000
-3 0.17999     host tds_node04
 1 0.17999         osd.1            up  1.00000          1.00000

I created a pool and an RBD device.
I ran an fio 8K randrw (70% read) test against the RBD device, and the result was only about 10,000 IOPS. I have tried many OSD thread parameters, but none had any effect.
When I ran the same 8K randrw (70% read) test directly against a single PCIe SSD, it reached about 100,000 IOPS.
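
For reference, the pool/image setup and the fio job were along these lines (an illustrative sketch; the pool name, image name and size, and the fio queue-depth settings are placeholders, not the exact commands I ran):

    # Pool and RBD image (names/sizes are placeholders)
    ceph osd pool create rbdbench 128 128
    rbd create rbdbench/fio-test --size 102400    # 100 GB image

    # 8K randrw, 70% read, via fio's librbd engine
    fio --name=rbd-test --ioengine=rbd --clientname=admin \
        --pool=rbdbench --rbdname=fio-test \
        --rw=randrw --rwmixread=70 --bs=8k \
        --iodepth=32 --time_based --runtime=60

The raw-device baseline was the same job run directly against the SSD's block device (e.g. --filename=/dev/nvme0n1 with --ioengine=libaio instead of the rbd engine).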

Is there any way to improve the PCIe SSD OSD performance?


scott_tang86@xxxxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
