Re: PCIE-SSD OSD poor performance issue

I am running fio against a raw device.


scott_tang86@xxxxxxxxx
 
From: Wang, Warren
Date: 2015-08-23 12:27
To: scott_tang86@xxxxxxxxx; ceph-users
CC: liuxy666
Subject: RE: PCIE-SSD OSD poor performance issue

Are you running fio against a sparse file, prepopulated file, or a raw device?
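
The distinction matters because reads from unallocated space (a sparse file, or a thin-provisioned RBD image that has never been written) can be answered with zeroes without ever touching the flash, which inflates the numbers. Roughly, the three cases look like this (paths, sizes, and job options are illustrative, not the actual commands used):

    # 1) Raw device (destructive to its contents)
    fio --name=raw --filename=/dev/nvme0n1 --direct=1 --rw=randrw \
        --rwmixread=70 --bs=8k --iodepth=32 --runtime=60 --time_based

    # 2) Sparse file: fio creates it, but blocks stay unallocated until written
    fio --name=sparse --filename=/mnt/test/sparse.bin --size=10G --direct=1 \
        --rw=randrw --rwmixread=70 --bs=8k --iodepth=32 --runtime=60 --time_based

    # 3) Prepopulated file: fully write it out first, then run the same job
    dd if=/dev/urandom of=/mnt/test/full.bin bs=1M count=10240 oflag=direct
    fio --name=prepop --filename=/mnt/test/full.bin --size=10G --direct=1 \
        --rw=randrw --rwmixread=70 --bs=8k --iodepth=32 --runtime=60 --time_based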

 

Warren

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of scott_tang86@xxxxxxxxx
Sent: Thursday, August 20, 2015 3:48 AM
To: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Cc: liuxy666 <liuxy666@xxxxxxxxx>
Subject: PCIE-SSD OSD poor performance issue

 

Dear all:

    I used a PCIe SSD as the OSD disk, but its performance is very poor.

    I have two hosts, each with one PCIe SSD, so I created two OSDs, one on each PCIe SSD:

 

ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.35999 root default
-2 0.17999     host tds_node03
 0 0.17999         osd.0              up  1.00000          1.00000
-3 0.17999     host tds_node04
 1 0.17999         osd.1              up  1.00000          1.00000
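
For reference, with the tooling of that era each OSD would typically be prepared along these lines (the NVMe device name is an assumption; the actual commands are not shown in the thread):

    # On each host: prepare the PCIe SSD as a FileStore OSD with a
    # co-located journal, then activate the resulting data partition
    ceph-disk prepare /dev/nvme0n1
    ceph-disk activate /dev/nvme0n1p1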

 

I created a pool and an RBD device.
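
The setup would have been something along these lines (pool name, PG count, and image size are assumptions, not taken from the thread):

    ceph osd pool create testpool 128 128
    rbd create testpool/testimg --size 102400    # 100 GB image, size in MB
    rbd map testpool/testimg                     # exposes the image as /dev/rbd0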

I ran fio with an 8K randrw (70% read) workload against the RBD device, and the result is only about 10,000 IOPS (1W). I have tried many OSD thread parameters, but none had any effect.
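
The thread-related tuning mentioned above would be of this general shape in a Hammer-era ceph.conf (the option names are real FileStore/journal knobs of that era, but the values are illustrative, not the ones actually tried):

    [osd]
    osd op threads = 8
    filestore op threads = 8
    filestore max sync interval = 10
    journal max write entries = 1000
    journal max write bytes = 1073741824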

But when I tested the same 8K randrw (70% read) workload directly against a single PCIe SSD, it reached about 100,000 IOPS (10W).
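
For reference, the two runs would look roughly like this (a sketch; device paths, queue depth, job count, and runtime are assumptions):

    # Against the mapped RBD device; prefill the image first, since reads
    # of never-written extents return zeroes without touching the OSDs
    fio --name=rbdtest --filename=/dev/rbd0 --direct=1 --rw=randrw \
        --rwmixread=70 --bs=8k --iodepth=32 --numjobs=4 \
        --runtime=300 --time_based --group_reporting

    # Against the raw PCIe SSD on one host (destructive to its contents)
    fio --name=ssdtest --filename=/dev/nvme0n1 --direct=1 --rw=randrw \
        --rwmixread=70 --bs=8k --iodepth=32 --numjobs=4 \
        --runtime=300 --time_based --group_reporting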

 

Is there any way to improve the PCIE-SSD OSD performance?

 

 

 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
