Hello,

On Thu, 20 Aug 2015 15:47:46 +0800 scott_tang86@xxxxxxxxx wrote:

The reason you're not getting any replies is that we're not
psychic/telepathic/clairvoyant.
Meaning that you're not giving us nearly enough information.

> Dear all:
>      I used a PCIe SSD as the OSD disk, but I found its performance to
> be very poor. I have two hosts, each with one PCIe SSD, so I created
> two OSDs, one per PCIe SSD.
>
What PCIe SSD exactly? What hosts (HW, OS), what network?
What Ceph version, which config changes?

> ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 0.35999 root default
> -2 0.17999     host tds_node03
>  0 0.17999         osd.0             up  1.00000          1.00000
> -3 0.17999     host tds_node04
>  1 0.17999         osd.1             up  1.00000          1.00000
>
> I created a pool and an RBD device.
>
What kind of pool, any non-default options?
Where did you mount/access that RBD device from, userspace or kernel?
What file system, if any?

> I used fio to test 8K randrw (70% read) on the RBD device; the result
> is only 1W IOPS.
>
Exact fio invocation parameters and output, please; see the end of this
mail for an example of what I mean.
Is "1W IOPS" supposed to mean 1 write IOPS?
Also, for comparison purposes the "standard" is to test random access
with 4KB blocks.

> I have tried many OSD thread parameters, but to no effect.
>
Unless your HW or SSDs have issues, the defaults should give a lot
better results.

> But I tested 8K randrw (70% read) on a single PCIe SSD and it got
> 10W IOPS.
>
10 write IOPS would still be abysmally slow.
Does "single" mean running fio against the SSD directly?
How does this compare to using the exact same setup but with HDDs or
normal SSDs?

Christian
-- 
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
http://www.gol.com/
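
P.S.: A sketch of the kind of fio invocation I'm asking about, assuming
the RBD image is kernel-mapped at /dev/rbd0 -- the device name, iodepth
and runtime are only placeholders, adjust them to your setup, and note
that a randrw run will destroy any data on the target:

  fio --name=rbd-4k-randrw --filename=/dev/rbd0 --direct=1 \
      --ioengine=libaio --rw=randrw --rwmixread=70 --bs=4k \
      --iodepth=32 --numjobs=1 --runtime=60 --time_based \
      --group_reporting

Running the exact same command against the raw PCIe SSD device (for
example /dev/nvme0n1, or whatever it is called on your system) gives a
directly comparable baseline for your "single SSD" number, and the
output of "ceph -v" and "rbd info <pool>/<image>" would answer most of
the version and image questions above.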