Re: PCIE-SSD OSD bottom performance issue

my ceph.conf 
++++++++++++++++++++++++++++++++++++++++++++++++
[global]
auth_service_required = cephx
osd_pool_default_size = 2
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 172.168.2.171
mon_initial_members = tds_node01
fsid = fef619c4-5f4a-4bf1-a787-6c4d17995ec4

keyvaluestore op threads = 4
osd op threads = 4
filestore op threads = 4
osd disk threads = 2
osd max write size = 180
osd agent max ops = 8

rbd readahead trigger requests = 20
rbd readahead max bytes = 1048576
rbd readahead disable after bytes = 104857600

[mon.ceph_node01]
host = ceph_node01
mon addr = 172.168.2.171:6789

[mon.ceph_node02]
host = ceph_node02
mon addr = 192.168.2.172:6789

[mon.ceph_node03]
host = ceph_node03
mon addr = 192.168.2.171:6789



[osd.0]
host = ceph_node03
devs = /dev/nvme0n1p5


[osd.1]
host = ceph_node04
devs = /dev/nvme0n1p5
++++++++++++++++++++++++++++++++++++++++++++++
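For what it's worth, a quick way to double-check that the running OSD daemons actually picked up these thread settings is to query the admin socket on the OSD host (just a sketch; osd.0 matches the [osd.0] section above, adjust the id for the other OSD):

# Values the osd.0 daemon is actually using (run on its host):
ceph daemon osd.0 config get osd_op_threads
ceph daemon osd.0 config get filestore_op_threads

# Or dump everything in effect and filter for the thread options:
ceph daemon osd.0 config show | grep op_threads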

Even if I didn't adjust the thread parameters, the performance result is the same.



scott_tang86@xxxxxxxxx
 
From: Christian Balzer
Date: 2015-08-21 09:40
To: ceph-users
CC: scott_tang86@xxxxxxxxx; liuxy666
Subject: Re: PCIE-SSD OSD bottom performance issue
 
Hello,
 
On Thu, 20 Aug 2015 15:47:46 +0800 scott_tang86@xxxxxxxxx wrote:
 
The reason that you're not getting any replies is because we're not
psychic/telepathic/clairvoyant.
 
Meaning that you're not giving us enough information by far.
 
> Dear all:
>     I used a PCIe SSD as the OSD disk, but I found its performance very
> poor. I have two hosts, each with one PCIe SSD, so I created two OSDs
> on the PCIe SSDs.
>
What PCIe SSD?
What hosts (HW, OS), network?
What Ceph version, config changes?
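For reference, the quickest way to include that information is something like:

ceph --version        # exact Ceph release on each node
ceph -s               # overall cluster status and health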
 
> ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 0.35999 root default
> -2 0.17999     host tds_node03
>  0 0.17999         osd.0             up  1.00000          1.00000
> -3 0.17999     host tds_node04
>  1 0.17999         osd.1             up  1.00000          1.00000
>
> I created a pool and an rbd device.
What kind of pool, any non-default options?
Where did you mount/access that RBD device from, userspace, kernel?
What file system, if any?
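For completeness, output along these lines would answer most of that (a sketch assuming a pool called rbd and an image called test; substitute your actual names):

ceph osd dump | grep pool     # pool size, pg_num and any non-default flags
rbd info rbd/test             # image size, format and features
rbd showmapped                # whether the image is mapped via the kernel client
mount | grep rbd              # filesystem (if any) on top of the mapped device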
 
> I used fio to test 8K randrw (70%) on the rbd device; the result is only 1W IOPS,
Exact fio invocation parameters, output please.
1W IOPS is supposed to mean 1 write IOPS?
Also, for comparison purposes, the "standard" is to test with 4KB blocks
for random access.
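As an illustration only, a run along these lines would be easy to compare against (assuming the image is kernel-mapped at /dev/rbd0; the job parameters here are just an example, not your actual test):

fio --name=rbd-randrw --filename=/dev/rbd0 --direct=1 --ioengine=libaio \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting
# Note: this writes straight to the block device, so point it at a scratch image.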
 
> I have tried many osd thread parameters, but to no effect.
Unless your HW or SSD has issues, the defaults should give a lot better results.
 
> But I tested 8K
> randrw (70%) on a single PCIe SSD, and it gets 10W IOPS.
>
10 write IOPS would still be abysmally slow.
Single means running fio against the SSD directly?
 
How does this compare to using the exact same setup but with HDDs or normal
SSDs?
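And for an apples-to-apples baseline, the same job run against the raw partition (same options as the RBD run, only the target changes; destructive to whatever is on that partition, so only use a scratch device, never one backing a live OSD):

fio --name=raw-baseline --filename=/dev/nvme0n1p5 --direct=1 --ioengine=libaio \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting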
 
Christian
--
Christian Balzer        Network/Systems Engineer               
chibi@xxxxxxx   Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
