Re: performance issues

Hello,

First and foremost, do yourself and everybody else a favor by thoroughly
searching the net and thus the ML archives.
This kind of question has come up and been answered countless times.


On Thu, 6 Apr 2017 09:59:10 +0800 PYH wrote:

> what I meant is, when the total IOPS reach 3000+, the whole cluster 
> gets very slow. so any idea? thanks.
> 
Gee whiz, that tends to happen when you push the limits of your
capacity/design.

> On 2017/4/6 9:51, PYH wrote:
> > Hi,
> > 
> > we have 21 hosts, each with 12 disks (4TB SATA), no SSD as journal or 
> > cache tier.
That's your problem right there: pure HDD setups will not produce good
IOPS.
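
The usual fix is to put the journals on SSDs, commonly one SSD serving
4-6 HDD OSDs. A minimal sketch with ceph-disk (device names are
placeholders: /dev/sdc an HDD for data, /dev/sda3 a pre-made partition
on an SSD for the journal):

$ # create an OSD with data on the HDD and journal on the SSD partition
$ ceph-disk prepare /dev/sdc /dev/sda3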
 
> > so the total OSD number is 21x12=252.
> > there are three separate hosts for monitor nodes.
> > network is 10Gbps. replicas are 3.
> > 

252 / 3 (replicas) / 2 (journal on disk) = 42 effective spindles.
That's ignoring the journaling by the FS, the fact that writing RBD
objects isn't a sequential operation, etc.
If we (optimistically) assume 100 IOPS per HDD, that would give us
4200 IOPS in your case.

Factoring in everything else omitted up there, 3000 IOPS is pretty much
what I would expect.
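
Spelled out as a back-of-the-envelope calculation (the 100 IOPS per HDD
figure is the optimistic assumption from above):

$ echo $(( 252 / 3 / 2 * 100 ))
4200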

> > under this setup, we can get only 3000+ IOPS for random writes for the 
> > whole cluster. Test method:
> > 
> > $ fio -name iops -rw=randwrite -bs=4k -runtime=60 -iodepth 64 -numjobs=2 
> > -filename /dev/rbd0 -ioengine libaio -direct=1
> >
You're testing the kernel client here (which may or may not be worse than
librbd user space), and a single-client test like this (numjobs won't
help/change things) is also largely affected by latency and RTT issues.
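
If you want to take the kernel client out of the equation, fio can talk
to librbd directly via its rbd ioengine; something like this (pool,
image and client names are examples, adjust to your cluster):

$ fio -name iops -rw=randwrite -bs=4k -runtime=60 -iodepth 64 \
      -ioengine rbd -clientname admin -pool rbd -rbdname testimage

And to measure aggregate cluster IOPS rather than single-client
latency, run fio from several client machines in parallel.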

Christian
 
> > it's much lower than I expected. Do you have any suggestions?
> > 
> > thanks.


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/


