Re: Pinpointing performance bottleneck / would SSD journals help?

Hello,

On Mon, 27 Jun 2016 21:35:35 +0100 Nick Fisk wrote:
[snip]
> 
> You need to run iostat on the OSD nodes themselves and see what the disks
> are doing. You stated that they are doing ~180iops per disk, which
> suggests they are highly saturated and likely to be the cause of the
> problem. I'm guessing you will also see really high queue depths per
> disk, which normally is the cause of high latency.
>
This.
Bosun (never used it) should have shown you this already, if it's
worth its salt.

Running atop (in a large window) on your OSD nodes should give you a very
clear picture, too.
Including network usage (unlikely to be your problem, but your 1Gb/s links
will hurt you latency-wise).

I predict you'll see lots of red and near 100% utilization on your OSD
drives when your cluster is getting into trouble. 
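A minimal sketch of what to watch for with iostat (the device-name pattern
and the 90% threshold are arbitrary choices for illustration, and the
position of the %util column can vary between sysstat versions -- check
the header on your boxes):

```shell
# Extended per-device stats every 5 seconds. Watch avgqu-sz (queue
# depth) and %util: near-100% utilization with deep queues means the
# spinners are saturated.
iostat -x 5

# One-liner to flag saturated sd* devices, assuming %util is the last
# column (true for common sysstat versions):
iostat -x 5 2 | awk '$1 ~ /^sd/ && $NF+0 > 90 {print $1, "util=" $NF "%"}'
```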

> If you add SSD journals and a large amount of your IO is writes, then you
> may see an improvement. But you may also be at the point where you just
> need more disks to be able to provide the required performance.
> 
SSD journals will roughly double your IOPS, and since you're at best going
to write around 400MB/s due to your network bandwidth, you can get away
with using fewer/smaller SSDs.
Two 200GB DC S3610s or one 400GB DC S3710 would do the trick.
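To put rough numbers on that sizing (the sequential-write speeds below are
approximate datasheet figures I'm assuming from memory, not measurements;
verify against Intel's spec sheets for your exact SKUs):

```python
# Back-of-envelope journal SSD sizing against the network ceiling.
link_mb_s = 125                    # a 1 Gb/s link is roughly 125 MB/s
cluster_write_ceiling = 400        # MB/s, the rough network-imposed ceiling

s3610_200g_write = 230             # MB/s sequential write (approx. datasheet)
s3710_400g_write = 470             # MB/s sequential write (approx. datasheet)

# Both options cover the 400 MB/s ceiling:
print(2 * s3610_200g_write >= cluster_write_ceiling)  # True (460 >= 400)
print(s3710_400g_write >= cluster_write_ceiling)      # True (470 >= 400)
```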

Past that point you need to grow the cluster (more OSDs, of course with
SSD journals) and/or consider cache-tiering.
The latter can give you dramatic gains, but this very much depends on your
usage patterns and the size of your hot data set.

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
