Re: Poor IOPS performance with Ceph

These may help you with performance analysis:

  http://ceph.com/docs/master/start/hardware-recommendations/
  http://www.sebastien-han.fr/blog/2013/10/03/quick-analysis-of-the-ceph-io-layer/
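
To separate RADOS-level performance from RBD/client effects, you could also run rados bench directly against a pool. A rough sketch (using the 'test1' pool from the output below; adjust the pool name and runtime as needed):

  # write throughput/IOPS at the RADOS layer (leaves benchmark objects behind)
  rados bench -p test1 30 write --no-cleanup
  # sequential reads of the objects written above
  rados bench -p test1 30 seq
  # remove the benchmark objects afterwards
  rados -p test1 cleanup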

Shinobu

----- Original Message -----
From: "Shinobu Kinjo" <skinjo@xxxxxxxxxx>
To: "Daleep Bais" <daleepbais@xxxxxxxxx>
Cc: "Ceph-User" <ceph-users@xxxxxxxx>
Sent: Wednesday, September 9, 2015 6:07:56 PM
Subject: Re:  Poor IOPS performance with Ceph

Are you also using that HDD to store journal data, or are you using an SSD for that purpose?
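
If you are not sure, here is a quick way to check on one of the OSD nodes (a sketch assuming the default FileStore data path and OSD id 0; adjust the id per node):

  # show where the journal symlink points; if it resolves to a partition
  # on the same 1TB HDD, the journal is colocated with the data
  readlink -f /var/lib/ceph/osd/ceph-0/journal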

Shinobu

----- Original Message -----
From: "Daleep Bais" <daleepbais@xxxxxxxxx>
To: "Shinobu Kinjo" <skinjo@xxxxxxxxxx>
Cc: "Ceph-User" <ceph-users@xxxxxxxx>
Sent: Wednesday, September 9, 2015 5:59:33 PM
Subject: Re:  Poor IOPS performance with Ceph

Hi Shinobu,

I have 1 x 1 TB HDD on each node. The network bandwidth between nodes is
1 Gbps.

Thanks for the info. I will also try to go through the mailing list discussions
related to performance.

Thanks.

Daleep Singh Bais


On Wed, Sep 9, 2015 at 2:09 PM, Shinobu Kinjo <skinjo@xxxxxxxxxx> wrote:

> How many disks does each OSD node have?
> What about the networking layer?
> There are several factors that can make your cluster much stronger.
>
> You may want to take a look at other discussions on this mailing list;
> there has been a lot of discussion about performance.
>
> Shinobu
>
> ----- Original Message -----
> From: "Daleep Bais" <daleepbais@xxxxxxxxx>
> To: "Ceph-User" <ceph-users@xxxxxxxx>
> Sent: Wednesday, September 9, 2015 5:17:48 PM
> Subject:  Poor IOPS performance with Ceph
>
> Hi,
>
> I have set up a test Ceph cluster with 6 OSDs and 3 MONs. I am testing the
> read/write performance of the cluster, and the read IOPS are poor.
> When I test each HDD individually, I get good performance, whereas when I
> test the Ceph cluster, performance is poor.
>
> Between nodes, using iperf, I get good bandwidth.
>
> My cluster info :
>
> root@ceph-node3:~# ceph --version
> ceph version 9.0.2-752-g64d37b7 (64d37b70a687eb63edf69a91196bb124651da210)
> root@ceph-node3:~# ceph -s
> cluster 9654468b-5c78-44b9-9711-4a7c4455c480
> health HEALTH_OK
> monmap e9: 3 mons at {ceph-node10=192.168.1.210:6789/0,ceph-node17=192.168.1.217:6789/0,ceph-node3=192.168.1.203:6789/0}
> election epoch 442, quorum 0,1,2 ceph-node3,ceph-node10,ceph-node17
> osdmap e1850: 6 osds: 6 up, 6 in
> pgmap v17400: 256 pgs, 2 pools, 9274 MB data, 2330 objects
> 9624 MB used, 5384 GB / 5394 GB avail
> 256 active+clean
>
>
> I have mapped an RBD block device to a client machine (Ubuntu 14) and, when
> I run tests there using fio, I get good write IOPS; however, read IOPS are
> comparatively poor.
>
> Write IOPS : 44618 approx
>
> Read IOPS : 7356 approx
>
> Pool replication: single copy
> pool 1 'test1' replicated size 1 min_size 1
>
> I have also set the rbd_readahead options in my ceph.conf file.
> Any suggestions in this regard will help me.
>
> Thanks.
>
> Daleep Singh Bais
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


