Fwd: Ceph performance issue!

What does "very high load" mean for you?
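Numbers would help more than adjectives here. A minimal sketch of what one could look at, assuming only stock tools (nothing cluster-specific):

    ceph osd perf        # per-OSD commit/apply latency as reported by the cluster
    ceph osd tree        # confirm which OSDs sit on the busy host
    iostat -x 1          # per-disk utilisation and await on that host (sysstat package)
    top                  # CPU and iowait on the same host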

––––
Sébastien Han 
Cloud Engineer 

"Always give 100%. Unless you're giving blood.? 

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien.han at enovance.com 
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance 

Begin forwarded message:

> From: "Ben's Gmail" <hengfenghappy at gmail.com>
> Subject: Ceph performance issue!
> Date: 6 May 2014 11:25:17 GMT+2
> To: sebastien at enovance.com
> 
> Hi, Sebastien!
> 
>    I have recently been working with OpenStack (Havana) and Ceph (v0.72.2 Emperor). I use Ceph to store all Glance images, Cinder volumes and VMs, and I boot VMs from images by creating a new volume.
> 
>   There are 5 servers in my Ceph cluster. 4 servers run OSDs; each of them has 5 disks: one for the system (RHEL 6.4), 3 for OSD daemons, and the remaining one is an SSD for the journal. 1 server runs the Ceph monitor. All
> the servers are connected through a bonded device (2x10G NICs). I installed Ceph with ceph-deploy, and I changed the CRUSH map.
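(For reference: a text map like the one below is typically dumped, edited and re-injected along these lines; the /tmp paths are only examples.)

    ceph osd getcrushmap -o /tmp/crushmap                # grab the compiled map from the monitors
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt      # decompile it to editable text
    # ... edit /tmp/crushmap.txt ...
    crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new  # recompile
    ceph osd setcrushmap -i /tmp/crushmap.new            # inject the new map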
>   Here is my CRUSH map:
> # begin crush map
> tunable choose_local_tries 0
> tunable choose_local_fallback_tries 0
> tunable choose_total_tries 50
> tunable chooseleaf_descend_once 1
> 
> # devices
> device 0 osd.0
> device 1 osd.1
> device 2 osd.2
> device 3 osd.3
> device 4 osd.4
> device 5 osd.5
> device 6 osd.6
> device 7 osd.7
> device 8 osd.8
> device 9 osd.9
> device 10 osd.10
> device 11 osd.11
> 
> # types
> type 0 osd
> type 1 host
> type 2 rack
> type 3 row
> type 4 room
> type 5 datacenter
> type 6 root
> 
> # buckets
> host F7-Cloud25 {
>         id -2           # do not change unnecessarily
>         # weight 1.650
>         alg straw
>         hash 0  # rjenkins1
>         item osd.0 weight 0.550
>         item osd.1 weight 0.550
>         item osd.2 weight 0.550
> }
> host F7-Cloud26 {
>         id -3           # do not change unnecessarily
>         # weight 1.650
>         alg straw
>         hash 0  # rjenkins1
>         item osd.3 weight 0.550
>         item osd.4 weight 0.550
>         item osd.5 weight 0.550
> }
> host F7-cloud27 {
>         id -4           # do not change unnecessarily
>         # weight 1.650
>         alg straw
>         hash 0  # rjenkins1
>         item osd.7 weight 0.550
>         item osd.8 weight 0.550
>         item osd.6 weight 0.550
> }
> host F7-Cloud28 {
>         id -5           # do not change unnecessarily
>         # weight 1.650
>         alg straw
>         hash 0  # rjenkins1
>         item osd.10 weight 0.550
>         item osd.11 weight 0.550
>         item osd.9 weight 0.550
> }
> root default {
>         id -1           # do not change unnecessarily
>         # weight 6.600
>         alg straw
>         hash 0  # rjenkins1
>         item F7-Cloud25 weight 1.650
>         item F7-Cloud26 weight 1.650
>         item F7-cloud27 weight 1.650
>         item F7-Cloud28 weight 1.650
> }
> rack A {
>         id -6           # do not change unnecessarily
>         # weight 3.300
>         alg straw
>         hash 0  # rjenkins1
>         item F7-Cloud25 weight 1.650
>         item F7-Cloud26 weight 1.650
> }
> host F7-Cloud27 {
>         id -9           # do not change unnecessarily
>         # weight 0.000
>         alg straw
>         hash 0  # rjenkins1
> }
> rack B {
>         id -7           # do not change unnecessarily
>         # weight 3.300
>         alg straw
>         hash 0  # rjenkins1
>         item F7-cloud27 weight 1.650
>         item F7-Cloud28 weight 1.650
>         item F7-Cloud27 weight 0.000
> }
> root AB {
>         id -8           # do not change unnecessarily
>         # weight 6.600
>         alg straw
>         hash 0  # rjenkins1
>         item A weight 3.300
>         item B weight 3.300
> }
> 
> # rules
> rule data {
>         ruleset 0
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step chooseleaf firstn 0 type host
>         step emit
> }
> rule metadata {
>         ruleset 1
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step chooseleaf firstn 0 type host
>         step emit
> }
> rule rbd {
>         ruleset 2
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step chooseleaf firstn 0 type host
>         step emit
> }
> rule AB {
>         ruleset 3
>         type replicated
>         min_size 1
>         max_size 10
>         step take AB
>         step chooseleaf firstn 0 type rack
>         step emit
> }
> 
> # end crush map
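(A quick, offline way to see how rule 3 spreads data over the racks above is to run the compiled map through crushtool; the file name and the replica count of 2 below are assumptions.)

    crushtool -c crushmap.txt -o crushmap.bin
    crushtool -i crushmap.bin --test --rule 3 --num-rep 2 --show-utilization
    # add --show-mappings to print the exact OSD set chosen for each sample input
    crushtool -i crushmap.bin --test --rule 3 --num-rep 2 --show-mappings | head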
> 
> 
> 
> ##################################
> 
> 
> 
>   I created a ceph-test pool using CRUSH rule 3, and I tested my Ceph cluster with rados bench. I found that only one OSD host's load gets very high. I don't know what to do.
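(Roughly the sequence I assume was used; the PG count of 512 and the 60-second duration are only example values, and on Emperor the pool setting is still called crush_ruleset.)

    ceph osd pool create ceph-test 512 512          # pool with 512 placement groups
    ceph osd pool set ceph-test crush_ruleset 3     # point the pool at rule AB (ruleset 3)
    rados bench -p ceph-test 60 write --no-cleanup  # 60 s write test, keep the objects
    rados bench -p ceph-test 60 seq                 # sequential read test against the same objects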
> 
> 
> 
>     Thanks for your help!!!
