Re: ceph on ubuntu and centos

You might try:
osd client message size cap = 26214400
osd client message cap = 25

osd op threads = 8 and filestore op threads = 8 might also be good.
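
For reference, a rough sketch of how those settings could go into the [osd]
section of ceph.conf (values are just the suggestions above; restart the OSDs
after changing them):

[osd]
    # limit memory held by in-flight client messages (26214400 bytes = 25 MB)
    osd client message size cap = 26214400
    # limit the number of in-flight client messages
    osd client message cap = 25
    # more worker threads for OSD ops and the filestore backend
    osd op threads = 8
    filestore op threads = 8
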
Let us know what you find!

Sounds like the kernel is the most obvious candidate for the slowness
on CentOS 6.4. Is there a 3.0+ kernel available for CentOS 6.4 that you could
try?
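
If it helps, one possible route (an assumption on my part, not something
verified in this thread) is the ELRepo kernel packages for EL6, roughly:

    # after installing the elrepo-release package for EL6 (see elrepo.org):
    yum --enablerepo=elrepo-kernel install kernel-ml   # or kernel-lt for the long-term branch
    # point the grub default at the new kernel, reboot, then confirm:
    uname -r
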
-Sam

On Mon, Oct 7, 2013 at 3:16 PM, hjwsm1989@xxxxxxxxx <hjwsm1989@xxxxxxxxx> wrote:
> Thanks for your reply!
> The Ubuntu kernel is 3.8; the CentOS kernel version is 2.6.32.
> Which settings should we change to get a smooth write speed?
> We tried tuning some parameters:
> osd op threads= 8
> filestore op threads = 8
> filestore max op queue = 30
> Which one will have the largest effect on performance?
>
> thanks
>
> Samuel Just <sam.just@xxxxxxxxxxx> wrote:
>
>>Interesting!  What kernel versions were running on the 13.10 and
>>centos 6.4 clusters?
>>-Sam
>>
>>On Fri, Oct 4, 2013 at 6:33 PM, huangyellowhuang
>><huangyellowhuang@xxxxxxx> wrote:
>>> Hi, all
>>> We tested Ceph version 0.69 (6ca6f2f9f754031f4acdb971b71c92c9762e18c3) on
>>> Ubuntu Server 13.10 and CentOS 6.4 Final.
>>> Our cluster configuration:
>>> 3 host machines, each running 3 OSDs (with XFS as the backend fs); the MON and MDS run on
>>> one of the three hosts.
>>> We have one kclient on Ubuntu Server 13.10.
>>>
>>> The cluster running on Ubuntu works fine, with only a few 'slow requests' messages and about
>>> 100 MB/s write speed.
>>> But the cluster running on CentOS performs very badly: about 30 MB/s average write speed and
>>> many slow OSD requests:
>>> 2013-10-05 08:35:09.931145 mon.0 [INF] pgmap v928: 192 pgs: 192 active+clean; 50873 MB data, 101716 MB used, 13857 GB / 13956 GB avail; 115 MB/s wr, 28 op/s
>>> 2013-10-05 08:35:12.087614 mon.0 [INF] pgmap v929: 192 pgs: 192 active+clean; 50901 MB data, 101780 MB used, 13857 GB / 13956 GB avail; 32593 KB/s wr, 8 op/s
>>> 2013-10-05 08:35:03.963979 osd.0 [WRN] 37 slow requests, 1 included below; oldest blocked for > 798.235962 secs
>>> 2013-10-05 08:35:03.963984 osd.0 [WRN] slow request 240.831078 seconds old, received at 2013-10-05 08:31:03.132836: osd_op(mds.0.1:375 200.00000000 [writefull 0~84] 1.844f3494 e47) v4 currently no flag points reached
>>> 2013-10-05 08:35:08.965134 osd.0 [WRN] 37 slow requests, 1 included below; oldest blocked for > 803.237127 secs
>>> 2013-10-05 08:35:08.965139 osd.0 [WRN] slow request 480.312618 seconds old, received at 2013-10-05 08:27:08.652461: osd_op(mds.0.1:307 200.00000000 [writefull 0~84] 1.844f3494 e47) v4 currently no flag points reached
>>> 2013-10-05 08:35:10.965619 osd.0 [WRN] 37 slow requests, 1 included below; oldest blocked for > 805.237600 secs
>>> 2013-10-05 08:35:10.965624 osd.0 [WRN] slow request 120.946652 seconds old, received at 2013-10-05 08:33:10.018900: osd_op(mds.0.1:404 200.00000000 [writefull 0~84] 1.844f3494 e47) v4 currently no flag points reached
>>> 2013-10-05 08:35:11.965986 osd.0 [WRN] 37 slow requests, 1 included below; oldest blocked for > 806.237800 secs
>>> 2013-10-05 08:35:11.965992 osd.0 [WRN] slow request 60.474314 seconds old, received at 2013-10-05 08:34:11.491438: osd_op(mds.0.1:430 200.00000000 [writefull 0~84] 1.844f3494 e47) v4 currently no flag points reached
>>>
>>> We also need to build a cluster with 4 hosts, each with 18 OSDs and 1 kclient;
>>> each kclient acts as a Samba server that serves 4 Samba clients.
>>> 1) Which Linux distribution should we use? CentOS or Ubuntu?
>>> 2) Why is the Ceph performance difference so large between the two distributions?
>>> 3) It seems the bottleneck is that the underlying fs cannot handle requests as fast as
>>> Ceph expects, since a 'slow requests' warning is logged when a request has not been handled
>>> within 30s.
>>>
>>> Thanks!
>>>
>>>
>>>



