Re: Significant slowdown of osds since v0.67 Dumpling

When you were running with the wbthrottle on, did you have the
settings I gave you earlier set, or was it using the defaults?
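(For anyone else following along, the XFS-side wbthrottle knobs live
under [osd] in ceph.conf; the values below are what should be the
stock defaults, shown for illustration only, not necessarily the
values from earlier in the thread:

  [osd]
  filestore wbthrottle xfs bytes start flusher = 41943040
  filestore wbthrottle xfs bytes hard limit = 419430400
  filestore wbthrottle xfs ios start flusher = 500
  filestore wbthrottle xfs ios hard limit = 5000
  filestore wbthrottle xfs inodes start flusher = 500
  filestore wbthrottle xfs inodes hard limit = 5000
)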
-Sam

On Fri, Aug 23, 2013 at 12:48 PM, Oliver Daudey <oliver@xxxxxxxxx> wrote:
> Hey Samuel,
>
> That changed something, for the better. :-)
>
> Your test-version, with wbthrottle off:
> # ceph-osd --version
> ceph version 0.67.1-18-g3fe3368
> (3fe3368ac7178dcd312e89d264d8d81307e582d8)
> # ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep
> wbthrottle_enable
>   "filestore_wbthrottle_enable": "false",
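>
> (Side note: the option can presumably also be toggled at runtime with
> injectargs, e.g.:
>
> # ceph tell osd.1 injectargs '--filestore_wbthrottle_enable=false'
>
> assuming it is marked runtime-changeable in this branch.)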
> # rbd bench-write test --io-pattern=rand
> bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern rand
>   SEC       OPS   OPS/SEC   BYTES/SEC
>     1       838    837.36  2283825.44
>     2      1679    827.66  2309860.21
>     3      2488    829.30  2320967.64
>     4      2812    655.55  1847713.58
>     5      3301    659.64  1849006.69
> <...>
>   120     54113    450.83  1271297.14
>   121     54847    453.13  1277957.21
>   122     55502    454.93  1281769.31
>   123     55768    451.09  1271512.03
>   124     55853    450.42  1269452.15
>   126     55889    442.22  1246494.89
>   127     56351    443.71  1250308.66
>   128     57135    446.36  1257746.91
>   129     57445    444.03  1251566.31
>   130     57944    445.72  1256426.99
>
> Now with the wbthrottle-option enabled:
> # ceph-osd --version
> ceph version 0.67.1-18-g3fe3368
> (3fe3368ac7178dcd312e89d264d8d81307e582d8)
> # ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep
> wbthrottle_enable
>   "filestore_wbthrottle_enable": "true",
> # rbd bench-write test --io-pattern=rand
> bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern rand
>   SEC       OPS   OPS/SEC   BYTES/SEC
>     1       822    821.23  2303875.45
>     2      1510    738.54  2021390.37
>     3      2224    740.29  1989216.54
>     4      2843    708.01  1912598.24
>     5      3294    652.35  1774048.26
> <...>
>   120     59470    495.58  1388776.41
>   122     60067    489.29  1370954.96
>   123     60083    487.74  1366434.49
>   124     60449    487.49  1366477.77
>   125     61122    488.97  1370480.73
>   126     61679    489.52  1372290.03
>   127     62195    489.69  1372523.64
>   128     62608    489.02  1370226.98
>   129     62655    485.68  1360719.66
>   130     62688    482.01  1350560.76
>
> Slightly faster than with the wbthrottle disabled.  For comparison,
> one more time with the Cuttlefish OSDs:
> # ceph-osd --version
> ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)
> # rbd bench-write test --io-pattern=rand
> bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern rand
>   SEC       OPS   OPS/SEC   BYTES/SEC
>     1      1074   1073.80  3079631.51
>     2      2121   1060.21  3001542.58
>     3      3111    845.59  2422586.94
>     4      3132    745.40  2132915.48
>     5      3336    665.33  1904205.15
> <...>
>   120     63700    530.81  1530767.77
>   121     64781    535.37  1544861.37
>   122     64944    532.10  1535398.54
>   123     65312    527.70  1522971.49
>   124     65526    528.42  1525134.71
>   125     66160    529.12  1527158.94
>   126     67214    533.44  1539955.47
>   127     67855    534.29  1541965.13
>   128     67939    528.79  1526270.55
>   129     68178    528.44  1525239.57
>   130     68811    529.24  1527681.84
>
> Cuttlefish still wins (~529 ops/sec at the 130-second mark, versus
> ~482 with the wbthrottle on and ~446 with it off), but by a much
> smaller margin now.  Looks like we're onto something.  The switch to
> fdatasync seems to be the key here, rather than disabling the
> wbthrottle.
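>
> (For anyone wondering why that would matter: fsync(2) flushes file
> data plus all inode metadata, while fdatasync(2) skips metadata that
> isn't needed to read the data back, such as mtime updates, which can
> save a journal write per flush.  Below is a minimal sketch of the
> difference, not the actual FileStore code.)
>
> #include <unistd.h>
>
> /* Flush a file descriptor to stable storage.  fsync() also forces
>  * out inode metadata like mtime; fdatasync() only guarantees the
>  * data (plus the metadata needed to retrieve it), so it can avoid
>  * an extra journal write on each flush. */
> int flush_fd(int fd, int data_only)
> {
>     return data_only ? fdatasync(fd) : fsync(fd);
> }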
>
>
>    Regards,
>
>       Oliver
>
> On 23-08-13 19:53, Samuel Just wrote:
>> I pushed a branch, wip-dumpling-perf.  It does two things:
>> 1) adds a config option, filestore_wbthrottle_enable (defaults to
>> true), to allow you to disable the wbthrottle altogether
>> 2) causes the wbthrottle, when enabled, to use fdatasync rather than
>> fsync.
>>
>> Can you rerun the random workload with that branch with
>> filestore_wbthrottle_enable on and then off?
>> -Sam
>>
>> On Fri, Aug 23, 2013 at 10:26 AM, Oliver Daudey <oliver@xxxxxxxxx> wrote:
>>> Hey Sage,
>>>
>>> I'm all for it and will help testing.
>>>
>>>
>>>    Regards,
>>>
>>>       Oliver
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



