Re: Significant slowdown of osds since v0.67 Dumpling

Hey Samuel,

Ok, here are the results.

wip-dumpling-perf, filestore_op_threads = 1, wbthrottle on:
# rbd bench-write test --io-pattern=rand
bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern rand
  SEC       OPS   OPS/SEC   BYTES/SEC
    1       666    665.67  1948743.06
    2      1252    625.93  1736506.91
    3      1820    605.17  1676568.00
    4      2327    576.18  1621698.63
    5      2846    569.18  1595738.73
<...>
  120     41315    344.18  944401.84
  121     41542    342.44  939594.59
  122     41545    339.73  932188.17
  123     41871    340.41  934569.22
  124     42387    341.82  937823.62
  125     42928    343.42  942712.67
  126     43383    344.25  945449.19
  127     43673    341.91  939377.39
  128     43679    341.23  937538.98
  129     44092    341.64  939172.94
  130     44578    342.81  943122.90

A lot slower than before on that run!

wip-dumpling-perf, filestore_op_threads = 1, wbthrottle off:
# rbd bench-write test --io-pattern=rand
bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern rand
  SEC       OPS   OPS/SEC   BYTES/SEC
    1       639    612.08  1679243.28
    2      1206    602.94  1597289.06
    3      1727    575.08  1563077.66
    4      2015    499.77  1350135.09
    5      2557    508.57  1364569.72
<...>
  120     40120    334.25  909605.77
  121     40582    335.26  912615.10
  122     41113    336.99  917671.19
  123     41608    338.21  921584.95
  124     41799    336.25  916172.54
  125     41841    334.32  910952.43
  126     42200    334.91  912890.76
  127     42728    336.34  916466.27
  128     43261    337.98  921053.64
  129     43794    339.36  924781.72
  130     44201    337.85  920142.46

About the same results as with wbthrottle on.

But now, Cuttlefish 0.61.7, with filestore_op_threads = 1:
# rbd bench-write test --io-pattern=rand
bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern rand
  SEC       OPS   OPS/SEC   BYTES/SEC
    1       746    745.91  2010904.75
    2      1549    767.08  2129811.12
    3      1950    624.91  1741872.75
    4      2376    587.65  1670531.75
    5      2478    491.44  1398825.26
<...>
  120     53843    446.01  1260779.39
  121     54019    446.20  1261119.16
  122     54766    448.90  1267932.52
  123     55623    452.21  1278362.02
  125     56257    450.00  1272748.87
  126     56528    448.55  1267875.53
  127     56642    445.02  1257968.92
  128     57275    447.42  1264560.26
  129     58108    450.44  1274304.34
  130     58796    449.40  1271494.05

So, Cuttlefish is also slower with filestore_op_threads = 1, but it still
beats Dumpling hands-down.
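
For reference, something like this in the [osd] section of ceph.conf, followed by an
OSD restart, is how these settings get toggled between runs (just a minimal sketch;
filestore_wbthrottle_enable is of course only recognised by the wip-dumpling-perf
build):

[osd]
        filestore op threads = 1
        # only understood by the wip-dumpling-perf build:
        filestore wbthrottle enable = false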

PS: If I test with just `rbd bench-write test', the results on
wip-dumpling-perf remain almost identical with or without any of the above
options.
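
For clarity, the two invocations compared there are simply:

# rbd bench-write test
# rbd bench-write test --io-pattern=rand

both with the defaults of io_size 4096 and io_threads 16; as far as I know, the
plain form just falls back to a sequential write pattern.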


   Regards,

     Oliver

On Fri, 2013-08-23 at 13:55 -0700, Samuel Just wrote:
> Ok, can you try setting filestore_op_threads to 1 on both cuttlefish
> and wip-dumpling-perf (with and without wbthrottle, default wbthrottle
> settings).  I suspect I created contention in the filestore op threads
> (FileStore::lfn_open specifically), and if so, setting it to only use 1
> thread should even out the performance.
> -Sam
> 
> On Fri, Aug 23, 2013 at 1:44 PM, Oliver Daudey <oliver@xxxxxxxxx> wrote:
> > Hey Samuel,
> >
> > I commented the earlier settings out, so it was with defaults.
> >
> >
> >    Regards,
> >
> >       Oliver
> >
> > On Fri, 2013-08-23 at 13:35 -0700, Samuel Just wrote:
> >> When you were running with the wbthrottle on, did you have the
> >> settings I gave you earlier set, or was it using the defaults?
> >> -Sam
> >>
> >> On Fri, Aug 23, 2013 at 12:48 PM, Oliver Daudey <oliver@xxxxxxxxx> wrote:
> >> > Hey Samuel,
> >> >
> >> > That changed something, for the better. :-)
> >> >
> >> > Your test-version, with wbthrottle off:
> >> > # ceph-osd --version
> >> > ceph version 0.67.1-18-g3fe3368
> >> > (3fe3368ac7178dcd312e89d264d8d81307e582d8)
> >> > # ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep wbthrottle_enable
> >> >   "filestore_wbthrottle_enable": "false",
> >> > # rbd bench-write test --io-pattern=rand
> >> > bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern rand
> >> >   SEC       OPS   OPS/SEC   BYTES/SEC
> >> >     1       838    837.36  2283825.44
> >> >     2      1679    827.66  2309860.21
> >> >     3      2488    829.30  2320967.64
> >> >     4      2812    655.55  1847713.58
> >> >     5      3301    659.64  1849006.69
> >> > <...>
> >> >   120     54113    450.83  1271297.14
> >> >   121     54847    453.13  1277957.21
> >> >   122     55502    454.93  1281769.31
> >> >   123     55768    451.09  1271512.03
> >> >   124     55853    450.42  1269452.15
> >> >   126     55889    442.22  1246494.89
> >> >   127     56351    443.71  1250308.66
> >> >   128     57135    446.36  1257746.91
> >> >   129     57445    444.03  1251566.31
> >> >   130     57944    445.72  1256426.99
> >> >
> >> > Now with the wbthrottle-option enabled:
> >> > # ceph-osd --version
> >> > ceph version 0.67.1-18-g3fe3368
> >> > (3fe3368ac7178dcd312e89d264d8d81307e582d8)
> >> > # ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep wbthrottle_enable
> >> >   "filestore_wbthrottle_enable": "true",
> >> > # rbd bench-write test --io-pattern=rand
> >> > bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern rand
> >> >   SEC       OPS   OPS/SEC   BYTES/SEC
> >> >     1       822    821.23  2303875.45
> >> >     2      1510    738.54  2021390.37
> >> >     3      2224    740.29  1989216.54
> >> >     4      2843    708.01  1912598.24
> >> >     5      3294    652.35  1774048.26
> >> > <...>
> >> >   120     59470    495.58  1388776.41
> >> >   122     60067    489.29  1370954.96
> >> >   123     60083    487.74  1366434.49
> >> >   124     60449    487.49  1366477.77
> >> >   125     61122    488.97  1370480.73
> >> >   126     61679    489.52  1372290.03
> >> >   127     62195    489.69  1372523.64
> >> >   128     62608    489.02  1370226.98
> >> >   129     62655    485.68  1360719.66
> >> >   130     62688    482.01  1350560.76
> >> >
> >> > Slightly faster than with wbthrottle disabled.  For comparison, one
> >> > more time with the Cuttlefish OSDs:
> >> > # ceph-osd --version
> >> > ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)
> >> > # rbd bench-write test --io-pattern=rand
> >> > bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern rand
> >> >   SEC       OPS   OPS/SEC   BYTES/SEC
> >> >     1      1074   1073.80  3079631.51
> >> >     2      2121   1060.21  3001542.58
> >> >     3      3111    845.59  2422586.94
> >> >     4      3132    745.40  2132915.48
> >> >     5      3336    665.33  1904205.15
> >> > <...>
> >> >   120     63700    530.81  1530767.77
> >> >   121     64781    535.37  1544861.37
> >> >   122     64944    532.10  1535398.54
> >> >   123     65312    527.70  1522971.49
> >> >   124     65526    528.42  1525134.71
> >> >   125     66160    529.12  1527158.94
> >> >   126     67214    533.44  1539955.47
> >> >   127     67855    534.29  1541965.13
> >> >   128     67939    528.79  1526270.55
> >> >   129     68178    528.44  1525239.57
> >> >   130     68811    529.24  1527681.84
> >> >
> >> > Cuttlefish still wins, but by a much smaller margin now.  Looks like
> >> > we're onto something.  The fdatasync change seems to be the key here,
> >> > rather than disabling the wbthrottle.
> >> >
> >> >
> >> >    Regards,
> >> >
> >> >       Oliver
> >> >
> >> > On 23-08-13 19:53, Samuel Just wrote:
> >> >> I pushed a branch, wip-dumpling-perf.  It does two things:
> >> >> 1) adds a config filestore_wbthrottle_enable (defaults to true) to
> >> >> allow you to disable the wbthrottle altogether
> >> >> 2) causes the wbthrottle when enabled to fdatasync rather than fsync.
> >> >>
> >> >> Can you rerun the random workload with that branch with
> >> >> filestore_wbthrottle_enable on and then off?
> >> >> -Sam
> >> >>
> >> >> On Fri, Aug 23, 2013 at 10:26 AM, Oliver Daudey <oliver@xxxxxxxxx> wrote:
> >> >>> Hey Sage,
> >> >>>
> >> >>> I'm all for it and will help testing.
> >> >>>
> >> >>>
> >> >>>    Regards,
> >> >>>
> >> >>>       Oliver
> >> >
> >>
> >
> >
> 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



