Hi Mark,
Just FYI: the performance regression in Pacific releases prior to 16.2.6 could
also be caused by redundant deferred write usage, which was fixed by
https://tracker.ceph.com/issues/52244
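For anyone not familiar with the deferred write path: BlueStore routes small
writes through the RocksDB WAL first and flushes them to their final location
later, with the cutoff controlled by the bluestore_prefer_deferred_size_hdd/ssd
options. The tracker above has the details of the actual defect; the snippet
below is only a toy sketch of the size-threshold decision, not Ceph's code, and
the threshold values are placeholders (see `ceph config help
bluestore_prefer_deferred_size_ssd` for the real defaults):

# Toy illustration of BlueStore's deferred-write size cutoff -- NOT Ceph's
# actual code. Threshold values are placeholders standing in for
# bluestore_prefer_deferred_size_hdd / bluestore_prefer_deferred_size_ssd.

PREFER_DEFERRED_HDD = 64 * 1024   # placeholder for bluestore_prefer_deferred_size_hdd
PREFER_DEFERRED_SSD = 16 * 1024   # placeholder for bluestore_prefer_deferred_size_ssd

def takes_deferred_path(write_len: int, rotational: bool) -> bool:
    """Return True if a write of write_len bytes would be deferred, i.e. first
    committed via the WAL and only later flushed to its final location."""
    threshold = PREFER_DEFERRED_HDD if rotational else PREFER_DEFERRED_SSD
    return write_len < threshold

# With these placeholder thresholds a 4 KiB random write lands on the deferred
# path, so any extra (redundant) deferral overhead would be visible in a
# 4k randwrite sweep like the ones below.
assert takes_deferred_path(4 * 1024, rotational=False)
assert not takes_deferred_path(256 * 1024, rotational=False)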
Thanks,
Igor
On 2/23/2022 5:26 PM, Mark Nelson wrote:
4k randwrite   Sweep 0 IOPS   Sweep 1 IOPS   Sweep 2 IOPS   Notes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
14.0.0         496381         491794         451793         Worst
14.2.0         638907         620820         596170         Big improvement
14.2.10        624802         565738         561014
14.2.16        628949         564055         523766
14.2.17        616004         550045         507945
14.2.22        711234         654464         614117         Huge start, but degrades
15.0.0         636659         620931         583773
15.2.15        580792         574461         569541         No longer degrades
15.2.15b       584496         577238         572176         Same
16.0.0         551112         550874         549273         Worse than octopus? (Doesn't match prior Intel tests)
16.2.0         518326         515684         523475         Regression, doesn't degrade
16.2.4         516891         519046         525918
16.2.6         585061         595001         595702         Big win, doesn't degrade
16.2.7         597822         605107         603958         Same
16.2.7b        586469         600668         599763         Same
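The exact benchmark setup isn't shown in this thread; as a rough point of
reference, a single 4k randwrite pass of the kind summarized above can be
scripted along these lines (device path, iodepth, and runtime are placeholders,
and the real tests presumably ran against OSDs/RBD rather than a raw device):

# Rough sketch of one 4k random-write pass via fio. All parameters below are
# placeholders, not the setup used to produce the table above.
import json
import subprocess

def run_4k_randwrite(device: str, runtime_s: int = 60, iodepth: int = 32) -> float:
    """Run one fio 4k randwrite pass and return the reported write IOPS."""
    cmd = [
        "fio",
        "--name=randwrite-4k",
        f"--filename={device}",
        "--rw=randwrite",
        "--bs=4k",
        "--direct=1",
        "--ioengine=libaio",
        f"--iodepth={iodepth}",
        f"--runtime={runtime_s}",
        "--time_based",
        "--output-format=json",
    ]
    out = json.loads(subprocess.run(cmd, check=True, capture_output=True).stdout)
    return out["jobs"][0]["write"]["iops"]

# Three back-to-back passes, analogous to the Sweep 0/1/2 columns:
# print([run_4k_randwrite("/dev/nvme0n1") for _ in range(3)])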
FWIW, we've also been running single OSD performance bisections:
https://gist.github.com/markhpc/fda29821d4fd079707ec366322662819
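A bisection like that is usually driven with `git bisect run` plus a wrapper
that builds, runs the single-OSD benchmark, and returns pass/fail; a minimal
sketch (the revisions, script name, and pass/fail criterion are placeholders,
not what the gist actually does):

# Minimal sketch of automating a performance bisection with "git bisect run".
# GOOD_REV, BAD_REV, and bench_single_osd.sh are hypothetical placeholders.
import subprocess

GOOD_REV = "v14.2.22"   # placeholder: a revision known to be fast
BAD_REV = "v16.2.0"     # placeholder: a revision known to be slow

def sh(*args: str) -> None:
    subprocess.run(args, check=True)

sh("git", "bisect", "start", BAD_REV, GOOD_REV)
# git bisect run invokes the script at each candidate commit: exit 0 marks the
# commit good, 125 skips it, and other non-zero codes mark it bad. The script
# would build ceph, bring up one OSD, run the 4k randwrite sweep, and compare
# the measured IOPS against a chosen threshold.
sh("git", "bisect", "run", "./bench_single_osd.sh")
sh("git", "bisect", "reset")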
I believe at least one of the regressions may be related to
https://github.com/ceph/ceph/pull/29674
There are other things going on in other tests (large sequential
writes!) that are still being diagnosed.
Mark
--
Igor Fedotov
Ceph Lead Developer
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx