RE: Scaling Ceph reviews and testing


> -----Original Message-----
> From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-
> owner@xxxxxxxxxxxxxxx] On Behalf Of Gregory Farnum
> Sent: Wednesday, November 25, 2015 11:14 PM
>
> It has been a long-standing requirement that all code be tested by
> teuthology before being merged to master. In the past leads have
> shouldered a lot of this burden through integration and testing branches, but
> it’s become unsustainable in present form: some PRs which are intended as
> RFCs are being mistakenly identified as final; some PRs are submitted which
> pass cursory sniff tests but fail under recovery conditions that the teuthology
> suites cover. To prevent that, please comment on exactly what testing
> you’ve performed when submitting a PR and a justification why that is
> sufficient to promote it to integration testing. [..]

Unless people are convinced that performing their own testing isn't that complex (teuthology-openstack is a big step in the right direction), they won't do it, either because they simply don't know how, or because they lack the resources to do so: small startups may not be able to afford test hardware at all, while large, global corporations can have hardware request procedures so complex and slow that they scare developers off.
But correctness and reliability regressions are one thing; performance regressions are another. I already see PRs that promise a performance increase when, at (my) first glance, the claim looks outright contradictory, or the PR is just a big, 100+ line change that adds more complexity than performance. Not to mention utter nonsense like https://github.com/ceph/ceph/pull/6582 (excuse my finger-pointing, but this case is so extreme that it needs to be pointed out). To put it more bluntly: some folks are spamming performance PRs that in their opinion improve something, while in reality those PRs at best increase the complexity of already complex code and add bugs (sometimes only potential ones), often with the added bonus of actually degraded performance.

So my proposal is to postpone QA'ing performance pull requests until someone unrelated to the PR author (or even the author's company) can confirm that the claims in that particular PR are true. Providing a code snippet in the PR that demonstrates the performance difference, or some other way to verify the claims reproducibly, should be enough for that (see https://github.com/XinzeChi/ceph/commit/2c8a17560a797b316520cb689240d4dcecf3e4cc for a particular example). It should help weed out performance PRs that degrade performance, or that improve it only on one particular hardware/software configuration and at best change nothing otherwise.
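To make the idea concrete, here is a minimal sketch of the kind of self-contained, reproducible snippet a performance PR could attach (Python and stdlib timeit, purely illustrative; Ceph itself is C++, and the two compared functions here are hypothetical stand-ins for the "before" and "after" code paths):

```python
import timeit

# Hypothetical "before" and "after" implementations of the same operation.
# In a real PR these would be the old and new code paths being compared.
def before(n):
    out = []
    for i in range(n):
        out.append(i * 2)
    return out

def after(n):
    return [i * 2 for i in range(n)]

def bench(fn, n=10_000, repeat=5, number=100):
    # Best-of-several-runs reduces scheduling noise; timeit accepts a
    # zero-argument callable and returns one total time per repeat.
    return min(timeit.repeat(lambda: fn(n), repeat=repeat, number=number))

if __name__ == "__main__":
    # Correctness first: both variants must produce identical results,
    # otherwise the "optimization" silently changes behavior.
    assert before(1000) == after(1000)
    print(f"before: {bench(before):.4f}s")
    print(f"after:  {bench(after):.4f}s")
```

Anyone reviewing the PR can then rerun the same snippet on their own hardware, which is exactly the independent confirmation proposed above, instead of taking the author's numbers on faith.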


With best regards / Pozdrawiam
Piotr Dałek
 
