On Thu, Jul 13, 2017 at 8:40 PM, Jeff King <peff@xxxxxxxx> wrote:
> On Thu, Jul 13, 2017 at 11:29:10AM -0700, Junio C Hamano wrote:
>
>> > So then I think your config file primarily becomes about defining the
>> > properties of each run. I'm not sure if it would look like what you're
>> > starting on here or not.
>>
>> Yeah, I suspect that the final shape that defines the matrix might
>> have to become quite a bit different.
>
> I think it would help if the perf code was split better into three
> distinct bits:
>
>   1. A data-store capable of storing the run tuples along with their
>      outcomes for each test.
>
>   2. A "run" front-end that runs various profiles (based on config,
>      command-line options, etc) and writes the results to the data
>      store.
>
>   3. A flexible viewer which can slice and dice the contents of the
>      data store according to different parameters.
>
> We're almost there now. The "run" script actually does store results,
> and you can view them via "aggregate.pl" without actually re-running
> the tests. But the data store only indexes on one property: the tree
> that was tested (and all of the other properties are ignored totally;
> you can get some quite confusing results if you do a "./run" using say
> git.git as your test repo, and then a followup with "linux.git").

Yeah, I agree, but if possible I'd like to avoid working on all three
parts at the same time. I haven't thought much about how to improve the
data store yet, though I may have to look at it soon.

> I have to imagine that somebody else has written such a system already
> that we could reuse. I don't know of one off-hand, but this is also not
> an area where I've spent a lot of time.

About the viewer, AEvar suggested having something like speed.python.org
and speed.pypy.org, which seem to be made using
https://github.com/tobami/codespeed

So unless something else is suggested, I plan to make it possible to
import the results of the perf tests into codespeed, but I haven't
looked at that much yet (there is a rough sketch of what I have in mind
at the end of this message).

> We're sort of drifting off topic from Christian's patches here. But if
> we did have a third-party system, I suspect the interesting work would
> be setting up profiles for the "run" tool to kick off. And we might be
> stuck in such a case using whatever format the tool prefers. So having
> a sense of what the final solution looks like might help us know
> whether it makes sense to introduce a custom config format here.

I don't think we should completely switch to a third-party system for
everything, though it would simplify my work if we decided to do that.

I think people might want different viewers, so we should just make sure
that the results from the run script can easily be massaged and fed as
input to many different viewers.

So we are pretty free to decide how we specify which tests should be
performed on which revisions, and I think a config file is the best way
to do that (again, see the sketch at the end of this message).
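
As a very rough sketch of the codespeed import I mentioned above:
codespeed accepts results over plain HTTP on its /result/add/ endpoint,
so a small converter could push one value per (revision, benchmark)
pair. The field names below follow codespeed's documented API, but the
URL, the environment name and where the numbers come from are all made
up; that part would have to be adapted to whatever format the run
script ends up writing to the data store.

  #!/usr/bin/env python3
  # Push a single perf result to a codespeed instance. The field names
  # follow codespeed's /result/add/ API; the URL, environment name and
  # the example values are hypothetical.
  import urllib.parse
  import urllib.request

  CODESPEED_URL = 'http://localhost:8000/'   # hypothetical instance

  def post_result(commitid, benchmark, seconds):
      data = {
          'commitid': commitid,          # the revision that was tested
          'branch': 'master',            # codespeed requires a branch
          'project': 'git',
          'executable': 'git',           # could encode the build tested
          'environment': 'perf-box',     # must already exist in codespeed
          'benchmark': benchmark,        # e.g. 'p0001.1'
          'result_value': seconds,       # the measured time
      }
      body = urllib.parse.urlencode(data).encode('ascii')
      urllib.request.urlopen(CODESPEED_URL + 'result/add/', body)

  # Hypothetical usage: test p0001.1 took 1.23s at some commit.
  post_result('0123456789abcdef', 'p0001.1', 1.23)

The interesting part is of course deciding which of our run properties
map to which of those fields, which is really the same data-store
question as above.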
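
And just to illustrate the kind of config file I have in mind for
describing runs (none of these keys exist yet; this is only an example
of the information such a file would carry), something git-config-style
like:

  # Hypothetical profile: which revisions to test, which perf scripts
  # to run, and against which repository.
  [perf "linux-log"]
      revision = v2.13.0
      revision = master
      test = p0001-rev-list.sh
      test = p4211-line-log.sh
      repo = /srv/repos/linux.git

The run script could then read such a profile and write one result
tuple per (revision, test, repo) combination into the data store.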