Adam Miller wrote:

> I have read a lot of people voicing their opinions on what they think
> are flaws in the benchmarks. How about we as a group put together a
> documented benchmark process, along with justification for why those
> methods were chosen to reflect "real world" scenarios, and from there
> send it to reviewers such as Phoronix, along with making it public on
> the wiki or $other.
>
> I just think we could try and make an improvement for future reviews,
> as well as for users who want to run benchmarks of their own.
>
> Just a thought,
> -Adam

I agree, because Phoronix will continue in any case. And they have set up
a framework which is easy to run and report from, and they have quite a
following, it seems. It'd be in our (and open source's) best interest, I
think, if we could help influence it in a positive direction.

Having an easy-to-run, repeatable, -relevant- benchmark suite could
actually help improve open source a lot, I think. It's just that it's so
full of noise now, and often confusing, misleading, or wrong.

As other posters have said, there may be pushback against whittling down
the irrelevant tests, because that would mean fewer ad impressions. But I
bet that if some of the experts in the systems being tested submitted
fixes for flaws in the methodology of the existing tests, those fixes
would be accepted.

So, having said that, maybe I should submit a patch for that infamous
bonnie++ problem. ;)

-Eric

-- 
fedora-devel-list mailing list
fedora-devel-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-devel-list