I've been doing some research. I propose using:

1 - OProfile (as it is better documented than sysprof and runs on more
    platforms/processors) for system-wide profiling, i.e. finding
    _where_ the time is spent.
2 - PAPI instrumentation of the code for finding out _what_ it is doing
    that takes time.
3 - A Ruby script for extracting an HTML report that is reasonably easy
    to understand (performance counters can be quite difficult to
    interpret). The Ruby script would use graphviz, gnuplot etc. for
    generating suitable output.

Step 2 could possibly be automated using the output from step 1, but I
might be aiming a bit too high here. A rough sketch of what the PAPI
instrumentation could look like is at the bottom of this mail.

I further propose three scenarios:

1 - An automatic performance report (make profile?) using a set of
    performance cases.
2 - A targeted performance report on operations.
3 - Memory management profiling, reporting on RAM usage, disk usage and
    swapping (this might require tools other than OProfile/PAPI and
    might again be aiming too high).

The tool would only work with command-line applications that do not use
any interactive input. As a matter of discipline, all profiling runs
should be written as tests, which would also guarantee the correct
output of optimised operations.

I think that covers your use case, Sven. What do you think?

/Henrik

2009/3/21 Sven Neumann <sven@xxxxxxxx>
>
> Hi,
>
> On Sat, 2009-03-21 at 14:12 +0100, Henrik Akesson wrote:
> > Has anyone ever done a performance study of GEGL?
> >
> > What do you think of a GSoC project of:
> >
> > "Performance study and optimisation of GEGL."
> >
> > - Creating a multi-platform performance tool-set for automatically
> >   extracting performance data from the gegl library using
> >   performance counters
> > - Creating a set of typical scenarios for gegl, which could double
> >   as integration/regression tests
> > - Reporting on the current status of gegl performance.
> > - Identification of main bottlenecks.
> > - Prototyping or implementing solutions for the above bottlenecks.
> > - Documenting the above tools.
>
> That's a nice proposal as it starts exactly where optimization should
> start: by gathering solid profiling data.
>
> It might also be interesting to add a framework to GEGL that allows
> registering optimized operations and comparing them against the
> reference implementation. A similar approach is taken in babl. Doing
> this for GEGL is admittedly going to be more complex, but it would
> provide an interesting framework for improving GEGL performance.
> Based on this framework, people could contribute optimized code and
> still be certain that it provides the correct results. Such code
> could be optimized for a particular color format (legacy 8bit for
> example) and/or for particular CPUs (MMX, SSE, ...) or a GPU.
>
> Sven
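P.S. To make step 2 a little more concrete, here is a minimal sketch of
what PAPI instrumentation around an operation could look like, using the
standard low-level PAPI API. The workload() function and the chosen
counters (total cycles and total instructions) are only placeholders for
whatever GEGL operation and events we end up measuring:

#include <stdio.h>
#include <papi.h>

/* Stand-in for the GEGL operation under test (e.g. a blur). */
static void
workload (void)
{
  volatile double x = 0.0;
  int i;

  for (i = 0; i < 10000000; i++)
    x += i * 0.5;
}

int
main (void)
{
  int       event_set = PAPI_NULL;
  long long counters[2] = { 0, 0 };

  if (PAPI_library_init (PAPI_VER_CURRENT) != PAPI_VER_CURRENT)
    {
      fprintf (stderr, "PAPI initialisation failed\n");
      return 1;
    }

  PAPI_create_eventset (&event_set);
  PAPI_add_event (event_set, PAPI_TOT_CYC);  /* total cycles       */
  PAPI_add_event (event_set, PAPI_TOT_INS);  /* total instructions */

  PAPI_start (event_set);
  workload ();
  PAPI_stop (event_set, counters);

  printf ("cycles: %lld, instructions: %lld\n",
          counters[0], counters[1]);
  return 0;
}

Building it only needs linking against libpapi (e.g. gcc sketch.c
-lpapi). The same pattern could be wrapped into a small helper so that
every performance test reports the same set of counters, which the Ruby
script would then turn into the HTML report.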