On 10/22/10 23:24, Joan Quintana wrote:
> I had the idea in mind to test my machine (and try to benchmark the
> tests), loading the session with a chain of JACK clients, in order to
> know the limits of my system, in which conditions the system is
> stressed, and when I have more chances of xruns.
>
> The chain would be something like this:
>
> * playing a MIDI file with Rosegarden (a MIDI file full of tracks)
> * fluidsynth as a soft synth, loading a heavy soundfont
> * JACK Rack for LADSPA effects (load several processor-consuming effects)
> * recording the session into Ardour, while monitoring the output to
>   the speakers
>
> Meanwhile I will monitor the system performance (processor & RAM).

That's required for the test, not for the result. Basically you want to
test _reliability_ and _determinism_. At some point all CPUs should be
at 100%, and you even want to exceed the physical memory limit so that
the system starts to page/swap. On a properly set up RT audio system
there should be no dropouts.

> (I think that Conky System Monitor would do the task of saving a log
> file for later parsing).

I don't know conky, but 'dstat --output <file>' might be handy: it can
write CSV files that are easy to parse for evaluation.

> I don't know if it is possible to fetch the number of xruns from a
> file or log.

If you start jackd yourself, you can use a regexp on its stdout, as
qjackctl does (Setup -> Options -> Capture stdout):

  /xrun of at least ([0-9|\.]+) msecs/

If you use jack2/jackdmp with dbus enabled, you can grab them from
~/.log/jack/jackdbus.log

> Questions:
> - how can I stress this test even more?

Add some disk I/O (`updatedb` or `find /`) and a couple of random
unrelated tasks that cause context switches (download something, read
email, ...).

> - is it possible to make this process standard, searching for a
>   general method to say whether this machine, this configuration or
>   this OS is better than another?

A prerequisite for that would be an automated script (+ some predefined
session files for Rosegarden and Ardour, redistributable soundfonts,
etc.) and verifiable output, so that one can run the full benchmark a
couple of times.

> - is there something left that I need to take into account?

Maybe test with different JACK buffer sizes/latencies.

> - is all that a good idea?

Yes. It'd be a good thing to run before taking a setup on stage or to a
recording session :)

> thanks (this is just an idea, I won't have time in the next two weeks
> to implement something similar).

Rome wasn't built in a day. I suggest making it modular so that
additional tests can be added later.

> Joan Quintana
> www.joanillo.org

Cheers!
robin
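
PS: a rough, untested sketch of how the xrun-counting part could look
if you start jackd yourself and grep its stdout with the same pattern
qjackctl matches on. The ALSA device, sample rate and buffer size are
only placeholders, adjust them for your hardware:

  #!/bin/bash
  # start jackd ourselves so we can capture its stdout/stderr
  LOG=jackd.log
  jackd -R -d alsa -d hw:0 -r 48000 -p 256 -n 2 >"$LOG" 2>&1 &
  JACKD_PID=$!
  sleep 2   # give jackd a moment to come up

  # ... start rosegarden, fluidsynth, jack-rack, ardour here ...

  # after the test run: stop jackd and count the xrun messages
  kill $JACKD_PID
  wait $JACKD_PID 2>/dev/null
  XRUNS=$(grep -cE 'xrun of at least [0-9.]+ msecs' "$LOG")
  echo "xruns during this run: $XRUNS"

With jack2/jackdbus the same kind of grep could probably be pointed at
~/.log/jack/jackdbus.log instead.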
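
PPS: and the surrounding automation doesn't need to be fancy to start
with. A minimal sketch (filenames and durations are arbitrary) that
records the dstat CSV log and adds the disk-I/O stress mentioned above:

  #!/bin/bash
  # log CPU/memory/disk/paging stats once per second as CSV
  dstat --output sysload.csv 1 >/dev/null &
  DSTAT_PID=$!

  # extra stress: disk I/O plus some unrelated background noise
  find / -xdev >/dev/null 2>&1 &
  FIND_PID=$!

  # ... run the actual JACK session / test modules here ...
  sleep 600   # placeholder for the length of the test run

  kill $DSTAT_PID $FIND_PID 2>/dev/null

Each stress element could live in a small script like that, which keeps
the whole thing modular so further tests are easy to bolt on later.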