Re: JACK Freewheel mode thoughts

On Fri, 11 Nov 2016, Lorenzo Sutton wrote:

> I was recently thinking about how powerful, yet maybe underestimated, the JACK freewheel mode [1] is. As far as I know this is used by Ardour, Qtractor and MusE for export/bounce, 'internally'.

> Call me old school, but I like to use lots of different audio software on Linux in a modular way. I have stated many times that, in my humble opinion, modularity plus the inter-connectability of applications through JACK is, from a creative point of view, a killer feature.

> So, coming to the freewheel point. Wouldn't it be great if 'all jack audio' software were freewheel-ready, so that if I activate a switch, all applications enter freewheel mode and when I activate recording in my favourite DAW, connected to some of my favourite (standalone!) synths, connected to my favourite sequencer, everything is recorded 'faster than realtime' in the DAW?

As far as I know, most of this should be possible right now. JACK does tell all clients when it enters and leaves freewheel (there is a client callback for exactly that). Really, what is the difference between freewheel and real time? Each client still does the same amount of processing per cycle, since it is based on the same SR. JACK just doesn't wait for the hardware to catch up at the end of each cycle (or doesn't worry about being on time for the HW) but rather starts the next cycle immediately. So long as the client is not connected to any HW, there should not need to be any changes. Really, all a client needs to do for freewheel is make sure it is not connected to HW.
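
To make that concrete, here is a minimal sketch using the JACK C API (my own illustration, not from any existing client; the client name and function names are made up, but the jack_* calls are real). It registers the freewheel notification callback, and if run with "on" or "off" it also flips the whole graph into or out of freewheel with jack_set_freewheel(), which is basically the global 'switch' Lorenzo is asking about.

/* freewheel_sketch.c - illustrative only; names invented for this example.
 * Build: gcc freewheel_sketch.c -o freewheel_sketch $(pkg-config --cflags --libs jack)
 * Run:   ./freewheel_sketch          (just listen for freewheel notifications)
 *        ./freewheel_sketch on|off   (also act as a global freewheel "switch")
 */
#include <stdio.h>
#include <string.h>
#include <jack/jack.h>

/* Called by JACK (in a non-RT context) when the graph enters or leaves
 * freewheel. A client that touches hardware could pause monitoring here. */
static void on_freewheel(int starting, void *arg)
{
    (void)arg;
    fprintf(stderr, "freewheel %s\n", starting ? "started" : "stopped");
}

/* The process callback is identical in both modes: same buffer size, same
 * sample rate -- the cycles just arrive as fast as the graph can compute. */
static int on_process(jack_nframes_t nframes, void *arg)
{
    (void)nframes; (void)arg;
    return 0;
}

int main(int argc, char **argv)
{
    jack_status_t status;
    jack_client_t *client = jack_client_open("freewheel_sketch",
                                             JackNullOption, &status);
    if (!client) {
        fprintf(stderr, "could not connect to JACK (status 0x%x)\n", status);
        return 1;
    }

    /* Callbacks must be registered before jack_activate(). */
    jack_set_freewheel_callback(client, on_freewheel, NULL);
    jack_set_process_callback(client, on_process, NULL);
    jack_activate(client);

    /* Freewheel is a property of the whole graph: any one client can
     * switch it on or off for everybody. */
    if (argc == 2) {
        int onoff = (strcmp(argv[1], "on") == 0);
        if (jack_set_freewheel(client, onoff) != 0)
            fprintf(stderr, "jack_set_freewheel failed\n");
    }

    getchar();                  /* keep running until Enter is pressed */
    jack_client_close(client);
    return 0;
}

If this is started with "on", every client in the graph gets the same notification through its freewheel callback, which is really all a 'global switch' would amount to.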

On the practical side, what does freewheel gain? No monitoring is possible, and any freewheel process results in changes that in general can't be adjusted without redoing whatever was done in freewheel. So the user must listen to the input in real time beforehand and to the output in real time afterwards for QA. That still limits freewheel to exports or track consolidation. The one other possibility I can see is creating a track with a synth that requires more horsepower than the CPU provides: MIDI out from the DAW to the expensive (standalone) synth, audio from the synth back into the DAW to be recorded. In this case the freewheel process might take much longer than real time and could even include swap events. Making music from a program could be done this way too, though I would think live programming is more common.

The way to think of this is to use Audacity for a bit. All effects are done outside of real time: take the audio, apply the effect, then listen. The result is destructive. But there are effects, such as (some kinds of) noise reduction, that can be done no other way. There may be some effect that requires knowing the input performance from end to end in order to calculate it. So most freewheel work would take longer than real time rather than shorter. It would even be possible to use a process that requires more than one pass to complete (so long as nothing is changed between passes).

--
Len Ovens
www.ovenwerks.net



