On Sun, Sep 27, 2009 at 10:59:06AM +0200, rosea grammostola wrote:

> I'm trying to understand the fantastic reverb jconv. I know how I can
> run it.

It's not a reverb, but a general-purpose convolution engine - which of
course can be used for reverb.

> 1. in/out is number of ins and outs
> 2 partition is frames and should be the same as in jackd(?)
> 3. maxsize is length of the file? In what format (kb?), should I
> customize it using different files? What app can I use to determine the
> length of the file? How?
> 4. Should I do anything with delay/offset/length?

Seems you didn't read README.CONFIG... Please do, and let me know if
anything isn't clear.

> 5. I'm especially interested in the settings for guitar
> (if you know some nice Jazz IR files, tell me)

The parameters in jconv's config are there mainly to allow you to use
many IR files without first having to modify them. That in turn is
because in many cases I can't provide 'prepared' IRs: either the data
files are very big, or their license does not allow me to distribute
them. But the effect is mostly determined by the impulse response
itself, so there really are no 'typical jazz' parameters...

> 6. For what kind of instruments do you use reverb?

Depends on a lot of things, including the type of music and the type
of sound you want to create in a mix.

There are basically two reasons to use reverb:

1. To create a 'natural' sound, i.e. one that includes the acoustics
of a real space, or something that could be a real space. In most
cases, if the 'real space' is not something special such as a church,
the listener would not really be aware of the reverb and certainly not
hear it as an effect. It would just add realism, provide an idea of
the dimensions of the space, and create depth - some instruments being
closer than others. This is what you would do for classical music and
in general for anything called 'acoustic'. In that case, if you start
with dry recordings, you would add reverb on *all* instruments and
voices, but not the same amount on all.

2. As an effect, mostly applied to a single voice or instrument. Here
anything goes, and you can use types of reverb that don't correspond
to any real space or that even defeat the laws of physics, e.g. a
reverb that cuts off before it has fully decayed, delayed reverbs,
very dense or heavily filtered ones, etc.

There are also two ways to 'wire' a reverb unit into a mixer.

A. The 'traditional' way (from the analog multitrack days) is to use a
post-fader auxiliary send on each channel to send a controlled amount
of signal to the reverb. The signal from the reverb is then mixed in
just as any stereo track would be. In that case the reverb should be
100% 'wet', as the dry part follows its normal path through the mixer.
For an IR reverb you need to cut off the direct sound, as is done in
most examples that come with jconv (that's mainly what the offset
parameter is used for); see the config sketch below. This method
allows you to share one reverb among many channels, while still
controlling the amount sent from each of them individually. It is what
you would do for use case 1 above. When using a digital mixer you
could also use small delays on either the dry sound or the reverb send
to enhance the sense of depth (that's less traditional, as analog
mixers couldn't do this).

B. The second way corresponds more to use case 2 above: just use the
reverb as an insert on a single channel. In that case you need a
separate instance for each channel.

And of course both methods can be and often are combined.
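To make the offset/length part a bit more concrete, here is a minimal
config sketch for a 100% wet stereo reverb used as in method A. It is
only an illustration based on the command format described in
README.CONFIG - the path, file name, gain and all frame counts are
placeholders, and the real offset has to match where the direct sound
ends in your particular IR file:

  # 2 inputs, 2 outputs, partition of 256 frames (typically the jackd
  # period size), maximum IR length of 200000 frames.
  /convolver/new   2  2  256  200000

  # Directory containing the IR files (placeholder path).
  /cd /home/me/impulses

  #              in out gain  delay offset length chan  file
  /impulse/read   1  1  0.5     0    2000     0     1   some-hall.wav
  /impulse/read   2  2  0.5     0    2000     0     2   some-hall.wav

All sizes are in frames, not kB. The offset of 2000 frames skips the
direct sound at the start of the file so only the reverb tail remains,
a length of 0 means 'to the end of the file', and a non-zero delay
would add extra pre-delay before the reverb.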
Ciao,

-- 
FA

I always say it: Italy is too narrow and long.