Re: The future of audio plugins?


 



On Thu, 20 Oct 2016 15:27:34 -0500
"Chris Caudle" <chris@xxxxxxxxxxxxxxx> wrote:

> At some point the proper conclusion is not that everyone on LAU is
> either mentally stunted, or maliciously taunting you, but rather that
> the original description was not actually clearly stated.

There is a limit to that.  A lot of noise was brought into this thread,
and after a few rounds one can wonder, in good conscience, whether
trolling is at work.

The two examples given (1) were only that: examples.  The principle
might have other uses.

This is not about mixing techniques.  If there's a need to discuss
mixing techniques, then a separate thread can be started.  Does that
prospect curb the trigger-happy reactions of some?

> I think a more concrete example would be helpful.   I don't consider
> your vaguely worded description of automating 3D placement
> sufficiently detailed. 

(1) There's also another example: the separation of instruments.  That
one must be crystal clear, though, as must the idea of sharing
information between plugins to achieve it.
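To make the sharing idea concrete, here is a rough sketch in Python.  All names are hypothetical (this is not any real plugin API): one analysis pass per track publishes a result to a shared bus, and a later stage reads all of them together, which no single plugin could do on its own.

```python
import numpy as np

class AnalysisBus:
    """Hypothetical shared store through which plugin instances exchange analysis results."""
    def __init__(self):
        self._data = {}

    def publish(self, track, key, value):
        self._data[(track, key)] = value

    def read(self, track, key):
        return self._data.get((track, key))

def spectral_centroid(signal, sample_rate):
    """Rough spectral centroid of a mono buffer, in Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# One analysis pass per track publishes its result to the bus...
bus = AnalysisBus()
rate = 48000
t = np.arange(rate) / rate
bus.publish("bass", "centroid", spectral_centroid(np.sin(2 * np.pi * 80 * t), rate))
bus.publish("drums", "centroid", spectral_centroid(np.sin(2 * np.pi * 2000 * t), rate))

# ...and a separate stage reads all of them to order the tracks.
order = sorted(["bass", "drums"], key=lambda trk: bus.read(trk, "centroid"))
print(order)  # the low-centroid track comes first
```

The point is only the data flow, not the centroid itself: any analysis a plugin computes anyway could be made available to the others.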

> In the limit that would be something like an
> automated panner placing each track in a slightly different panned
> location, which does not sound like something useful or desirable to
> me.  Obviously you had in mind something which you would find useful.

Again, this touches on mixing techniques, which are not the main
subject.  For instance, a certain framework setup style would have the
drums sitting behind the bass.  It is possible to spend the time doing
that by hand, and people do.  But it could also be useful to have this
placement done by computer analysis, as a setup for further work; that
was already mentioned.  Another framework would place the electric
guitars behind the acoustic.  This is not done by EQ or volume, which
would affect a lot more, especially if used at that stage.  The actual
physical ground must be set first by shaping the transients according
to how sounds are perceived by the brain.
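As a sketch of what the analysis half of such a placement tool might measure, here is a crude transient-strength estimate in Python (my own toy measure, not anything from an existing plugin): the largest frame-to-frame rise in RMS energy, which is larger for a sharp attack than for a slow swell.

```python
import numpy as np

def transient_strength(signal, frame=512):
    """Crude transient measure: the largest frame-to-frame rise in RMS energy."""
    n = len(signal) // frame
    frames = signal[:n * frame].reshape(n, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return float(np.max(np.diff(rms)))  # sharper attacks -> larger rise

rate = 48000
t = np.arange(rate // 2) / rate
# A plucked-style burst (fast attack, preceded by silence) vs. a slow swell.
pluck = np.concatenate([np.zeros(1024), np.sin(2 * np.pi * 440 * t) * np.exp(-8 * t)])
swell = np.sin(2 * np.pi * 440 * t) * np.minimum(t * 4, 1.0)

print(transient_strength(pluck) > transient_strength(swell))  # True
```

A placement stage could then push low-transient material back and keep sharp-attack material forward, before any EQ or volume decisions are made.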

> Yes, I did look at the Neutron video, it looked like something that
> would be useful for two months while you first learn how to use a
> DAW, and that would get turned off after you saw what "choices" it
> was making and why. If you are going to have to adjust everything it
> does to fit your taste anyway, might as well make the beginning
> settings yourself as well.  I assume it did not make you think that,
> so what did you see differently than I did?

An example was asked for, so this is the closest related one.  I'm not
fond of this Neutron.  What the principle of sharing audio-analysis
data will become in ten years is obviously not known at the moment.

Now, we have seen related work in the polarity analysis in Mixbus, as
well as in the XT-TG plugin, in that both push the analysis part
forward.  Although these two examples might sit too much on the
developer side, while the scope here is the user side.
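For readers who haven't seen polarity analysis: the generic idea can be sketched with a normalized correlation between two channels (this is just the textbook check, not how Mixbus implements it).  A value near -1 suggests one channel is polarity-inverted relative to the other.

```python
import numpy as np

def polarity_correlation(a, b):
    """Normalized correlation of two buffers; near -1 suggests polarity inversion."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rate = 48000
t = np.arange(rate) / rate
mic_a = np.sin(2 * np.pi * 100 * t)
mic_b = -mic_a  # e.g. a miswired cable or an inverted capture

c = polarity_correlation(mic_a, mic_b)
if c < -0.9:
    print("likely polarity inversion; consider flipping one channel")
```

A tool on the user side would run this continuously across track pairs and simply flag the suspect ones, instead of asking the user to read a correlation meter.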





_______________________________________________
Linux-audio-user mailing list
Linux-audio-user@xxxxxxxxxxxxxxxxxxxx
http://lists.linuxaudio.org/listinfo/linux-audio-user


