Re: question about FOSS WFS implementations

On Dec 17, 2013 5:29 AM, "Fons Adriaensen" <fons@xxxxxxxxxxxxxx> wrote:
>
> On Mon, Dec 16, 2013 at 11:13:18PM -0500, Ivica Ico Bukvic wrote:
>
> > Some preliminary research reveals several FOSS implementations of
> > WFS (wave field synthesis). What is not entirely clear is how these
> > implementations stack up to something like Sonic Emotion. I presume
> > they will be subpar but the question is by how much and in what
> > ways?
> >
> > Ok, now another question. Is anyone aware of a 3D FOSS WFS
> > implementation (multiple horizontal rows) and how hard would it be
> > to use?
>
> Given the number of speakers used, the S.E. system can't be pure
> WFS except at very low frequencies. It probably uses a combination
> of techniques: WFS, some things based on Ambisonics (AMB) theory,
> delays, etc., but they won't tell you more.

I had an opportunity to listen to their system, and even though the array was quite sparse it still delivered a very compelling image, even close to the speakers. If the system was using anything in the way of Ambisonics, it could only have done so over the actual horizontal array. Given that the highest reproducible frequency is directly related to the spacing between the speakers, I am wondering whether they are also rendering waves from virtual speakers as they propagate through the real array, as well as using only a selection of speakers to render certain sounds, as per recent publications in this area.
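To get a ballpark feel for that spacing limit, here is a quick Python sketch using the common rule of thumb f_alias ~ c / (2 * dx); the spacings below are made up for illustration, and the real limit also depends on source and listening geometry, so take it as a rough estimate only:

    # Rough WFS spatial-aliasing estimate: above this frequency the array
    # can no longer reconstruct the wave field correctly (ballpark only).
    SPEED_OF_SOUND = 343.0  # m/s at roughly 20 C

    def aliasing_frequency(spacing_m: float) -> float:
        """Approximate aliasing frequency (Hz) for a given speaker spacing (m)."""
        return SPEED_OF_SOUND / (2.0 * spacing_m)

    print(aliasing_frequency(0.30))  # sparse 30 cm spacing -> ~570 Hz
    print(aliasing_frequency(0.10))  # dense 10 cm spacing  -> ~1715 Hz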

>
> But the main differences from open-source systems are to be found
> not in the rendering engine, but in the system used to create and
> define the content. In commercial systems this will be very visual,
> hide the technicalities, and probably integrate with Pro Tools. It
> will also be closed and let you do only predefined things. All that
> makes it easier to use for the non-expert.
>
> Open-source systems tend to provide less in this area, but will
> have interfaces that allow you to define your own production
> workflow and tools, usually via OSC. For example, the system
> which I developed and installed in Parma will let you control
> the position and smooth movements of virtual sources via OSC,
> but little more. Anything else has to be built on top of this.
> The main tool used here in Parma is a 'mixer' that, instead of
> really mixing its inputs, controls the rendering engine, while
> also taking care of changing e.g. reverb levels and delays as
> a function of source position. For static sources that is
> almost everything you need, apart from standard production
> tools. For more dynamic setups I either write ad-hoc code
> (usually Python, but you could use SC, Pd, Csound...), or use
> plugins sending OSC from automation tracks in Ardour.
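Just to make the OSC part concrete for anyone following along: something like the Python snippet below (using python-osc) is all it takes to drive a source position from a script. The /source/1/position address and the host/port are invented for illustration; the real namespace would be whatever the actual renderer documents:

    # Sketch of steering a WFS renderer's virtual source over OSC.
    # Requires python-osc (pip install python-osc). The OSC address and
    # port below are assumptions, not any particular engine's API.
    import math
    import time
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # assumed renderer host/port

    # Slowly move virtual source 1 along a 3 m circle.
    for step in range(200):
        angle = 2.0 * math.pi * step / 200.0
        x, y = 3.0 * math.cos(angle), 3.0 * math.sin(angle)
        client.send_message("/source/1/position", [x, y])  # hypothetical address
        time.sleep(0.05)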
>
> A WFS system using two or more rows wouldn't be real 3D WFS;
> the vertical component would use conventional panning. A real
> 3D WFS system would require filling the walls with speakers.
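To illustrate what that conventional panning of the vertical component could look like, here is a toy constant-power cross-fade between a lower and an upper row; this is just my own sketch of the general idea, not how any particular system actually does it:

    # Toy constant-power pan between two speaker rows based on elevation.
    # Illustrative only -- real systems will use their own pan laws.
    import math

    def row_gains(elevation: float) -> tuple[float, float]:
        """elevation in [0, 1]: 0 = lower row only, 1 = upper row only.
        Returns (lower_gain, upper_gain) with constant total power."""
        e = min(max(elevation, 0.0), 1.0)
        theta = e * math.pi / 2.0
        return math.cos(theta), math.sin(theta)

    lo, hi = row_gains(0.25)
    print(lo, hi, lo * lo + hi * hi)  # gains plus power sum (always 1.0)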
>
> Ciao,
>
> --
> FA
>
> A world of exhaustive, reliable metadata would be a utopia.
> It's also a pipe-dream, founded on self-delusion, nerd hubris
> and hysterically inflated market opportunities. (Cory Doctorow)
>

_______________________________________________
Linux-audio-user mailing list
Linux-audio-user@xxxxxxxxxxxxxxxxxxxx
http://lists.linuxaudio.org/listinfo/linux-audio-user
