[linux-audio-user] Audio routing software

Quoting Patrick Shirkey <pshirkey@xxxxxxxxxxxxxxxxx>:

> I assume that when you talk about using the .asoundrc you are using the
> pcm copy function.
For the moment I'm only using my multichannel card, so there has been no need to copy streams yet.

An excerpt from the .asoundrc file:
pcm.d410ch12 {
        type plug
        # stereo source: left (0) -> output 0, right (1) -> output 1
        ttable.0.0 1
        ttable.1.1 1
        slave.pcm ice1712
}
plays on the first two channels (living room), while
pcm.d410ch1234 {
        type plug
        # living room
        ttable.0.0 1
        ttable.1.1 1
        # bathroom gets a copy of the same stereo pair
        ttable.0.2 1
        ttable.1.3 1
        slave.pcm ice1712
}
would play on four channels (living room + bathroom).
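As far as I understand the plug/route ttable syntax, the entries also accept fractional gains, so a room can be attenuated in the same definition. A sketch (the 0.5 values and the device name d410ch1234quiet are just illustrations):
pcm.d410ch1234quiet {
        type plug
        # living room at full volume
        ttable.0.0 1
        ttable.1.1 1
        # bathroom at half volume
        ttable.0.2 0.5
        ttable.1.3 0.5
        slave.pcm ice1712
}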
 
> Using multiple cards as one is not guaranteed to work very well unless
> their wordclocks can be linked together. If this is not done the samples
> will start to get out of sync and in my experience become unlistenable.
Is this also true when you are not mixing different sound cards in one room? For instance, playing an mp3 on two channels of my multichannel card (first room) and on two channels of another card (second room). For Dolby 7.1 in my living room, of course, only a multichannel card can prevent clock problems.

> I wonder if it is now possible to make aRTsd use the pcm_jack plugin?
> That would allow you to use all sound apps through aRTsd and JACK at one
> time.
As I understand it, using all these frameworks introduces a third layer:
- player (connects to JACK, artsd, ...)
- connector (e.g. JACK, which connects to the real device)
- driver (for instance ALSA)
Is there a way to influence the behaviour of the second layer (I would call it the connector) while a sound is playing? That is what I would need in order to perform 'live' routing.

Initially I was thinking of avoiding JACK and instead creating some 'virtual OSS devices', one for each source: for instance /dev/mp3_out, /dev/tv_out and /dev/radio_out. I would then have to write some software that connects the incoming streams of these devices to the real devices. However, I'm afraid this would mean a lot of overhead (copying raw streams) and latency problems. The great advantage would be that any application that knows how to talk to OSS could be integrated.
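At its core that connector software would just be a fan-out copy loop. A minimal sketch in Python (the function name fan_out, the chunk size, and the in-memory streams are my own illustration; real code would read from the FIFO, e.g. /dev/mp3_out, and write to the actual device nodes):

```python
import io

CHUNK = 4096  # bytes per read; smaller chunks lower latency but raise CPU overhead


def fan_out(source, sinks, chunk=CHUNK):
    """Copy a raw PCM stream from one source to several sinks.

    Every byte is read once and written once per sink -- this is the
    copying overhead mentioned above.  'source' and each entry of
    'sinks' can be any binary file-like object (a FIFO such as
    /dev/mp3_out, or a real device node).
    """
    while True:
        buf = source.read(chunk)
        if not buf:  # end of stream
            break
        for sink in sinks:
            sink.write(buf)


# Illustration with in-memory streams instead of device nodes:
src = io.BytesIO(b"\x00\x01" * 1024)
living_room, bathroom = io.BytesIO(), io.BytesIO()
fan_out(src, [living_room, bathroom])
```

'Live' routing would then amount to changing the sinks list while the loop runs, which is exactly the part that a real connector (like JACK) already solves.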

Just my thoughts...

-K-

