Re: rtirq

On Thu, 19 Mar 2015, Ralf Mardorf wrote:

On Wed, 18 Mar 2015 12:13:51 -0700 (PDT), Len Ovens wrote:
Two drives allow simultaneous read and write.

Not really ;), at least here the recordings usually are read from the
same disk they were saved to ;). I guess that loading an app or lib, or
loading a soundfont from one disk, while recording to another disk, is
something that rarely happens.

Just pulled up one of my Ardour projects. The whole 10-track project, complete with exported audio (this is one song), is 1.1 GB. In other words, no disk writes needed to happen while tracking: Ardour wrote to disk, but the writes only went to memory buffers, where they sat, handy to be played back at the next take. And this one was originally recorded on a machine with 2.5 GB before I got the new one with 8 GB.

When the idea of recording audio on computers first came out, most people who did so (16-bit audio only, and certainly no higher than 48 kHz) had two whole 600 MB drives just so they could have enough space. That old experience sort of left me with this "audio takes a lot of space" feeling. But really, this is not true any more.

8 GB of RAM is small in today's world in a studio. There are people who do enough work just with a browser that they find 8 GB too small (over 100 tabs at once for their work). It is not hard to find 32 or 64 GB in a home machine, and really, 8 to 16 GB "should" give lots of room for sample storage and such so that the drive never gets accessed. 64 GB may make sense for someone recording and mixing down a live project.... but then that would be straight audio with no samples to pull from disk anyway. Maybe someone doing long pieces with a lot of softsynths might hit a wall where they are actually doing disk access while recording, but not for a 5-minute song.

More interesting would be to get more information about the timer issue.
I'm using the hrtimer for MIDI, but when I tried to start jackd with
HPET, I always got messages similar to "not enough timers available" (I
can't start an audio session right now; I will post the correct messages
another time). If we remove RTC from the config, do we need to
replace it with something else? Using the hrtimer for MIDI reduced MIDI
jitter on my machine, but I never had an hrtimer entry in rtirq.

FireWire MIDI to JACK MIDI should be sample accurate no matter what timers are used, as each MIDI event is marked as being part of a specific sample. Even with ALSA MIDI, getting it from the MIDI port to JACK as quickly as possible makes sense. I do not know how MIDI gets time stamped in JACK. I am guessing that all MIDI events within a buffer period get the same time stamp? So for 64/2 (64 frames, 2 periods), any MIDI events within those 64 samples would have the same time stamp as either the first or the last sample. So JACK's latency would matter. (Maybe someone can correct me.)
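If that guess is right, the worst-case timing error from buffer-aligned stamping is one period. A hypothetical sketch of the idea (the function name and numbers are mine, not JACK's API):

```python
# Hypothetical illustration of the guess above: if every MIDI event in a
# period gets stamped at the period boundary, the worst-case timing error
# is one period's worth of samples. Not real JACK API, just the arithmetic.

SAMPLE_RATE = 48000    # example settings only
PERIOD_FRAMES = 64

def buffer_aligned_stamp(arrival_frame):
    """Stamp an event at the start of the period it arrived in."""
    return (arrival_frame // PERIOD_FRAMES) * PERIOD_FRAMES

# Events arriving at frames 10, 40 and 70: the first two collapse onto
# frame 0, the third onto frame 64 -- up to 63 frames (~1.3 ms at 48 kHz)
# of error, but never more than one period.
for f in (10, 40, 70):
    print(f, "->", buffer_aligned_stamp(f))
```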

RTP-MIDI does have a time stamp, but I do not know if it is related to samples or just wall-clock time. If time, then Linux would have to be running a precise clock to make use of it. Ravenna sends MIDI as 4 bytes in an audio packet, where at 32 bits it takes the same space and shares the same sample number as the audio. I have explained that wrong :) It is sent as a channel along with the audio, but is marked as MIDI, and so shares the same timing. Netjack also keeps MIDI sample aligned, BTW.

Personally, I have not been able to _hear_ MIDI jitter, even when using a USB MIDI port. Certainly not with a PCI-based port. Any round-trip measurements I have made show latency but not jitter. I use a2jmidid for my MIDI inputs. The thing is, any incoming MIDI will be buffer aligned, and a round trip should keep it within the same buffer anyway. Remember that MIDI is a separate port from audio and does not have an audio clock. One MIDI event takes about the same time as 24 samples at a sample rate of 32 kHz. A three-note chord with one bass note would take 96 samples of time just to get from the keyboard to the buffer. This means that for four notes all played at the same time, the first and the last note are already separated by 72 samples; that is, JACK will time stamp some of those events at two different times already. This is a limit of the MIDI protocol. The thought being that:

a: nobody hits notes all at once (this may not be true)
b: we can't really tell whether notes closer together than 10 ms or so are at the same time or not.
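Put as arithmetic, using the same deliberate simplifications as above (8-bit MIDI bytes, 32 kHz sample rate; the exact values come out slightly above the rounded 24/96/72 figures, since 31250 baud is just under 32000):

```python
# Worked version of the numbers above, with the text's simplifications:
# 8 bits per MIDI byte and a 32 kHz sample rate, so one bit on the
# 31250-baud wire is roughly one sample of time.

MIDI_BAUD = 31250      # bits per second on a DIN MIDI cable
SAMPLE_RATE = 32000    # deliberately low, to keep the arithmetic round
BITS_PER_BYTE = 8      # simplified; really 10 with start/stop bits

def samples_per_event(n_bytes=3):
    """Samples of time one n-byte MIDI message occupies on the wire."""
    return n_bytes * BITS_PER_BYTE / MIDI_BAUD * SAMPLE_RATE

one_event = samples_per_event()    # ~24.6 samples per 3-byte note-on
chord = 4 * one_event              # four "simultaneous" notes: ~98.3 samples
spread = 3 * one_event             # first-to-last note spacing: ~73.7 samples
print(one_event, chord, spread)
```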

Note: I have not been accurate, for the sake of making things easy. A MIDI "byte" is really 10 bits worth of time on the wire, not 8, and nobody uses 32 kHz :) But it is easy to see that JACK using buffer-sized time stamping is quite reasonable, considering that at any low-latency setting for JACK, one MIDI happening of even 4 events is already longer than the buffer time. On a keyboard (even with my very beginner playing) 5- and 6-note stuff is very common. So I have to ask myself whether what you are hearing is just the effect of the slow standard MIDI transport before the info even gets to the computer.
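Checking that claim with the accurate figures (10 bits per byte; the 64-frame buffer at 48 kHz is just a typical low-latency example setting, not anything from the thread):

```python
# Accurate version: 10 bits per MIDI byte (start + 8 data + stop) at
# 31250 baud, compared against an example low-latency 64-frame JACK
# period at 48 kHz.

MIDI_BAUD = 31250

def wire_time_ms(n_events, bytes_per_event=3, bits_per_byte=10):
    """Milliseconds a burst of MIDI messages occupies on the wire."""
    return n_events * bytes_per_event * bits_per_byte / MIDI_BAUD * 1000

buffer_ms = 64 / 48000 * 1000    # one 64-frame period: ~1.33 ms
chord_ms = wire_time_ms(4)       # a 4-event chord: 3.84 ms on the wire
print(f"buffer {buffer_ms:.2f} ms, 4-note chord {chord_ms:.2f} ms")
# The chord alone spans roughly three JACK periods at this setting.
```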


--
Len Ovens
www.ovenwerks.net

_______________________________________________
Linux-audio-user mailing list
Linux-audio-user@xxxxxxxxxxxxxxxxxxxx
http://lists.linuxaudio.org/listinfo/linux-audio-user



