> the short answer, in the terms your question was posed in, is
> datastream management.
>
> the slightly longer answer is that pure data (and max) are derived
> from the original Music N language's conception of how to manage this
> kind of thing (the same ideas still found in CSound and
> SuperCollider). even though they have gone far beyond it, they
> continue to distinguish between audio & control datastreams for
> reasons that are mostly related to efficiency. there are many, many
> cases where control data being delivered at a bits-per-second rate
> significantly below that of audio (or even as a stream of events
> rather than a constant flow) is more than adequate, and saves a lot
> of CPU cycles.

hello. thanks for the answer, so in a way it is a rational construction
to save cpu time. i figured out exactly this, for instance, with the
snapshot~ conversion and a fast metro: i thought it might be better to
have just one stream of data instead of converting the different
streams back and forth, but that thinking was wrong. i am amused that
this is still an issue these days.

m
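
to make the cpu argument concrete, here is a minimal sketch in C (not
from pd's actual sources; the block size of 64, the struct and the
function names are just illustrative assumptions). the oscillator's
frequency is treated as a control-rate value that is read once per
block, while the sine itself is computed for every sample, so the
control path runs roughly 689 times per second against 44100 for the
audio path:

#include <math.h>
#include <stddef.h>

#define BLOCK_SIZE 64          /* pd's default dsp block size, assumed here */
#define SAMPLE_RATE 44100.0

static const double TWO_PI = 6.283185307179586;

/* hypothetical oscillator state: a phase plus the last control-rate frequency */
typedef struct {
    double phase;
    double freq;
} osc_t;

/* control-rate path: called once per block (or only when a message
 * arrives), not once per sample -- this is where the cpu saving is */
static void osc_set_freq(osc_t *o, double freq)
{
    o->freq = freq;
}

/* audio-rate path: runs for every single sample in the block */
static void osc_process_block(osc_t *o, float *out, size_t n)
{
    double inc = o->freq / SAMPLE_RATE;   /* derived once per block */
    for (size_t i = 0; i < n; i++) {
        out[i] = (float)sin(TWO_PI * o->phase);
        o->phase += inc;
        if (o->phase >= 1.0)
            o->phase -= 1.0;
    }
}

int main(void)
{
    osc_t osc = { 0.0, 440.0 };
    float block[BLOCK_SIZE];

    /* render one second: the control path runs ~689 times,
     * the audio path 44100 times */
    for (int b = 0; b < (int)(SAMPLE_RATE / BLOCK_SIZE); b++) {
        osc_set_freq(&osc, 440.0);        /* control stream, block rate */
        osc_process_block(&osc, block, BLOCK_SIZE);
    }
    return 0;
}

the same idea extends to the event case: if the frequency only changes
when a message arrives, osc_set_freq() does not even need to run every
block.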