On Sat, Jul 09, 2005 at 08:25:39AM -0700, Don Cohen wrote:
> > From: Paul.Hampson@xxxxxxxxx (Paul Hampson)
> > I believe the general solution to this is to use UDP, and make sure
> The scheme I describe wouldn't make a lot of sense for TCP, which
> after all specifies congestion control, retransmission, etc.
> But UDP still goes through the queuing that I want to optimize.

> > your source machine doesn't queue up packets locally (eg. ethernet
> > network contention) and let the best-effort nature of UDP deal with
> > dropping stuff that gets delayed.
> The problem is that the OS is not helpful in avoiding queuing up
> packets locally. That's part of what I'm trying to fix.
> For instance, a relatively cheap approximation would be to give
> the application a way to see how many packets it has in the queue.
> Then it could at least delay its decision about what to put into
> the queue until the queue was short. Even better would be to
> see an estimate of how long it will be before the next packet it
> enqueues will be sent - like "your call will be answered in
> approximately 4 minutes".

> > I'm not sure there's any way to have an 'I changed my mind about
> > sending that' interface into your network stack... And generally
> > it wouldn't be useful, data spends longer in transit than it does
> > in your queues.
> That depends on the rate at which the queue is emptied.
> If your queue has a rate limit of 10bps then your packets can spend
> a long time in the queue.
> - There are slow links
>   (For instance, I recall hearing that submarines have very low rates.)
> - The application might be allocated a small part of the bandwidth
>   shared with other applications.

Wait, you're trying to send more data than the link can take? Then send
UDP and throttle it at the local end with a drop-oldest qdisc; that gives
you the effect of 'most recent data is best'.
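As a sketch of the "let the application see how many packets it has in the queue" idea above: on Linux the SIOCOUTQ ioctl (which shares its value with TIOCOUTQ) reports how many bytes are still sitting, unsent, in a socket's local send queue, so an application can hold back its freshest sample until the queue has drained. This is a minimal sketch assuming Linux; the loopback address and port 5004 are placeholders, not anything from the discussion:

```python
import fcntl
import socket
import struct
import termios

def unsent_bytes(sock):
    # SIOCOUTQ and TIOCOUTQ share the same ioctl number on Linux;
    # it returns the number of bytes still queued locally for sending.
    buf = fcntl.ioctl(sock.fileno(), termios.TIOCOUTQ, struct.pack("i", 0))
    return struct.unpack("i", buf)[0]

# Only hand the kernel the freshest sample once the local queue has
# drained, rather than queueing stale data behind earlier packets.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.connect(("127.0.0.1", 5004))  # placeholder destination
if unsent_bytes(sock) == 0:
    sock.send(b"freshest voice sample")
sock.close()
```

This only approximates the "your call will be answered in approximately 4 minutes" estimate — it tells you occupancy in bytes, not drain time — but it lets the sender delay its decision about what to enqueue, which is the cheap approximation described above.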
Anything more complicated in terms of priority either needs a custom
qdisc, or your application needs to not try to send more than the link
can take.

> It occurs to me that an example where this would be helpful is
> transmitting voice data over a low bandwidth link (like a cell phone).
> Suppose you know that the actual transit time is .1 sec and you want
> the listener to always hear what the speaker was saying .2 sec ago at
> the best possible quality.
> Suppose the available bandwidth is shared with other applications.
> The voice application doesn't know when they will want to send or how
> urgent their data might be. Someone else decides that. It just wants
> to send the best possible data in the bandwidth allocated to it. I
> imagine it is continually sampling the input and revising what it
> considers to be the most valuable unsent data for the last .1 sec.
> Whenever the OS decides it's time to send the next voice packet I want
> it to send the latest idea of what's most valuable. I don't want to
> have to put data into the queue to wait for times that might depend on
> what urgent communication might be required by other applications.

You've gotta prioritise your data, using TOS or diffserv or something.
Set your voice traffic to real-time, so it always gets sent, and your
other applications can use the unused packet-times. Use a dropping qdisc
for traffic where 'most-recent' is more important than 'all, in order',
as described above, and you're set.

I have a vague recollection that this sort of thing is discussed in
Tanenbaum's Computer Networks textbook, to do with positional data of
satellites or something. (eg. if the positional data is delayed, we
write it off; we don't want to delay the data about where we are _now_
in order to know where we were _then_.)

--
Paul "TBBle" Hampson, on an alternate email client.

_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
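A minimal sketch of the "prioritise your data, using TOS" suggestion above, assuming Linux: an application marks its socket with the IP_TOS socket option, and a TOS-aware qdisc on the egress interface can then send those packets ahead of best-effort traffic. The 0x10 value is IPTOS_LOWDELAY from <netinet/ip.h>; a diffserv setup would instead write a DSCP value here.

```python
import socket

# IPTOS_LOWDELAY from <netinet/ip.h>. A diffserv deployment would use a
# DSCP codepoint instead (the qdisc classifying on it decides the policy).
IPTOS_LOWDELAY = 0x10

voice = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
voice.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_LOWDELAY)

# Every datagram sent on this socket now carries the low-delay TOS mark,
# which a TOS-aware qdisc on the egress interface can prioritise.
print(voice.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # → 16
voice.close()
```

Marking the packets is only half the job, of course — the egress qdisc still has to classify on the TOS field for the real-time traffic to actually jump the queue.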