> From: Andreas Klauer <Andreas.Klauer@xxxxxxxxxxxxxx>
> Doesn't every QDisc work that way? When the kernel wants to send a packet,
> it calls the appropriate dequeue() function in the QDisc. I'm not a kernel
> developer so this guess might be wrong.

That's correct, but that operation takes a packet from an OS queue, and the only control the application has over that queue is to put something into it. One way to view the idea is that I want to make it convenient for the application to decide what to put into the queue at the latest possible moment, without losing any of its available bandwidth. Think in terms of an OS callback to the application saying "I'm ready to send your data now; what should I send?"

> But still, I don't think that the queueing is the main problem with your
> idea... the main problem is, how do you decide what's important and what
> not, and what's obsolete?

That is up to the application, of course. See below.

> From: Paul.Hampson@xxxxxxxxx (Paul Hampson)
> I believe the general solution to this is to use UDP, and make sure

The scheme I describe wouldn't make much sense for TCP, which after all specifies congestion control, retransmission, and so on. But UDP still goes through the queuing that I want to optimize.

> your source machine doesn't queue up packets locally (eg. ethernet
> network contention) and let the best-effort nature of UDP deal with
> dropping stuff that gets delayed.

The problem is that the OS is not helpful in avoiding queuing up packets locally. That's part of what I'm trying to fix. For instance, a relatively cheap approximation would be to give the application a way to see how many packets it has in the queue. It could then delay its decision about what to put into the queue until the queue was short. Even better would be an estimate of how long it will be before the next packet it enqueues is actually sent - like "your call will be answered in approximately 4 minutes".
> I'm not sure there's any way to have an 'I changed my mind about
> sending that' interface into your network stack... And generally
> it wouldn't be useful, data spends longer in transit than it does
> in your queues.

That depends on the rate at which the queue is emptied. If your queue has a rate limit of 10 bps, then packets can spend a long time in it.

- There are slow links. (For instance, I recall hearing that submarines have very low data rates.)
- The application might be allocated only a small part of the bandwidth, shared with other applications.

It occurs to me that an example where this would be helpful is transmitting voice data over a low-bandwidth link (like a cell phone). Suppose you know that the actual transit time is 0.1 sec, and you want the listener to always hear what the speaker was saying 0.2 sec ago at the best possible quality. Suppose the available bandwidth is shared with other applications. The voice application doesn't know when they will want to send or how urgent their data might be; someone else decides that. It just wants to send the best possible data in the bandwidth allocated to it. I imagine it continually sampling the input and revising what it considers to be the most valuable unsent data for the last 0.1 sec. Whenever the OS decides it's time to send the next voice packet, I want it to send its latest idea of what's most valuable. I don't want to have to put data into the queue to wait for times that might depend on what urgent communication other applications require.

_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc