lartc mentions ways for "traffic policing" to throttle incoming rates by
throttling outgoing ACK packets.

2008/3/24 Sunil Ghai <sunilkrghai@xxxxxxxxx>:
>
> On Wed, Mar 12, 2008 at 11:50 AM, Bruno Wolff III <bruno@xxxxxxxx> wrote:
> >
> > On Wed, Mar 12, 2008 at 01:59:46 -0700,
> > Andrew Farris <lordmorgul@xxxxxxxxx> wrote:
> > >
> > > I was thinking more along the lines of just the local machine's
> > > behavior, with different connections having higher or lower priority
> > > for outbound traffic (which is often what hurts response time the
> > > most for slower connections while longer-running transfers occur).
> > > I really don't know how effective QoS is, so it may be a bad way to
> > > approach this issue.
> >
> > You would still need to write the rules that do the shaping. However,
> > some applications set QoS bits (particularly to distinguish between
> > interactive and bulk traffic), so it can be useful to look at. For
> > outbound packets you are OK; for inbound, not so much.
> >
> > > If an update connection had low priority for the bandwidth
> > > resources, that connection should be postponed whenever a
> > > higher-priority connection wants to push outbound traffic. A browser
> > > then would get to send its page requests or ACKs ahead of running
> > > transfer packets from the update utility; the result would be a much
> > > more responsive browser while still using most of the available
> > > bandwidth. Whether the QoS flags are being stripped/mangled once the
> > > traffic leaves the local machine should not really hurt that
> > > improvement, would it?
> >
> > It makes it hard to handle inbound traffic, which you may also need
> > to manage, though in a particular case that may not be a bottleneck.
> > In your case it looks like you will be needing to throttle inbound
> > traffic, so this is relevant.
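The outbound prioritization Andrew describes is exactly what tc's HTB
qdisc does. A minimal sketch, assuming an "eth0" interface and a roughly
1 Mbit uplink (both are placeholders to substitute for your own setup):

```shell
# Outbound shaping sketch with HTB (run as root).

# Root HTB qdisc; unclassified traffic falls into class 1:20.
tc qdisc add dev eth0 root handle 1: htb default 20

# Parent class capped slightly below the real uplink rate, so the queue
# builds here (where we control it) rather than in the modem's buffer.
tc class add dev eth0 parent 1: classid 1:1 htb rate 900kbit

# Interactive class: guaranteed share, may borrow up to the full rate.
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 600kbit ceil 900kbit prio 0

# Bulk class (e.g. the update utility): smaller guarantee, lower priority.
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 300kbit ceil 900kbit prio 1

# Classify by the TOS/DSCP byte applications set: packets marked
# Minimize-Delay (0x10) go to the interactive class.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip tos 0x10 0xff flowid 1:10
```

With this in place a low-priority bulk flow only borrows bandwidth the
interactive class isn't using, which is the "postponed whenever a
higher-priority connection wants to push outbound traffic" behavior.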
> > The way this shaping is done, you either drop some packets from the
> > connections you want to slow down, or you set bits in the
> > acknowledgement that say the sender should slow down as if you had
> > dropped the packet. Not all network stacks support the latter
> > feature, but I don't know what fraction do these days. It might be
> > that in practice almost everyone does.
> > So you aren't blocking outbound requests in order to prevent
> > applications from retrieving data. That kind of approach would be a
> > lot different and would have to be customized to each application.
> >
> > > I'm just thinking it may not require full end-to-end to enjoy some
> > > benefit. The incoming connection would not be slowed or postponed
> > > to let the browser respond, but by not ACKing what comes in until
> > > the outbound clears up, I think it might help anyway.
> >
> > You don't really want to drop all packets, just some. The sender is
> > supposed to back off with an exponential reduction in send rate until
> > packets stop getting dropped. If you block all of them, the
> > application will likely assume the connection has been broken and
> > stop working. Generally, throttling and giving priority to
> > low-latency packets should work fairly well.
>
> In the case of dynamic throttling we won't have any _fixed_ rate at
> which the connections assigned for updates will be able to receive
> packets. That means packets would be dropped frequently to implement
> policing. Isn't this a waste of resources?
>
> Tools like tc and tcng implement queues to control outbound data. Is
> there any similar _kind of_ option available for inbound data?
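On the inbound question: there is no queue to manage, but tc does offer
an ingress *policer* that drops packets above a configured rate, which
prods well-behaved TCP senders into backing off. A minimal sketch (again
"eth0" and the 800kbit rate are assumptions, not recommendations):

```shell
# Attach the special ingress qdisc (it can only police, not queue).
tc qdisc add dev eth0 handle ffff: ingress

# Police all inbound IP traffic to 800kbit; packets over the rate are
# simply dropped, and the senders' congestion control does the rest.
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match u32 0 0 \
    police rate 800kbit burst 20k drop flowid :1
```

The "set bits in the acknowledgement" alternative Bruno mentions is ECN
(Explicit Congestion Notification); on Linux the TCP endpoint side can
be enabled with `sysctl -w net.ipv4.tcp_ecn=1`, though it only helps
when both endpoints and the marking router support it.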
> (Obviously we can't have queues, because once a packet has been
> received it must be processed.)
>
> --
> Regards,
> Sunil Ghai

--
fedora-devel-list mailing list
fedora-devel-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-devel-list