Re: [58crew] RE: IETF58 - Network Status

On 19-nov-03, at 23:16, Perry E. Metzger wrote:

>> I think there is some middle ground between 25000 and 10 ms.

> 10ms is the middle ground. That's enough for a bunch of retransmits on
> modern hardware.

Retransmits on what type of hardware? At 1 Mbps, transmitting a 1500-byte packet already takes 12 ms, without any link-layer overhead, ACKs/NAKs or retransmits. End-to-end retransmits take even longer because of speed-of-light delays.
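
To put numbers on that, here's a quick back-of-the-envelope sketch in Python (illustrative figures only, ignoring link-layer overhead, contention and propagation delay):

    # Serialization delay for a single frame at various 802.11b PHY rates.
    # Illustrative only: no MAC/PHY overhead, no contention, no propagation delay.
    def serialization_ms(packet_bytes, rate_mbps):
        return packet_bytes * 8 / (rate_mbps * 1e6) * 1000

    for rate_mbps in (1, 2, 5.5, 11):
        print(rate_mbps, "Mbps:", round(serialization_ms(1500, rate_mbps), 2),
              "ms per 1500-byte frame")
    # At 1 Mbps one frame alone eats 12 ms, so a 10 ms budget doesn't even
    # cover the original transmission, let alone retransmits.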


> Coupled with aggressive FEC, that's more than enough time.

FEC sucks because it also eats away at usable bandwidth when there are no errors.
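
As a rough illustration of that trade-off (all numbers here are assumptions, not measurements):

    # Goodput with and without a fixed rate-1/2 FEC, on clean and noisy channels.
    # Link rate, code rate and loss figures are assumed for illustration.
    link_mbps  = 11.0   # nominal 802.11b rate
    code_rate  = 0.5    # rate-1/2 FEC: half the transmitted bits are redundancy
    noisy_loss = 0.30   # assumed frame loss on a noisy channel without FEC

    print("clean channel, rate-1/2 FEC:", link_mbps * code_rate, "Mbps")
    print("clean channel, no FEC      :", link_mbps, "Mbps")
    print("noisy channel, no FEC      :", link_mbps * (1 - noisy_loss), "Mbps")
    # The redundancy only pays for itself when errors would otherwise cost
    # more than the code rate does; on a clean channel it's pure overhead.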


>> But the problem with sharing the airwaves is that you can't be
>> sure how long it's going to take to deliver packets.

> Actually, the speed of light is remarkably deterministic.

Yes, but unfortunately, bit errors aren't.


> If the
> network is so loaded that you can't send a packet in that period, you
> should drop so that all the TCPs back off.

Absolutely not. This leads to constant packet loss because of minor bursts, which TCP reacts very badly to. Try setting the output queues of your friendly neighborhood router to something extremely low and you'll see what I mean.
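
A rough sketch of the effect (burst and queue sizes are assumptions picked to make the point, not measurements from any real router):

    # Packets dropped when a short TCP burst hits a tiny output queue.
    # Assume the burst arrives essentially back-to-back, so the queue
    # barely drains while it lasts.
    burst_packets = 32   # e.g. a sender releasing a full window after an ACK
    queue_packets = 5    # an "extremely low" output queue

    dropped = max(0, burst_packets - queue_packets)
    print(dropped, "of", burst_packets, "packets dropped")
    # TCP treats every one of these loss episodes as congestion and backs off,
    # even though the link may be idle again a moment later.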


> The packet dumps I got from the 802.11b networks during the worst
> periods at IETF revealed what you would readily expect -- that TCP
> collapses badly when the underlying network does something very dumb.

So let's:


1. Make sure access points don't have to contend with each other for airtime on the same channel
2. Make sure access points transmit with enough power to be heard over clients associated with other access points
3. Refrain from using too much bandwidth
4. Make use of higher-bandwidth wireless standards such as 802.11g
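
On points 3 and 4, a quick illustrative calculation (station count and per-station load are assumptions, and MAC overhead is ignored, so real capacity is noticeably lower):

    # Fraction of channel airtime consumed by a given offered load at
    # 802.11b vs 802.11g nominal rates. Numbers are assumed for illustration.
    stations         = 50     # active clients on one access point
    per_station_kbps = 100    # assumed average offered load per client

    offered_mbps = stations * per_station_kbps / 1000.0
    for phy_mbps in (11.0, 54.0):
        print("PHY", phy_mbps, "Mbps ->",
              round(100 * offered_mbps / phy_mbps), "% of nominal rate used")
    # 5 Mbps of offered load is close to what an 11 Mbps channel can really
    # deliver once MAC overhead is counted, but leaves plenty of headroom at 54 Mbps.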


> By the way, it would also be a good idea if the standard did proper
> power control of the mobile stations.

Why? Raising the power output of the stuff you want to hear over these clients is much, much simpler.


Also, all of this makes it sound like the network was very bad in Minneapolis. That isn't my experience: I usually had good bandwidth, with the exception of just a couple of sessions, and I ended up associated with an ad-hoc network only a few times.

By the way, I did some testing today and the results both agree with and contradict conventional wisdom with regard to 802.11 channel utilization. When two sets of systems communicating over 802.11b/g are close together, they'll start interfering when the channels are 3 apart (i.e., 5 and 8), slowing down data transfer significantly. This indicates that in the US only three channels can be used close together, but four in Europe: 1, 5, 9, 13.

However, when the two sets of stations are NOT close together (but still well within range), such as with a wall in between them, 3 channels apart doesn't lead to statistically significant slowdowns, and even 2 channels apart is doable: a 25% slowdown at 802.11b and a 50% slowdown at 802.11g. So that would easily give us four usable channels in the US (1, 4, 8, 11) and five in Europe (1, 4, 7, 10, 13), or even six / seven (all odd channels) in a pinch. (As always, your mileage may vary. These results were obtained with spare hardware lying around my house.)
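
For reference, here's the channel arithmetic behind those spacings (the 5 MHz channel step and the roughly 22 MHz width of an 802.11b signal are standard figures; the slowdown percentages above come from my informal tests, not from this calculation):

    # 2.4 GHz channel centers are 5 MHz apart (channel 1 = 2412 MHz, valid
    # for channels 1-13), while an 802.11b signal is roughly 22 MHz wide.
    def center_mhz(channel):
        return 2407 + 5 * channel

    signal_width_mhz = 22

    for a, b in ((1, 4), (5, 8), (1, 6), (1, 11)):
        separation = abs(center_mhz(a) - center_mhz(b))
        print("channels", a, "and", b, ":", separation, "MHz apart,",
              "overlapping" if separation < signal_width_mhz else "clear")
    # Only channels 5 or more apart (25 MHz) are fully clear on paper, hence the
    # textbook 1/6/11 answer; the tests above suggest smaller spacings can still
    # work when the interfering stations aren't right next to each other.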


