Hello everyone,
I'm trying to figure out what's causing the behavior in the benchmark
described below. In short, my question is:
why are multiple connections from the same host more efficient than
the same number of connections from different hosts?
I suspect it's something kernel-related, but a confirmation would help
a lot.
I have one receiver process; let's call it R.
It accepts incoming connections and receives data using select().
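
For reference, R's loop is essentially the sketch below (simplified,
most error handling omitted; the port and buffer size here are just
placeholders, not the real values):

/* Simplified sketch of R: accept connections on a listening socket and
 * drain data from every connected sender with select(). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define PORT 5000        /* placeholder port */

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 8);

    fd_set master;
    FD_ZERO(&master);
    FD_SET(listen_fd, &master);
    int max_fd = listen_fd;
    char buf[65536];

    for (;;) {
        fd_set readable = master;        /* select() modifies the set */
        if (select(max_fd + 1, &readable, NULL, NULL, NULL) < 0) {
            perror("select");
            return 1;
        }
        for (int fd = 0; fd <= max_fd; fd++) {
            if (!FD_ISSET(fd, &readable))
                continue;
            if (fd == listen_fd) {
                int conn = accept(listen_fd, NULL, NULL);
                if (conn >= 0) {
                    FD_SET(conn, &master);
                    if (conn > max_fd)
                        max_fd = conn;
                }
            } else {
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0) {            /* sender closed or error */
                    close(fd);
                    FD_CLR(fd, &master);
                }
                /* in the real code the bytes are counted to compute
                 * the receive rate */
            }
        }
    }
}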
I have 3 sender processes: S1, S2, and S3.
They send data to R at a fixed rate of 200 Mbit/s each.
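
Each sender paces itself with a simple write-then-sleep loop, roughly
like the sketch below (again simplified; R's address, the port, and the
chunk size are placeholders). The pacing here is crude, so the
200 Mbit/s is an average rather than a perfectly smooth rate:

/* Simplified sketch of a sender: write fixed-size chunks and sleep
 * between writes so the average rate is roughly 200 Mbit/s. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define CHUNK 65536                     /* bytes per write */
#define RATE_BPS (200e6 / 8.0)          /* 200 Mbit/s in bytes/s */

int main(int argc, char **argv)
{
    const char *ip = argc > 1 ? argv[1] : "10.0.0.1";  /* R's address */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);        /* must match R's port */
    inet_pton(AF_INET, ip, &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    char buf[CHUNK];
    memset(buf, 'x', sizeof(buf));

    /* time budget per chunk at the target rate (~2.6 ms here) */
    double interval = CHUNK / RATE_BPS;
    struct timespec ts;
    ts.tv_sec = (time_t)interval;
    ts.tv_nsec = (long)((interval - (double)ts.tv_sec) * 1e9);

    for (;;) {
        if (write(fd, buf, sizeof(buf)) < 0) {
            perror("write");
            break;
        }
        /* crude pacing: ignores the time spent inside write() */
        nanosleep(&ts, NULL);
    }
    close(fd);
    return 0;
}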
If S1, S2, and S3 all run on the same machine, I get better throughput
than when each of them runs on a different machine
(in both cases R is on yet another machine).
Example:
R on host0; S1, S2, S3 all on host2:
  R receives at 600 Mbit/s
R on host0; S1 on host1, S2 on host2, S3 on host3:
  R receives at 480 Mbit/s
This looks counterintuitive to me; I expected the opposite, since in
the second case the senders don't have to share the network card and
the processor
(not that I expect the processor or the network card to be the
bottleneck...).
Some more details:
the hosts above are nodes in a Linux cluster connected by a dedicated
full-duplex Gigabit Ethernet switch.
They are running kernel 2.6.24-24-generic (the latest Ubuntu kernel, I
guess).