Joachim Worringen wrote:
It's either the driver or the hardware - I suspect the latter. I
experienced significant problems with Nvidia Ethernet hardware and the
forcedeth driver under load on many machines.
My recommendation is to use another NIC if possible. I switched to
Intel NICs, which worked flawlessly in the same machines.
Alternatively, you could try to increase the kernel buffers for
network packets, since in your case you seem to lose the packets
further up the stack:
# sysctl -w net.core.rmem_max=8388608
# sysctl -w net.core.wmem_max=8388608
# sysctl -w net.core.rmem_default=65536
# sysctl -w net.core.wmem_default=65536
There are also buffers for ipv4 that you can tune, e.g.
# sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'
- I don't know whether the same applies to ipv6.
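A minimal sketch of how to read back and persist those limits (the
/etc/sysctl.conf location is an assumption, it varies by distribution):
# sysctl net.core.rmem_max net.core.rmem_default    # read back the values in effect
# echo 'net.core.rmem_max = 8388608' >> /etc/sysctl.conf     # persist across reboots
# echo 'net.core.rmem_default = 65536' >> /etc/sysctl.conf
# sysctl -p    # reload /etc/sysctl.conf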
Joachim
I already raised the kernel buffers, with no success. I doubt it's a
hardware issue (unless it's a general design flaw), because I have two
identical boards with the same problem and have also read about similar
problems other users had with forcedeth.
Changing the NIC is unfortunately not a long-term option: it's onboard
and there is no PCI(e) slot. With a USB NIC (my current workaround) I
cannot netboot, and I really don't want to run this box with two NICs
in the long run...
Especially since I do *not* have any problems with high-bandwidth
ipv4/unicast traffic, I really think this is a driver issue that can be
fixed. Does anyone have a pointer to the forcedeth code that could be
relevant in this ipv6/mcast setup?
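A rough sketch of the counters that could narrow down where the drops
happen (eth0 is just an example; not every kernel/driver exports all of
these statistics):
# ip -s link show eth0    # generic RX "dropped"/"overrun" counters at the device level
# ethtool -S eth0         # driver-specific statistics, e.g. rx fifo/missed errors
# netstat -su             # UDP "receive buffer errors" would point at the socket buffers instead
If drops only show up in the device/driver counters while the socket-level
counters stay at zero, that would support the driver theory.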
Thanks,
Christian