Hi Peter,

Peter Robinson <pbrobinson@xxxxxxxxx> writes:

>> I'm having an issue with two different wandboard quad systems; one is
>> running F22, the other is running F23. When the system is under high
>> network load, specifically high transmit load, after a while the
>> network just gives up. Technically it's not VERY high load, only
>> about 2MB/s, but it's high transmit load -- high download load seems
>> to be fine as far as I can tell. I know that "gives up" isn't a very
>> technical term, but I frankly don't know what else to call it.
>
> Rev B or C?

The one I'm working on right now says Rev C1. I don't know the rev of
the other one -- I'd have to go open it up to see. I can do that if you
want the answer, but it's actually my production mythtv backend (and
still running F22), so I can't really run a bunch of tests on that.

>> After I do this the system has network again. However it's quite
>> frustrating that I have to go through all these hoops. Note that just
>> pulling the network cable by itself does not seem sufficient to reset
>> the network.
>
> what happens if you "rmmod fec; sleep 5; modprobe fec" does that have
> the same effect as all of the above?

Mostly, yes. I had to run:

  ifconfig eth0 down; rmmod fec; sleep 5; modprobe fec

(without the ifconfig down the rmmod didn't work). But that did bring
it back to life.

I should note that as of about 5:30am I was going to respond to Sean
and say that "the upgrade to 4.4.3-300.fc23.armv7hl fixed it."
However, while it IS better (lasting about 18 hours versus 2-4), it
did eventually die around 6:30am (when I manually ran a du -sh to see
how much of the backup had completed in 18 hours). So clearly
something between 4.2.3 and 4.4.3 improved the situation, but didn't
completely correct it.

>> Is this a hardware problem or a software problem (or a combination of
>> the two)?
>> I've had it happen on this one system three times today; I can
>> definitely reliably repeat it (although it does take a couple hours
>> until it dies). It's also happened on another system, but I've not
>> seen it happen since I stopped pulling data from it.
>
> If it's the former it should be able to be worked around with the
> latter. I've not seen it but then I don't use my WBQ for high load.
> The i.MX6 onboard NICs do have a throughput issue in that they can't
> do line-speed Gbit, but rather top out around 450mbps (if memory
> serves), but that shouldn't affect stability.

Right now I think I'm CPU bound via encfs running AES, so I think it's
fine that I can't hit the full 1Gbps. However, I suspect the current
set of ARM solutions available might not be the right platform for my
backup server. Although openssl speed on my Wandboard says I should be
getting 20MB/s, I appear to only be getting about 1MB/s. (My laptop
seems to be able to do about 100MB/s according to openssl -- strangely,
"openssl engine" does not report "aesni" on my F23 x86_64 laptop).

> Peter

-derek

-- 
       Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
       Member, MIT Student Information Processing Board  (SIPB)
       URL: http://web.mit.edu/warlord/    PP-ASEL-IA     N1NWH
       warlord@xxxxxxx                        PGP key available

_______________________________________________
arm mailing list
arm@xxxxxxxxxxxxxxxxxxxxxxx
http://lists.fedoraproject.org/admin/lists/arm@xxxxxxxxxxxxxxxxxxxxxxx
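[Editor's note: the manual recovery sequence discussed in the thread
(ifconfig down, rmmod fec, sleep, modprobe fec) can be wrapped in a
small helper script. This is only a sketch: the interface name eth0
and module name fec come from the thread, while the DRY_RUN preview
knob and the final ifconfig up are my own assumptions.]

```shell
#!/bin/sh
# Sketch of the fec reset sequence from the thread, as a helper.
# DRY_RUN defaults to ON, so by default this only prints what it
# would do; set DRY_RUN=0 and run as root to actually bounce the
# driver.  The trailing "ifconfig up" is an addition -- on a system
# with NetworkManager the interface may come back up on its own.

reset_fec() {
    iface="${1:-eth0}"
    # The ifconfig-down step matters: per the thread, rmmod fec
    # fails unless the interface is taken down first.
    for cmd in "ifconfig $iface down" \
               "rmmod fec" \
               "sleep 5" \
               "modprobe fec" \
               "ifconfig $iface up"; do
        if [ "${DRY_RUN:-1}" = "1" ]; then
            echo "would run: $cmd"
        else
            $cmd || return 1
        fi
    done
}
```

Paired with a cron job and a ping check against the default gateway,
this could act as a crude watchdog until the underlying driver issue
is fixed.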