On Sun, 2024-11-24 at 10:21 +1100, Stephen Morris wrote:
> In the past when I have lost internet access because my wifi
> disconnected from the router and reconnected, I get a popup message
> from Fedora that the wifi disconnected followed immediately by a
> popup message saying it had reconnected. That didn't happen in this
> case.

If you lose the connection to your WiFi router, your computer will try
to reconnect, and it can give you notifications about that. If your
computer didn't lose its connection, but the router stopped routing, or
something in front of it did, you won't get disconnection notices from
the computer (because the "connection" between it and the router never
changed). You may still get offline notices from software that cannot
reach a server, and you may get reconnection notices from it, too.

More info is certainly better. If you have *working* things in the
middle of non-working ones, that narrows down where the fault may lie.

> It is possible my router/modem were playing up, as afterwards I was
> having issues with maintaining internet connections in Windows while
> playing a steam game, which restarting the modem and router
> rectified.

That's a very significant bit of information. Things do glitch, and
routers are no exception; a reboot is sometimes necessary. Also, they
can receive software updates over the net, which can make them sluggish
until it's all over and done with (either self-updating, or updates
pushed to ISP-supplied routers).

It's important to note that a router has a finite number of connections
it can manage, and limited processing power available to its tiny
operating system.
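As an aside, on a Linux-based router you can sometimes see that
connection limit directly. This is a rough illustration, not a
universal diagnostic; it assumes the netfilter connection-tracking
module is loaded, and prints "not available" where it isn't:

```shell
# Compare the current number of tracked connections against the maximum
# the kernel will allow before new connections start being dropped.
for f in nf_conntrack_count nf_conntrack_max; do
    p="/proc/sys/net/netfilter/$f"
    if [ -r "$p" ]; then
        echo "$f: $(cat "$p")"
    else
        echo "$f: not available"
    fi
done
```

When the count sits near the maximum, symptoms look exactly like the
"router stopped working" case described above.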
If you have some software that has gone mad making hundreds of
connections, and not dropping them, the router can stop working. If it
gets bombarded by a flurry of external activity, it can get
overwhelmed. If you have dozens of things on your network (PCs,
laptops, phones, tablets, WiFi-controlled light fittings, etc.), you
can exceed its capacity to handle that, or to handle it well. I've had
routers that struggled to run their own built-in web-based
configuration interface.

There's also outside interference. Cluttered WiFi broadcasts by your
neighbours on the same channel can bog things down. And I had an ADSL
router get continuously knocked off by the switch-mode power supply
connected to a portable hard drive. Every couple of seconds the
radiated hash would kill the router's attempts to communicate. It was
so bad it didn't matter where in the house it was plugged in; the
computer didn't even have to be connected to the network. The moment
the drive was plugged in, even if the computer was off, its radiated
hash got into the house earth wiring, which carried it to the phone
cable going around the house.

> But if the repositories in question were not actually refreshed as
> the error messages were indicating, why did a reissue of the command
> not refresh them, had DNF invalidly marked them as refreshed when
> they actually weren't, or were the error messages bogus, or did DNF
> retry after producing the messages and not told us it had
> successfully refreshed the repositories?

I'll have to let someone else answer that specifically, but in general:
on the net, lots of things use cached data (browsers, domain name
lookups, lots more). The next time you attempt to access the same data,
the software will (or can):

* Look at its cached data, check whether it's still supposed to be
  valid (according to metadata expiry dates and its own settings about
  cache expiry), and use it.
* Reach out to the remote end and see if the data has expired and
  should be refreshed.
  It may still do this even if its cached copy is considered still in
  date.
* Try, or not try, to refresh it.
* Use the cached data if that's fine; use the cached data if it's
  unable to refresh it and the copy isn't too stale (according to
  various metadata); or abort if it considers the copy too stale.

Going on past experience: if I have done a "dnf update", then did
another one within a certain time period, no attempt was made to see if
the data needed refreshing. That time period could be longer than you'd
expect.

Some people get into the bad habit of doing a "clean all" before
updating, which increases the workload and traffic of the repo servers.
There are more nuanced controls for cleaning *some* of the local data,
but it shouldn't be necessary; it should be automatic. And if your
local data is being continually mangled, you have other problems you
should be solving.

-- 
uname -rsvp
Linux 3.10.0-1160.119.1.el7.x86_64 #1 SMP Tue Jun 4 14:43:51 UTC 2024 x86_64

Boilerplate: All unexpected mail to my mailbox is automatically deleted.
I will only get to see the messages that are posted to the mailing list.
-- 
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@xxxxxxxxxxxxxxxxxxxxxxx
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue
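P.S. For what it's worth, the general cache decision described earlier
can be sketched in a few lines. This is an illustrative toy, not dnf's
actual code; the 48-hour window and the function name are my own
assumptions, chosen only to mirror the spirit of dnf's metadata_expire
default:

```python
import time

# Assumed 48-hour freshness window, in the spirit of dnf's default
# metadata_expire; the 2x staleness tolerance is purely illustrative.
METADATA_EXPIRE = 48 * 3600

def cache_decision(fetched_at, now, refresh_ok):
    """Decide what to do with cached repo metadata.

    fetched_at -- when the cache was last refreshed (epoch seconds)
    now        -- current time (epoch seconds)
    refresh_ok -- whether reaching the remote server would succeed
    """
    age = now - fetched_at
    if age < METADATA_EXPIRE:
        return "use cache"       # still in date: no refresh attempted
    if refresh_ok:
        return "refresh"         # expired, and the remote is reachable
    if age < 2 * METADATA_EXPIRE:
        return "use stale cache" # expired, unreachable, not too stale
    return "abort"               # too stale to trust

now = time.time()
print(cache_decision(now - 3600, now, refresh_ok=False))      # recent cache
print(cache_decision(now - 60 * 3600, now, refresh_ok=True))  # expired cache
```

This is also why a second "dnf update" inside the freshness window does
nothing on the network: the first branch wins and the cache is used
as-is.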