Rick Stevens writes:
> IIRC, on process startup ld checks to see if the desired _shared_ library is already present in RAM and only loads it from disk if no copy already exists in memory (that's the whole point of shared libraries--only one copy of the _code_ section is needed). So even if a
Actually, it's not the runtime loader that explicitly does this, itself. All the runtime loader does is open the shared library and mmap it into the process's address space. It's actually the kernel that notices that the same inode is already mmapped, and simply links the already-mmapped pages into the new process, marking them copy-on-write (which will only make a difference for the data sections' pages, since the code sections are read-only and will never get copied).
It's the kernel's job to keep track of these things, not userspace's.
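To make that concrete, here's a minimal C sketch (the file name and its contents are made up, and error checking is omitted for brevity). A MAP_PRIVATE mapping, which is essentially how the loader maps a library, keeps referencing the original inode even after the file is deleted and replaced:

/* mmap-demo.c: a private mapping survives the mapped file being
 * deleted and replaced, because the mapping is tied to the inode,
 * not the name. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
    write(fd, "old library\n", 12);

    /* Map the file, roughly the way ld.so maps a library. */
    char *map = mmap(NULL, 12, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);

    /* Delete and recreate the file, the way an upgrade replaces it. */
    unlink("demo.txt");
    fd = open("demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
    write(fd, "new library\n", 12);
    close(fd);

    /* The mapping still shows the old inode's contents. */
    fwrite(map, 1, 12, stdout);   /* prints "old library" */
    munmap(map, 12);
    return 0;
}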
> library was updated, the new version won't be used unless _all_ processes currently using the old version shut down and a new process is launched that needs that library. The only way to ensure you're using the latest and greatest version of any given library is to do a reboot to kill all the existing processes. Whether to run a new kernel at that reboot is up to you.
I am 100% confident that this is not true. I'm so confident that I don't even want to bother building a simple demonstration, with a helloworld() sample shared library, that would trivially show that this is not true.
I build and install my shared libraries while an existing running daemon still has the old, now-uninstalled version mmapped in its process space. Sometimes I even go through a build/upgrade cycle more than once before restarting the daemon. I have no issues, whatsoever, testing new code that links to the new version, while the old daemon putters along until I restart it. If this claim were actually true, I would be linking with the new C++ library, and its changed ABI, at build time, but still loading the old version at runtime (because the existing daemon still has the old shared library loaded), and I would get immediate segfaults. That would be a rather rude, and impolite, thing to do.
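For anyone who does want to bother, a minimal sketch of that demonstration might look like this. helloworld(), hello.c, libhello.so, and daemon-demo are, of course, all made-up names:

/* daemon-demo.c: a stand-in for the long-running daemon.
 *
 * hello.c, version 1:
 *   #include <stdio.h>
 *   void helloworld(void) { puts("v1"); }
 *
 * Build and run:
 *   gcc -shared -fPIC -o libhello.so hello.c
 *   gcc -o daemon-demo daemon-demo.c -ldl
 *   ./daemon-demo &
 *
 * Change hello.c to print "v2", then replace the library the way
 * rpm replaces files, with a rename:
 *   gcc -shared -fPIC -o libhello.so.new hello.c
 *   mv libhello.so.new libhello.so
 *
 * Start a second ./daemon-demo: the first keeps printing "v1" from
 * the now-nameless old inode, the second prints "v2". */
#include <dlfcn.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    void *lib = dlopen("./libhello.so", RTLD_NOW);
    if (!lib) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }

    void (*helloworld)(void) =
        (void (*)(void))dlsym(lib, "helloworld");
    if (!helloworld) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }

    for (;;) {          /* putter along, like the daemon */
        helloworld();
        sleep(1);
    }
}

The same holds with a changed ABI: the already-running process keeps executing the old code from the old inode, while anything newly started links and loads the new one.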
This is not a novel concept, and Unix worked this way long before Linux ever existed. You could open a filehandle, replace the file, and have the existing process continue using the old file without any issues, while all new processes get the new one.
This basic concept of how Unix handles inodes has been common knowledge for many decades. When a file's link count drops to 0, the kernel does not physically delete it until all existing open file descriptors for the same inode are also closed. Until that happens, there is no noteworthy difference between this open file descriptor and any other one. If some other process happens to create a new file with the same name, purely by luck of the draw, the kernel will hardly notice, or care, and will then gladly offer its services, for accessing the contents of the new file, to any other process that has the requisite permissions to open it. Perhaps even to the same process that still has the deleted file open on another file descriptor: it can open() the same filename and get the new file instead.
Let's do a quick experiment: open two terminal windows, and in the first one execute the following in /tmp (or /var/tmp, if you like that directory better):
[mrsam@octopus tmp]$ cat >foo
The quick
Brown fox
Jumped over
The lazy dog's
Tail
<<<<CTRL-D>>>>
[mrsam@octopus tmp]$ exec 3<foo
[mrsam@octopus tmp]$ while read bar
do
read foobar <&3
echo $foobar
done
<<<<ENTER>>>>
The quick
<<<<ENTER>>>>
Brown fox

Now, leave this terminal window, for just a teensy-weensy moment, and switch to the second one. There, we'll execute the following:
[mrsam@octopus tmp]$ rm foo # Buh-bye!
[mrsam@octopus tmp]$ cat >foo
Mary had a little lamb
its fleece was white as snow
and everywhere mary went
the lamb was sure to go.
<<<<CTRL-D>>>>
[mrsam@octopus tmp]$ cat foo
Mary had a little lamb
its fleece was white as snow
and everywhere mary went
the lamb was sure to go.
[mrsam@octopus tmp]$

We will now return to the first terminal, and drop the mic:

<<<<ENTER>>>>
Jumped over
<<<<ENTER>>>>
The lazy dog's
^C

Heavens to Betsy! One can delete a file, replace it, use it, and still have some other existing process have no issues, whatsoever, screwing around with the deleted file. For a brief moment we witnessed something amazing: two processes had the same filename open, one reading the new file, the other keeping its tenuous grasp on the old file, still able to continue reading it afterwards.
(Feel free to repeat this experiment by creating "foo.new" and then renaming it to "foo", the way dnf/rpm does it; the results will be the same.)
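Here's that variant from the system-call point of view, as a minimal C sketch (same made-up file names as above, error checking omitted): rename(2) atomically points the name at the new inode, while a descriptor opened beforehand keeps reading the old one.

/* rename-demo.c: the "foo.new" variant as system calls. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("foo", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    write(fd, "The quick\n", 10);
    close(fd);

    int oldfd = open("foo", O_RDONLY);  /* the "first terminal" */

    fd = open("foo.new", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    write(fd, "Mary had\n", 9);
    close(fd);
    rename("foo.new", "foo");           /* how rpm installs a file */

    char buf[10];
    ssize_t n = read(oldfd, buf, sizeof(buf));
    fwrite(buf, 1, n, stdout);          /* prints "The quick" */
    close(oldfd);
    return 0;
}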
There is no valid technical reason why a live 'dnf upgrade' should not work. Period. Full stop. There's nothing else to discuss. The subject matter is closed. Any actual problems that happen must be due solely to crappy software, somewhere. The only undetermined piece of information is the precise identity of the crappy software in question (which would be responsible for the aforementioned problems), but it won't be the upgrade process per se, and there will be no valid, solid, technical excuse for it.
Whether the unidentified crappy software in question is actually dnf, or some crappy GUI wrapper for dnf that flips out when its own shared libraries get replaced (and it would REALLY have to go out of its way, almost be intentionally crappy, to even notice that its own shared libraries were replaced; see above), or whatever's actually being upgraded, is something that someone else can figure out.
I'm not disputing that a live dnf upgrade might be problematic in some cases. It's just that there is no valid, fundamental, technical reason why it must be a problem that cannot be avoided. Everything but the kernel itself (including the C library, and even the unmentionable abomination of an init process) should be flawlessly[1] upgradable. After all, this is Linux and not Microsoft Windows.
[1] flawlessly, adv.: at a minimum, "dnf upgrade -y && reboot" should finish upgrading all packages and successfully reboot the system; typically the system is expected to remain perfectly stable without rebooting, or at least stable enough to reboot manually.