I ran a dnf upgrade on a bunch of machines. The first one segfaulted in the cleanup phase. While I was pondering what to do about that (perhaps a cosmic ray had hit an unfortunate capacitor in one of the RAM sticks), another machine splurted this out:
Transaction Summary
================================================================================
Install    5 Packages
Upgrade  159 Packages
Remove     4 Packages

Total download size: 514 M
Is this ok [y/N]: y
Downloading Packages:
corrupted double-linked list
Aborted (core dumped)

A core dump showed this:

Stack trace of thread 71232:
#0  0x00007f401e6bf9e5 raise (libc.so.6 + 0x3c9e5)
#1  0x00007f401e6a8895 abort (libc.so.6 + 0x25895)
#2  0x00007f401e703857 __libc_message (libc.so.6 + 0x80857)
#3  0x00007f401e70ad7c malloc_printerr (libc.so.6 + 0x87d7c)
#4  0x00007f401e70babc unlink_chunk.constprop.0 (libc.so.6 + 0x88abc)
#5  0x00007f401e70e3aa _int_malloc (libc.so.6 + 0x8b3aa)
#6  0x00007f401e710235 __libc_calloc (libc.so.6 + 0x8d235)
#7  0x00007f4016b45f85 lr_malloc0 (librepo.so.0 + 0x1ef85)
#8  0x00007f4016b37e55 lr_downloadtarget_new (librepo.so.0 + 0x10e55)
#9  0x00007f4016b40842 lr_download_packages (librepo.so.0 + 0x19842)
#10 0x00007f4016ca32ea _ZN6libdnf13PackageTarget16downloadPackagesERSt6vectorIPS0_SaIS2_EEb (libdnf.so.2 + 0x14a2ea)
#11 0x00007f40151ed986 _wrap_PackageTarget_downloadPackages (_repo.so + 0x1e986)
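(For anyone who hits the same thing and wants to pull the trace from their own machine: that output is in the format systemd-coredump records, so something like the following should find it, assuming systemd-coredump is handling core dumps as it does on stock Fedora. The PID is a placeholder for whatever 'list' shows.

    coredumpctl list
    coredumpctl info <PID>
)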
This strongly suggests a bona fide bug. A rerun of 'dnf upgrade' succeeded without any fanfare.
I'm all updated out, but perhaps someone who still has a bunch of updates to install could run dnf upgrade under valgrind and see what shakes out.
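In case it helps, here is roughly what such a run might look like; just a sketch, assuming valgrind is installed. Since dnf is a Python program, setting PYTHONMALLOC=malloc makes Python use the plain glibc allocator so valgrind can see all the heap traffic, and --trace-children=yes follows any helper processes. The log file name is of course arbitrary.

    PYTHONMALLOC=malloc valgrind --trace-children=yes \
        --log-file=dnf-valgrind.log dnf upgrade

Expect it to be painfully slow, but if the librepo corruption is reproducible, valgrind should flag the bad write long before malloc aborts.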