I recently replaced my graphics card (with an NVIDIA GT 710). A newer nvidia module was installed:

    kmod-nvidia-340xx-4.15.7-200 -> kmod-nvidia-390.42-1

I also enabled VDPAU, which had been incorrectly installed until now.

I now have a failure of 'gthumb':

$ gthumb -v
(gthumb:7097): Gdk-ERROR **: The program 'gthumb' received an X Window System error.
This probably reflects a bug in the program.
The error was 'BadLength (poly request too large or internal Xlib length erro'.
  (Details: serial 159 error_code 16 request_code 152 (DRI2) minor_code 1)
  (Note to programmers: normally, X errors are reported asynchronously;
   that is, you will receive the error a while after causing it. To debug
   your program, run it with the GDK_SYNCHRONIZE environment variable to
   change this behavior. You can then get a meaningful backtrace from your
   debugger if you break on the gdk_x_error() function.)
Trace/breakpoint trap (core dumped)

and /var/log/messages includes a trace:

Apr  2 15:03:32 e4 systemd-coredump[20472]: Process 20469 (gthumb) of user 500 dumped core.
Stack trace of thread 20469:
  0  0x00007f4312e31e51 _g_log_abort (libglib-2.0.so.0)
  1  0x00007f4312e344a1 g_log_writer_default (libglib-2.0.so.0)
  2  0x00007f4312e329ee g_log_structured_array (libglib-2.0.so.0)
  3  0x00007f4312e32ce7 g_log_structured (libglib-2.0.so.0)
  4  0x00007f43140ec351 _gdk_x11_display_error_event (libgdk-3.so.0)
  5  0x00007f43140f9813 gdk_x_error (libgdk-3.so.0)
  6  0x00007f43103da9ba _XError (libX11.so.6)
  7  0x00007f43103d78eb handle_error (libX11.so.6)
  8  0x00007f43103d8a94 _XReply (libX11.so.6)
  9  0x00007f430cb27f2b DRI2Connect (libGL.so.1)
  10 0x00007f430cb274a8 n/a (libGL.so.1)
  11 0x00007f430cb0b5c0 n/a (libGL.so.1)
  12 0x00007f430cb07c60 glXQueryVersion (libGL.so.1)
  13 0x00007f4311c1f153 _cogl_winsys_renderer_connect (libcogl.so.20)
  14 0x00007f4311bd822d cogl_renderer_connect (libcogl.so.20)
  15 0x00007f4315237824 clutter_backend_real_create_context (libclutter-1.0.so.0)
  16 0x00007f4315250a13 _clutter_feature_init (libclutter-1.0.so.0)
  17 0x00007f4315261ca9 clutter_init_real (libclutter-1.0.so.0)
  18 0x00007f43155342f6 post_parse_hook (libclutter-gtk-1.0.so.0)
  19 0x00007f4312e384f0 g_option_context_parse (libglib-2.0.so.0)
  20 0x00005564d3a9661f gth_application_local_command_line (gthumb)
  21 0x00007f43133e5e46 g_application_run (libgio-2.0.so.0)
  22 0x00005564d3a1bcfe main (gthumb)
  23 0x00007f431229d88a __libc_start_main (libc.so.6)
  24 0x00005564d3a1bd6a _start (gthumb)

Stack trace of thread 20470:
  0  0x00007f4312381a5d poll (libc.so.6)
  1  0x00007f4312e2c579 g_main_context_iterate.isra.25 (libglib-2.0.so.0)
  2  0x00007f4312e2c68c g_main_context_iteration (libglib-2.0.so.0)
  3  0x00007f4312e2c6d1 glib_worker_main (libglib-2.0.so.0)
  4  0x00007f4312e53516 g_thread_proxy (libglib-2.0.so.0)
  5  0x00007f431265936d start_thread (libpthread.so.0)
  6  0x00007f431238db4f __clone (libc.so.6)

I see that the gthumb package was updated in 2017; dnf.log says:

Aug 19 18:09:33 DEBUG ---> Package gthumb.x86_64 1:3.4.3-1.fc24 will be upgraded
Aug 19 18:09:33 DEBUG ---> Package
gthumb.x86_64 1:3.4.5-1.fc26 will be an upgrade

The update was part of an f24->f26 upgrade. There is no later version of gthumb in f26, but I do see version 3.6.0 in f27. This gthumb binary is now a year old:

$ ls -l /usr/bin/gthumb
-rwxr-xr-x 1 root root 988384 Apr 20  2017 /usr/bin/gthumb

The gthumb package depends on the libvdpau package, so I suspect some incompatibility there. Reinstalling both packages does not improve the situation.

Is gthumb still maintained? https://github.com/GNOME/gthumb suggests perusing https://wiki.gnome.org/Apps/gthumb, which is not present. Still, too many packages are involved, not least the X/nvidia ones.

Is anyone else seeing this problem? I do not want to log a bug if this is just the result of my system's long upgrade history, but I do want to resolve it. I have another f26 machine that has no problems, but it does not use vdpau (or an nvidia card).

TIA

--
Eyal Lebedinsky (fedora@xxxxxxxxxxxxxx)
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
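P.S. The Gdk note and the DRI2Connect frame above suggest two follow-up checks. A sketch of both, assuming gdb is installed (the commands are guarded so they are safe to paste on any box):

```shell
# 1. Re-run gthumb with synchronous X error reporting, as the Gdk note
#    advises, breaking on gdk_x_error() to catch the failing request
#    where it is issued.
if command -v gdb >/dev/null && command -v gthumb >/dev/null; then
    GDK_SYNCHRONIZE=1 gdb -q -ex 'break gdk_x_error' -ex run --args gthumb
else
    echo "gdb or gthumb not installed here"
fi

# 2. The trace dies in DRI2Connect inside libGL.so.1, and DRI2 is the
#    Mesa code path -- with the proprietary nvidia driver installed,
#    gthumb may be resolving the wrong libGL. List what the linker sees:
ldconfig -p | grep 'libGL\.so' || echo 'no libGL registered'
```

If the second command shows a Mesa libGL shadowing the nvidia one, that would point at driver/library packaging rather than at gthumb itself.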