I've just had a segfault from one of the qemu-kvm virtual machines we run. This is qemu-kvm 0.12.2 running with the in-kernel kvm modules on Linux 2.6.32.7 on a dual quad-core Xeon E5420 machine, with KSM enabled. The backtrace looks like:

#0  vnc_update_client (vs=0x83f0, has_dirty=18) at vnc.c:908
#1  0x00000000004c015b in vnc_refresh (opaque=<value optimized out>) at vnc.c:2305
#2  0x0000000000405f50 in qemu_run_timers (ptimer_head=0x836cc0, current_time=1606536889) at /packages/qemu-kvm-0.12/src-gktOMQ/vl.c:1127
#3  0x0000000000408edf in main_loop_wait (timeout=1000) at /packages/qemu-kvm-0.12/src-gktOMQ/vl.c:4036
#4  0x0000000000421d7a in kvm_main_loop () at /packages/qemu-kvm-0.12/src-gktOMQ/qemu-kvm.c:2121
#5  0x000000000040b755 in main (argc=<value optimized out>, argv=0x7fffcc2fa1b8, envp=<value optimized out>) at /packages/qemu-kvm-0.12/src-gktOMQ/vl.c:4209

and the segfault itself is rather puzzling:

#0  vnc_update_client (vs=0x83f0, has_dirty=18) at vnc.c:908
908         if (vs->need_update && vs->csock != -1) {
(gdb) p vs
$1 = (VncState *) 0x83f0
(gdb) p *vs
Cannot access memory at address 0x83f0

The call site in vnc_refresh() looks like:

    vs = vd->clients;
    while (vs != NULL) {
        rects += vnc_update_client(vs, has_dirty);
        vs = vs->next;
    }

but when I go up a stack frame and look at the vd over which this loop would be iterating:

(gdb) up
#1  0x00000000004c015b in vnc_refresh (opaque=<value optimized out>) at vnc.c:2305
2305            rects += vnc_update_client(vs, has_dirty);
(gdb) p *vd->clients
$2 = {csock = 17, ds = 0x19b2760, dirty = {{0, 0, 0, 0} <repeats 293 times>, {50331648, 0, 0, 0}, {50331648, 0, 0, 0}, {50331648, 0, 0, 0}, {50331648, 0, 0, 0}, {16777216, 0, 0, 0}, {16777216, 0, 0, 0}, {16777216, 0, 0, 0}, {16777216, 0, 0, 0}, {16777216, 0, 0, 0}, {16777216, 0, 0, 0}, {16777216, 0, 0, 0}, {16777216, 0, 0, 0}, {50331648, 0, 0, 0}, {0, 0, 0, 0} <repeats 1742 times>}, vd = 0x1ef60b0, need_update = 0, force_update = 0, features = 0, absolute = 0, last_x = -1, last_y = -1, vnc_encoding = 0, tight_quality = 0 '\0', tight_compression = 0 '\0', major = 0, minor = 0, challenge = '\0' <repeats 15 times>, output = {capacity = 1036, offset = 0, buffer = 0x1ec7420 "RFB 003.008\n¦\177"}, input = {capacity = 0, offset = 0, buffer = 0x0}, write_pixels = 0, send_hextile_tile = 0, clientds = {flags = 0 '\0', width = 0, height = 0, linesize = 0, data = 0x0, pf = {bits_per_pixel = 0 '\0', bytes_per_pixel = 0 '\0', depth = 0 '\0', rmask = 0, gmask = 0, bmask = 0, amask = 0, rshift = 0 '\0', gshift = 0 '\0', bshift = 0 '\0', ashift = 0 '\0', rmax = 0 '\0', gmax = 0 '\0', bmax = 0 '\0', amax = 0 '\0', rbits = 0 '\0', gbits = 0 '\0', bbits = 0 '\0', abits = 0 '\0'}}, audio_cap = 0x0, as = {freq = 44100, nchannels = 2, fmt = AUD_FMT_S16, endianness = 0}, read_handler = 0x4bdb30 <protocol_version>, read_handler_expect = 12, modifiers_state = '\0' <repeats 255 times>, zlib = {capacity = 0, offset = 0, buffer = 0x0}, zlib_tmp = {capacity = 0, offset = 0, buffer = 0x0}, zlib_stream = {{next_in = 0x0, avail_in = 0, total_in = 0, next_out = 0x0, avail_out = 0, total_out = 0, msg = 0x0, state = 0x0, zalloc = 0, zfree = 0, opaque = 0x0, data_type = 0, adler = 0, reserved = 0}, {next_in = 0x0, avail_in = 0, total_in = 0, next_out = 0x0, avail_out = 0, total_out = 0, msg = 0x0, state = 0x0, zalloc = 0, zfree = 0, opaque = 0x0, data_type = 0, adler = 0, reserved = 0}, {next_in = 0x0, avail_in = 0, total_in = 0, next_out = 0x0, avail_out = 0, total_out = 0, msg = 0x0, state = 0x0, zalloc = 0, zfree = 0, opaque = 0x0, data_type = 0,
adler = 0, reserved = 0}, {next_in = 0x0, avail_in = 0, total_in = 0, next_out = 0x0, avail_out = 0, total_out = 0, msg = 0x0, state = 0x0, zalloc = 0, zfree = 0, opaque = 0x0, data_type = 0, adler = 0, reserved = 0}}, next = 0x0}
(gdb) p vd->clients.next
$3 = (VncState *) 0x0

So the first client on vd's list is fine, and its next pointer is NULL, not 0x83f0. Some sort of race where a client disconnects and its VncState is unlinked from the client list (and freed) while the vnc_refresh() loop is still iterating over it, so the loop reads vs->next from already-freed memory, maybe?
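If that's what's happening, the loop above is fragile because vnc_update_client() can end up tearing down and freeing vs (say, on a client I/O error during the update), after which the final vs = vs->next dereferences freed memory. Purely as a sketch of one possible mitigation, and not a tested patch: capturing the next pointer before the call would at least stop the loop touching a freed vs, though it still wouldn't help if the client being torn down is the *next* one in the list:

    VncState *vs = vd->clients;
    VncState *vn;

    while (vs != NULL) {
        /* Take the next pointer first: vnc_update_client() may end up
         * closing and freeing vs if the client hits an I/O error. */
        vn = vs->next;
        rects += vnc_update_client(vs, has_dirty);
        /* vs may be dangling here; only the saved vn is safe to use. */
        vs = vn;
    }

A more robust fix would presumably defer the actual free (or refcount the VncState) until nothing in the main loop still holds a pointer to the client, but the sketch shows the shape of the immediate problem.

Cheers,

Chris.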