On 03/09/2011 02:21 PM, Corentin Chary wrote:
The threaded VNC server messes with QEMU fd handlers without any kind
of locking, and that can cause some nasty race conditions. The
IO-Thread provides appropriate locking primitives to avoid that. This
patch makes CONFIG_VNC_THREAD depend on CONFIG_IO_THREAD, and adds
lock and unlock calls around the two faulty calls.

Thanks to Jan Kiszka for helping me solve this issue.

Cc: Jan Kiszka <jan.kiszka@xxxxxx>
Signed-off-by: Corentin Chary <corentin.chary@xxxxxxxxx>
---
The previous patch was total crap: it introduced race conditions, and
probably crashes on client disconnection.

 configure           |    9 +++++++++
 ui/vnc-jobs-async.c |   24 +++++++++++++++++++-----
 2 files changed, 28 insertions(+), 5 deletions(-)

diff --git a/configure b/configure
index 5513d3e..c8c1ac1 100755
--- a/configure
+++ b/configure
@@ -2455,6 +2455,15 @@ if test \( "$cpu" = "i386" -o "$cpu" = "x86_64" \) -a \
   roms="optionrom"
 fi
 
+# VNC Thread depends on IO Thread
+if test "$vnc_thread" = "yes" -a "$io_thread" != "yes"; then
+  echo
+  echo "ERROR: VNC thread depends on IO thread which isn't enabled."
+  echo "Please use --enable-io-thread if you want to enable it."
+  echo
+  exit 1
+fi
+
 echo "Install prefix    $prefix"
 echo "BIOS directory    `eval echo $datadir`"
diff --git a/ui/vnc-jobs-async.c b/ui/vnc-jobs-async.c
index f596247..d0c6f61 100644
--- a/ui/vnc-jobs-async.c
+++ b/ui/vnc-jobs-async.c
@@ -190,6 +190,18 @@ static void vnc_async_encoding_end(VncState *orig, VncState *local)
     queue->buffer = local->output;
 }
 
+static void vnc_worker_lock_output(VncState *vs)
+{
+    qemu_mutex_lock_iothread();
+    vnc_lock_output(vs);
+}
+
+static void vnc_worker_unlock_output(VncState *vs)
+{
+    vnc_unlock_output(vs);
+    qemu_mutex_unlock_iothread();
+}
+
 static int vnc_worker_thread_loop(VncJobQueue *queue)
 {
     VncJob *job;
@@ -211,11 +223,11 @@ static int vnc_worker_thread_loop(VncJobQueue *queue)
         return -1;
     }
 
-    vnc_lock_output(job->vs);
+    vnc_worker_lock_output(job->vs);
     if (job->vs->csock == -1 || job->vs->abort == true) {
         goto disconnected;
     }
-    vnc_unlock_output(job->vs);
+    vnc_worker_unlock_output(job->vs);
 
     /* Make a local copy of vs and switch output buffers */
     vnc_async_encoding_start(job->vs, &vs);
@@ -236,7 +248,7 @@ static int vnc_worker_thread_loop(VncJobQueue *queue)
             /* output mutex must be locked before going to
              * disconnected:
              */
-            vnc_lock_output(job->vs);
+            vnc_worker_lock_output(job->vs);
             goto disconnected;
         }
@@ -255,7 +267,7 @@ static int vnc_worker_thread_loop(VncJobQueue *queue)
     vs.output.buffer[saved_offset + 1] = n_rectangles & 0xFF;
 
     /* Switch back buffers */
-    vnc_lock_output(job->vs);
+    vnc_worker_lock_output(job->vs);
     if (job->vs->csock == -1) {
         goto disconnected;
     }
@@ -266,10 +278,12 @@ disconnected:
     /* Copy persistent encoding data */
     vnc_async_encoding_end(job->vs, &vs);
     flush = (job->vs->csock != -1 && job->vs->abort != true);
-    vnc_unlock_output(job->vs);
+    vnc_worker_unlock_output(job->vs);
 
     if (flush) {
+        qemu_mutex_lock_iothread();
         vnc_flush(job->vs);
+        qemu_mutex_unlock_iothread();
     }
 
     vnc_lock_queue(queue);
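[Editor's illustration] The key invariant the patch introduces is a fixed
lock order: the worker thread always takes the global IO-thread mutex
before the per-connection output lock, and releases the two in reverse
order, so any fd-handler update made while the output is locked is
serialized against the main loop. Below is a minimal, self-contained
sketch of that discipline in plain pthreads; global_lock, output_lock,
shared_fd_state and the worker_* helpers are illustrative stand-ins
invented for this example, not QEMU API.

/* Lock-ordering sketch: coarse lock first, fine lock second,
 * released in reverse order, so the order stays acyclic. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t output_lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_fd_state;            /* stands in for the fd-handler table */

static void worker_lock_output(void)
{
    pthread_mutex_lock(&global_lock);  /* coarse "IO-thread" lock first... */
    pthread_mutex_lock(&output_lock);  /* ...then the finer output lock    */
}

static void worker_unlock_output(void)
{
    pthread_mutex_unlock(&output_lock); /* release in reverse order */
    pthread_mutex_unlock(&global_lock);
}

static void *worker(void *arg)
{
    worker_lock_output();
    shared_fd_state = 1;               /* mutate shared state under both locks */
    worker_unlock_output();
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);

    pthread_mutex_lock(&global_lock);  /* the "main loop" needs only the coarse lock */
    printf("fd state: %d\n", shared_fd_state);
    pthread_mutex_unlock(&global_lock);

    pthread_join(t, NULL);
    return 0;
}

In the patch itself, vnc_worker_lock_output()/vnc_worker_unlock_output()
follow exactly this pattern, with qemu_mutex_lock_iothread() playing the
role of global_lock.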
Acked-by: Paolo Bonzini <pbonzini@xxxxxxxxxx> for stable.

For 0.15, I believe an iohandler-list lock is a better solution.

Paolo
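[Editor's illustration] To make the alternative Paolo mentions concrete:
instead of taking the big IO-thread mutex around every output operation,
only the fd-handler list itself would be guarded by a dedicated mutex,
taken both by registration calls and by the main loop while it walks the
list. A rough sketch under those assumptions follows; iohandler_lock and
the *_locked wrapper are invented names, and only qemu_set_fd_handler(),
QemuMutex and IOHandler are the real 0.14-era interfaces.

/* Hypothetical iohandler-list lock; initialized once at startup with
 * qemu_mutex_init(&iohandler_lock). The main loop's select() walk
 * would have to take the same mutex for this to be safe. */
static QemuMutex iohandler_lock;

int qemu_set_fd_handler_locked(int fd, IOHandler *fd_read,
                               IOHandler *fd_write, void *opaque)
{
    int ret;

    qemu_mutex_lock(&iohandler_lock);
    /* The list mutation is now serialized against a concurrent
     * main loop without holding the global IO-thread mutex. */
    ret = qemu_set_fd_handler(fd, fd_read, fd_write, opaque);
    qemu_mutex_unlock(&iohandler_lock);
    return ret;
}

With something like this, the VNC worker could register and unregister
its socket handlers without stalling the whole VM behind the global lock.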