On 07/14/2011 03:36 AM, Avi Kivity wrote:
On 07/14/2011 10:14 AM, Umesh Deshpande wrote:
The following patch deals with VCPU and iothread starvation during the
migration of a guest. Currently the iothread is responsible for
performing the migration: it holds the qemu_mutex for the duration,
which prevents VCPUs from entering qemu mode and delays their return to
the guest. Migration, executed as an iohandler, also delays the
execution of other iohandlers. This patch moves the migration to a
separate thread to reduce qemu_mutex contention and iohandler
starvation.
@@ -260,10 +260,15 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int stage, void *opaque)
         return 0;
     }

+    if (stage != 3)
+        qemu_mutex_lock_iothread();
Please read CODING_STYLE, especially the bit about braces.
Does this mean that the following code is sometimes executed without
qemu_mutex? I don't think any of it is thread safe.
That was my reaction too.
I think the most rational thing to do is have a separate thread and a
pair of producer/consumer queues.
The I/O thread can push virtual addresses and sizes to the queue for the
migration thread to compress/write() to the fd. The migration thread
can then push sent regions onto a separate queue for the I/O thread to
mark as dirty.
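A minimal sketch of the kind of queue this scheme needs. All names here
(mig_queue, mig_region, and friends) are illustrative, not QEMU API; the
sketch assumes a plain pthread mutex/condvar pair, whereas QEMU would use
its own wrappers. The I/O thread would push regions with mig_queue_push()
and the migration thread would block in mig_queue_pop(); the return queue
for sent regions would be a second instance of the same structure.

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical descriptor of a guest memory range queued for sending. */
typedef struct mig_region {
    void              *host_addr;  /* virtual address of the range */
    size_t             size;       /* length in bytes */
    struct mig_region *next;
} mig_region;

/* Unbounded FIFO protected by a mutex; consumer blocks on the condvar. */
typedef struct mig_queue {
    mig_region      *head, *tail;
    pthread_mutex_t  lock;
    pthread_cond_t   cond;
} mig_queue;

static void mig_queue_push(mig_queue *q, mig_region *r)
{
    r->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail)
        q->tail->next = r;
    else
        q->head = r;
    q->tail = r;
    pthread_cond_signal(&q->cond);   /* wake the consumer */
    pthread_mutex_unlock(&q->lock);
}

static mig_region *mig_queue_pop(mig_queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (!q->head)                 /* sleep until a region arrives */
        pthread_cond_wait(&q->cond, &q->lock);
    mig_region *r = q->head;
    q->head = r->next;
    if (!q->head)
        q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    return r;
}
```

In a real implementation the queue would likely be bounded, so that a slow
write() back-pressures the I/O thread instead of letting the queue grow
without limit.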
Regards,
Anthony Liguori
Even just reading memory is not thread safe. You either have to copy it
into a buffer under lock, or convert the memory API to RCU.
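The copy-under-lock option could look something like this. The function
name copy_and_send() is hypothetical, and a plain pthread mutex stands in
for the global qemu_mutex (which QEMU takes via qemu_mutex_lock_iothread(),
as in the patch above); only the snapshot into the staging buffer happens
under the lock, so the slow write() no longer blocks VCPUs.

```c
#include <pthread.h>
#include <string.h>
#include <unistd.h>

/* Stand-in for the global qemu_mutex protecting guest memory. */
static pthread_mutex_t iothread_lock = PTHREAD_MUTEX_INITIALIZER;
static char staging[4096];           /* per-page staging buffer */

/* Copy the guest page while the lock keeps its contents stable,
 * then perform the write() with the lock dropped. */
static ssize_t copy_and_send(int fd, const void *guest_page, size_t len)
{
    pthread_mutex_lock(&iothread_lock);
    memcpy(staging, guest_page, len);
    pthread_mutex_unlock(&iothread_lock);
    return write(fd, staging, len);  /* slow I/O outside the lock */
}
```

The alternative Avi mentions, converting the memory API to RCU, would let
readers skip the lock entirely, at the cost of a much larger change.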
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html