On Sun, Feb 27, 2011 at 08:27:01PM +0100, Jan Kiszka wrote:
> On 2011-02-27 20:16, Alon Levy wrote:
> > On Sun, Feb 27, 2011 at 08:11:26PM +0100, Jan Kiszka wrote:
> >> On 2011-02-27 20:03, Alon Levy wrote:
> >>> On Sat, Feb 26, 2011 at 01:29:01PM +0100, Jan Kiszka wrote:
> >>>> On 2011-02-26 12:43, xming wrote:
> >>>>> When trying to start X (and it loads qxl driver) the kvm process just crashes.
> >>>
> >>> This is fixed by Gerd's attached patch (taken from rhel repository, don't know
> >>> why it wasn't pushed to qemu-kvm upstream). I'll send it to kvm list as well (separate email).
> >>
> >> Patch looks OK on first glance, but the changelog is misleading: This
> >> was broken for _both_ trees, but upstream didn't detect the bug.
> >>
> >
> > The trees the patch commit message refers to are qemu and qemu-kvm.
>
> The same did I.
>
> > qemu doesn't even have cpu_single_env.
>
> Really? Check again. :)
>
> > It didn't talk about two qemu-kvm trees.
> >
> >> My concerns regarding other side effects of juggling with global mutex
> >> in spice code remain.
> >
> > I know there used to be a mutex in spice code and during the upstreaming process it
> > got ditched in favor of the qemu global io mutex. I would have rather deferred this
> > to Gerd since he wrote this, but he is not available atm.
>
> It's not necessarily bad to drop the io mutex, but it is more tricky
> than it may appear on first glance.

The problem with not dropping it is that we may be in vga mode and create
updates synthetically (i.e. qemu-created, not driver-created) that access the
framebuffer, so they need to hold the lock to keep the framebuffer from being
updated underneath them. We drop the mutex only when we are about to call the
dispatcher, which basically waits on red_worker (a libspice-server thread) to
do some work. red_worker may in turn call back into qxl in qemu, which may try
to acquire the lock. (The many "may"s here just reflect the code paths.)

> Jan
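
To make the ordering concrete, below is a minimal, self-contained pthread
sketch of that pattern. It is illustrative only, not the actual qemu/spice
code, and every name in it is made up: the "dispatcher" drops the global
mutex before blocking on the worker, because the worker may call back and
try to take the same mutex.

/* Sketch (not qemu code): why the dispatcher must drop the global mutex
 * before waiting on the worker thread. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t global_io_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t work_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t work_cond = PTHREAD_COND_INITIALIZER;
static int work_pending, work_done;

/* Stand-in for red_worker: picks up work, then calls back into "qxl". */
static void *worker_thread(void *arg)
{
    pthread_mutex_lock(&work_mutex);
    while (!work_pending)
        pthread_cond_wait(&work_cond, &work_mutex);
    pthread_mutex_unlock(&work_mutex);

    /* Callback path: needs the global mutex (e.g. to touch device state).
     * If the dispatcher still held it, both threads would deadlock. */
    pthread_mutex_lock(&global_io_mutex);
    printf("worker: callback into device emulation\n");
    pthread_mutex_unlock(&global_io_mutex);

    pthread_mutex_lock(&work_mutex);
    work_done = 1;
    pthread_cond_signal(&work_cond);
    pthread_mutex_unlock(&work_mutex);
    return NULL;
}

/* Stand-in for the dispatcher call; caller holds global_io_mutex. */
static void dispatch_and_wait(void)
{
    pthread_mutex_unlock(&global_io_mutex);   /* drop before waiting */

    pthread_mutex_lock(&work_mutex);
    work_pending = 1;
    pthread_cond_signal(&work_cond);
    while (!work_done)
        pthread_cond_wait(&work_cond, &work_mutex);
    pthread_mutex_unlock(&work_mutex);

    pthread_mutex_lock(&global_io_mutex);     /* re-acquire afterwards */
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker_thread, NULL);

    pthread_mutex_lock(&global_io_mutex);     /* io/vcpu thread context */
    dispatch_and_wait();
    pthread_mutex_unlock(&global_io_mutex);

    pthread_join(tid, NULL);
    return 0;
}

If dispatch_and_wait() kept global_io_mutex held while waiting for work_done,
the worker's callback would block on that mutex and neither thread could make
progress, which is the deadlock the mutex drop around the dispatcher avoids.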