Re: [Qemu-devel] Re: [libvirt] Re: [PATCH 2/3] Introduce monitor 'wait' command

Jamie Lokier wrote:
> Anthony Liguori wrote:
>> It doesn't. When an app enables events, we would start queuing them, but if the app doesn't consume them in a timely manner (or at all), we start leaking memory badly.
>>
>> We want to be robust even in the face of poorly written management apps/scripts, so we need some expiration function too.
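
To make that concrete, what I have in mind is a bounded queue with an expiration time. Rough sketch only; the names (MonEvent, mon_event_push) and the 10-minute / 1024-entry limits are invented for illustration, not what the patch does:

#include <stdlib.h>
#include <string.h>
#include <time.h>

#define EVENT_QUEUE_MAX    1024   /* hard cap on queued events */
#define EVENT_EXPIRE_SECS   600   /* drop anything older than 10 minutes */

typedef struct MonEvent {
    struct MonEvent *next;
    time_t timestamp;
    char *data;
} MonEvent;

typedef struct MonEventQueue {
    MonEvent *head, *tail;
    int count;
} MonEventQueue;

static void mon_event_expire(MonEventQueue *q, time_t now)
{
    /* Enforce both the age limit and the hard cap so a consumer that
     * never reads cannot make us leak memory. */
    while (q->head &&
           (now - q->head->timestamp > EVENT_EXPIRE_SECS ||
            q->count > EVENT_QUEUE_MAX)) {
        MonEvent *ev = q->head;
        q->head = ev->next;
        if (!q->head) {
            q->tail = NULL;
        }
        q->count--;
        free(ev->data);
        free(ev);
    }
}

static void mon_event_push(MonEventQueue *q, const char *data)
{
    MonEvent *ev = malloc(sizeof(*ev));

    ev->next = NULL;
    ev->timestamp = time(NULL);
    ev->data = strdup(data);
    if (q->tail) {
        q->tail->next = ev;
    } else {
        q->head = ev;
    }
    q->tail = ev;
    q->count++;
    mon_event_expire(q, ev->timestamp);
}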

> What happens when an app stops reading the monitor channel for a
> little while, and there's enough monitor output to fill TCP buffers or
> terminal buffers?  Does it block QEMU?  Does QEMU drop arbitrary bytes
> from the stream, corrupting the output syntax?

Depends on the type of character device; they all have different properties in this regard. Basically, you're stuck in a losing proposition: either you drop output, buffer memory indefinitely, or put the application to sleep. Different character devices make different trade-offs.
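
If it helps to see it spelled out, the losing proposition boils down to picking one of three policies when the consumer stops draining. Illustrative only; the enum and the fixed-size buffer are invented for the example and are not how the chardev code is actually structured:

#include <string.h>

typedef enum {
    POLICY_DROP,    /* discard output: the reader sees a corrupted stream */
    POLICY_BUFFER,  /* queue in memory: grows without bound if never read */
    POLICY_BLOCK,   /* wait for space: QEMU itself stalls                 */
} FullPolicy;

#define OUTBUF_MAX 4096
static char outbuf[OUTBUF_MAX];
static int  outbuf_len;

/* Returns how many bytes were accepted; 0 means "would block". */
static int chr_write_when_full(FullPolicy policy, const char *buf, int len)
{
    switch (policy) {
    case POLICY_DROP:
        return len;                          /* pretend we wrote it */
    case POLICY_BUFFER:
        if (outbuf_len + len > OUTBUF_MAX) {
            len = OUTBUF_MAX - outbuf_len;   /* the real case has no cap */
        }
        memcpy(outbuf + outbuf_len, buf, len);
        outbuf_len += len;
        return len;
    case POLICY_BLOCK:
        return 0;                            /* caller must retry later */
    }
    return -1;
}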

> If you send events only to the monitor which requests them, then you
> could say that they are sent immediately to that monitor, and if the
> app stops reading the monitor, whatever normally happens when it stops
> reading happens to these events.
>
> In other words, no need for arbitrary expiration time.  Makes it
> deterministic at least.

You're basically saying that if nothing is connected, drop the events; if a monitor is connected, do a monitor_printf() so that you're never queuing events. Entirely reasonable, and I've considered it.

However, I do like the idea of QEMU queuing events for a certain period of time. Not everyone always has something connected to a monitor. I may notice that my NFS server (which runs in a VM) is not responding, VNC into the system, switch to the monitor, and take a look at the event log. If I can get the past 10 minutes of events, I may see something useful like a host I/O failure.
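
Building on the sketch above (still illustrative, with a plain FILE * and fprintf() standing in for monitor output), replaying whatever is still inside the retention window when a monitor attaches would look roughly like:

#include <stdio.h>

static void mon_replay_events(FILE *out, MonEventQueue *q)
{
    time_t now = time(NULL);
    MonEvent *ev;

    mon_event_expire(q, now);               /* trim anything past the window */
    for (ev = q->head; ev; ev = ev->next) {
        fprintf(out, "[%ld] %s\n", (long)ev->timestamp, ev->data);
    }
}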

Monitor "sessions" are ill-defined though b/c of things like tcp:// reconnects so I wouldn't want to do that.

> Oh dear.  Is defining it insurmountable?
>
> Why can't each TCP (re)connection be a new monitor?

You get a notification on reconnect but not on disconnect. Basically, CharDriverState is not designed around a connect model; the fact that it has any notion of reconnect today is really a big hack.

CharDriverState could definitely use a rewrite.  It hasn't aged well at all.
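
Purely hypothetical, but a connection-aware interface would only need symmetric notifications, something like:

typedef struct CharConnectionOps {
    void (*connected)(void *opaque);     /* a peer attached    */
    void (*disconnected)(void *opaque);  /* the peer went away */
    void *opaque;
} CharConnectionOps;

With both callbacks, "one monitor session per TCP connection" becomes well-defined: start a session (and replay any retained events) on connect, tear it down on disconnect. Today we effectively get only the first half, and even that is bolted on.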

Regards,

Anthony Liguori

--
Libvir-list mailing list
Libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list
