David Anderson wrote:
* Anthony Liguori <aliguori@xxxxxxxxxx> [2006-03-02 14:14:38]:
Just the hostname and port actually. The /xend part is always implied
by the Xend protocol.
Right. Is there a preferred formatting for the name parameter?
'http://dom0.whatever:8000' is a little annoying to parse, and doesn't
specify that the connection is to a Xen daemon, which could be a
problem for future support of other virtualization engines. How about
'xen:tcp:dom0.whatever.net:8000'?
Please no :-)
Ideally, we don't want anything specifying the protocol since this is
going to change in the near future. Specifying the port is probably
extraneous too since I don't expect people to be frequently varying the
port.
I would think changing virConnectOpen(const char *) to
virConnectOpen(const char *host, int port), with the port defaulting to
something sane when 0 is passed, would be nice.
Daniel: what do you think?
Other than that, I'm factoring out the connection step into a single
function that handles both readonly and write connections, and I'm
wondering about the handling. In virConnectOpenReadOnly, all three
methods (hypervisor, xenstore, xend) are tried in turn, and the only
criterion for success is that one succeed. However, in
virConnectOpen, all three must succeed. Would it be desirable to
unite these three, and if not could you explain why there is a
difference in handling of failure between the two?
I think virConnectOpenReadOnly should continue to fail if host != NULL.
There is no read-only TCP equivalent currently (although I've been
thinking of adding this to Xend in the future).
My question here is all to do with figuring out how to handle remote
xend connections. If the connection is remote, then the hypervisor and
xenstore connections will inevitably fail. And if that is acceptable
in the case of remote connections, then surely the two opening
functions could be factored into a single one that always tolerates
partial failure?
I'd have to think about this a bit more. Having a different open makes
it clear to the user that certain ops may fail.
Actually, looking through the code, it seems to me that the xend
interface can accomplish almost all of the current API by itself (and
the few functions it doesn't provide could probably be implemented).
Is the direct hypervisor/xenstore access kept in only for performance
reasons?
The hypervisor calls are kept for performance (although I'm not
convinced this is necessary :-)) and the xenstore access is kept for
"read-only" access. Xenstore offers (on localhost) a lesser privileged
read only port which lets you get some domain information (which is
useful for things like a Xen gnome applet).
This is my fault. The backend doesn't expose the VCPU pinning set because
I was too lazy to parse it in the backend code. You can certainly set
the number of VCPUs and do hotplugging with the existing code.
Erm, you can? Through the official API, and while the guest is
running? I grepped for 'cpu' in the source code, and that returned
only querying functions.
I'm talking about the backend code. The number of VCPUs (which is fixed
at build time) is set as part of the domain configuration information.
I didn't realize that xend_set_vcpus was the hotplugging interface!
I'll have to ask Ryan why they exposed it this way. It seems
like a bad protocol decision.
However, it's pretty easy to support. You just need a function like:
int
xend_set_vcpus(virConnectPtr xend, const char *name, int count)
{
    char buffer[1024];

    snprintf(buffer, sizeof(buffer), "%d", count);
    return xend_op(xend, name, "op", "set_vcpus", "vcpus", buffer, NULL);
}
Then plumb it up to a first-class interface.
Do you need to be able to pin VCPUs to particular physical CPUs?
Not yet (though it'd be a nice feature to have, for completeness :) ),
but being able to set the overall number of VCPUs for a domain, without
pinning them to any specific CPU, would be cool.
Yeah, okay, that's totally reasonable and should be in libvirt right
now. If you want, you can write up a quick patch to add it.
Nope, this is just what it's for :-)
Ah, good. Then patches will be coming soon :)
Excellent :-)
Regards,
Anthony Liguori
- Dave, still a little puzzled.