On 8/26/2013 8:04 AM, Gleb Natapov wrote:
> On Sun, Aug 25, 2013 at 09:26:47PM -0400, Chris Metcalf wrote:
>> On 8/25/2013 7:39 AM, Gleb Natapov wrote:
>>> On Mon, Aug 12, 2013 at 04:24:11PM -0400, Chris Metcalf wrote:
>>>> This change provides the initial framework support for KVM on tilegx.
>>>> Basic virtual disk and networking are supported.
>>>>
>>> This needs to be broken down into more reviewable patches.
>>
>> I already broke out one prerequisite patch that wasn't strictly KVM-related:
>>
>> https://lkml.org/lkml/2013/8/12/339
>>
>> In addition, we have separately arranged to support booting our kernels in a way that is compatible with the Tilera booter running at the highest privilege level, which enables multiple kernel privilege levels:
>>
>> https://lkml.org/lkml/2013/5/2/468
>>
>> How would you recommend further breaking down this patch? It's pretty much just the basic support for minimal KVM. I suppose I could break out all the I/O-related stuff into a separate patch, though it wouldn't amount to much; perhaps the console could also be broken out separately. Any other suggestions?
>>
> First of all, please break out the host and guest bits, and also the I/O-related
> stuff, as you suggest (so that the guest PV bits land in a separate patch).
> Please also split out the changes to common code (not much, as far as I can see)
> with an explanation of why each is needed. (Why is kvm_vcpu_kick() not needed,
> for instance?)

I broke it down into three pieces in the end: the basic host support, the basic guest PV support, and the virtio/console support. The first piece is still by far the biggest.

I found that the generic kvm_vcpu_kick() works fine for tile, so I removed our custom version (which predated the generic one in our internal tree); a paraphrased sketch of the generic helper is appended below for reference. The explanations are now in the git commit messages.

>>> Also, can you describe the implementation a little bit? Does the tile
>>> architecture have virtualization extensions that this implementation uses,
>>> or is it a trap-and-emulate approach? If the latter, does it run unmodified
>>> guest kernels? What userspace are you using with this implementation?
>>
>> We could do full virtualization via trap-and-emulate, but we have elected to take a para-virtualized approach. Userspace runs at PL (privilege level) 0, the guest kernel runs at PL1, and the host runs at PL2. Many resources are available per-PL, and we take advantage of having two on-chip timers (for example) to handle timing independently for the host and guest kernels. We run the same userspace under either the host or the guest.
>>
> OK, thanks for the explanation. Why did you decide to do PV rather than trap
> and emulate?

Performance and simplicity; I added comments to the git commit to provide the rationale. A purely illustrative sketch of how a guest-to-host hypercall might look under this privilege-level split is also appended below.

-- 
Chris Metcalf, Tilera Corp.
http://www.tilera.com
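
For reference, the generic kvm_vcpu_kick() lives in virt/kvm/kvm_main.c. The following is a paraphrased sketch of the helper as of the 3.x kernels of this era, not a verbatim copy; check the tree for the authoritative version:

	void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
	{
		int me, cpu = vcpu->cpu;
		wait_queue_head_t *wqp = kvm_arch_vcpu_wq(vcpu);

		/* If the vcpu thread is blocked in halt, wake it up. */
		if (waitqueue_active(wqp)) {
			wake_up_interruptible(wqp);
			++vcpu->stat.halt_wakeup;
		}

		/*
		 * If the vcpu is currently running in guest mode on another
		 * cpu, send it a reschedule IPI so it exits to the host and
		 * notices any pending requests.
		 */
		me = get_cpu();
		if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
			if (kvm_arch_vcpu_should_kick(vcpu))
				smp_send_reschedule(cpu);
		put_cpu();
	}

Since a halt wakeup plus a cross-cpu IPI is all an architecture needs here, a tile-specific copy would have been pure duplication; the per-arch behavior is already captured by the kvm_arch_vcpu_wq() and kvm_arch_vcpu_should_kick() hooks.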
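
To make the privilege-level split concrete, here is a purely illustrative hypercall stub. The hypercall-number register, the argument convention, and the choice of swint vector below are assumptions for illustration only, not the actual tilegx KVM ABI:

	/*
	 * Hypothetical PL1 -> PL2 hypercall stub.  A "swint" instruction
	 * issued by the guest kernel at PL1 traps into the host kernel at
	 * PL2, which dispatches on the number in r10 and returns a result
	 * in r0 before resuming the guest.  (Register choices mirror the
	 * tile syscall convention but are an assumption here.)
	 */
	static inline long kvm_hcall1(unsigned long num, unsigned long arg)
	{
		register unsigned long r10 asm("r10") = num;  /* hcall number */
		register unsigned long r0 asm("r0") = arg;    /* arg in, result out */

		asm volatile("swint0" : "+r"(r0) : "r"(r10) : "memory");
		return r0;
	}

Userspace at PL0 keeps using the ordinary syscall vector into whichever kernel it happens to be running under, which is why the same userspace binaries run unmodified on both the host and the guest.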