(2012/01/27 9:59), Takuya Yoshikawa wrote:
We have seen delays of over 30 seconds doing a large (128GB) unmap.
It'd be nicer to check if the amount of work to be done by the entire
flush is less than the work to be done iterating over each HVA page,
but that information isn't currently available to the arch-independent
part of KVM.
Using the number of (active) shadow pages may be one way.
See kvm->arch.n_used_mmu_pages.
Ah, sorry, you are looking for arch-independent information.
Better ideas would be most welcome ;-)
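Just to make the idea concrete, here is a rough sketch, not taken from any
existing patch: kvm_arch_nr_mmu_pages() is an accessor I made up that would
wrap kvm->arch.n_used_mmu_pages, the threshold is arbitrary, and the flush
call just stands for "drop all shadow pages"; only the comparison itself is
the point.

/*
 * Illustrative sketch only -- none of this is from an existing patch.
 * kvm_arch_nr_mmu_pages() is an assumed accessor wrapping
 * kvm->arch.n_used_mmu_pages (x86 only today).
 */
static int unmap_hva_range(struct kvm *kvm, unsigned long start,
			   unsigned long end)
{
	unsigned long npages = (end - start) >> PAGE_SHIFT;
	unsigned long hva;
	int need_tlb_flush = 0;

	/*
	 * If walking every page in the range would cost more than we
	 * have shadow pages in total, dropping all shadow pages at
	 * once should be cheaper than the per-page rmap walks.
	 */
	if (kvm_arch_nr_mmu_pages(kvm) < npages) {
		kvm_arch_flush_shadow(kvm);
		return 1;
	}

	for (hva = start; hva < end; hva += PAGE_SIZE)
		need_tlb_flush |= kvm_unmap_hva(kvm, hva);

	return need_tlb_flush;
}

Of course this only works where the shadow page count is visible, which is
exactly the arch-independence problem you pointed out.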
I will soon, this weekend if possible, send a patch series which may
result in speeding up the kvm_unmap_hva() loop.
... and I also need to check whether my work can be implemented naturally
in an arch-independent manner.
Takuya
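For reference, the loop I mean is the per-page walk in
kvm_mmu_notifier_invalidate_range_start() in virt/kvm/kvm_main.c, abridged
from memory below, so details may differ from your tree.  For a 128GB range
it runs roughly 33 million times (128GB / 4KB pages), which is consistent
with the 30+ second delay you reported.

	/* Abridged from memory; not a verbatim copy of kvm_main.c. */
	spin_lock(&kvm->mmu_lock);
	kvm->mmu_notifier_count++;
	for (; start < end; start += PAGE_SIZE)
		need_tlb_flush |= kvm_unmap_hva(kvm, start);
	if (need_tlb_flush)
		kvm_flush_remote_tlbs(kvm);
	spin_unlock(&kvm->mmu_lock);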
Though my work was done to optimize a different thing, dirty logging,
I think this loop will also be sped up.
I have confirmed that dirty logging improved significantly, so I hope
that your case will, too.
So, in addition to your patch, please check, if possible, to what extent
my patch series helps your case.
Thanks,
Takuya