Dong, Eddie wrote:
> Avi Kivity wrote:
>> On 06/28/2010 10:30 AM, Dong, Eddie wrote:
>>>> Several milliseconds of non-responsiveness may not be acceptable for
>>>> some applications. So I think queue_work_on() and a clflush loop is
>>>> better than an IPI and wbinvd.
>>>>
>>> Probably we should make it configurable. For RT usage models, we do
>>> care about responsiveness more than performance, but for the typical
>>> server usage model, we'd better focus on performance on this issue.
>>> WBINVD may perform much better than CLFLUSH, and a malicious guest
>>> repeatedly issuing wbinvd may greatly impact system performance.
>>>
>> I'm not even sure clflush can work. I thought you could loop on just
>> the cache size, but it appears you'll need to loop over the entire
>> guest address space, which could take ages.
>
> If the RT usage model becomes reality, we may have to do it this way,
> even at the cost of huge overhead :)
> Are there any RT customers here?

Yes, I know of a few (RT host + the typical non-RT guest). One case
already has to deal with wbinvd because it runs on older HW without
trapping support. The issue is mitigated there by CPU isolation. But
shared caches remain problematic, also with the new approach here
(wbinvd not only flushes KVM's memory...).

>
>> So I guess we'll have to settle for wbinvd, just avoiding it when the
>> hardware allows us to.
>
> Yes, for now I agree we can just use wbinvd to emulate wbinvd :)

Having a switch would still be useful, even for the case where the
guest may have a real need. Maybe controllable by user space, maybe
something like emulate / skip / skip+report. There might be guests
issuing wbinvd from drivers of devices that aren't passed through (I'm
thinking of graphics adapters) while they don't do so for the
passed-through ones. In that case, ignoring should be fine.

Jan

--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
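
[Editor's note: a minimal sketch of the clflush loop discussed above,
for illustration only. It assumes a flat (start, len) host-virtual
range and a fixed 64-byte cache line size; real code would read the
line size from CPUID and would have to walk every memslot of the
guest, which is exactly why looping over the whole guest address space
can take ages. The function name is made up for this example.]

#include <stddef.h>
#include <stdint.h>

/*
 * Flush a memory range cache line by cache line instead of
 * broadcasting WBINVD via IPI. Assumes 64-byte cache lines.
 */
static void flush_range_clflush(void *start, size_t len)
{
	const size_t line = 64;		/* assumed cache line size */
	char *p = (char *)((uintptr_t)start & ~(uintptr_t)(line - 1));
	char *end = (char *)start + len;

	for (; p < end; p += line)
		asm volatile("clflush %0" : "+m" (*p));

	/* clflush is only ordered with respect to a fence */
	asm volatile("mfence" ::: "memory");
}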
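
[Editor's note: a rough sketch of the emulate / skip / skip+report
switch suggested above, assuming a single global mode set from user
space. The enum, the mode variable and handle_guest_wbinvd() are
hypothetical names used only for illustration; wbinvd_on_all_cpus()
and pr_info_ratelimited() are existing kernel helpers, but this is not
how KVM actually wires up its wbinvd exit handling.]

#include <linux/printk.h>
#include <asm/smp.h>

/* Hypothetical policy, controllable by user space (e.g. via an ioctl). */
enum wbinvd_mode {
	WBINVD_EMULATE,		/* really flush: wbinvd on all host CPUs */
	WBINVD_SKIP,		/* silently ignore the instruction */
	WBINVD_SKIP_REPORT,	/* ignore, but leave a trace */
};

static enum wbinvd_mode wbinvd_mode = WBINVD_EMULATE;

/* Called when the guest executes WBINVD and the hardware traps it. */
static void handle_guest_wbinvd(void)
{
	switch (wbinvd_mode) {
	case WBINVD_EMULATE:
		wbinvd_on_all_cpus();	/* heavy: flushes every cache in the system */
		break;
	case WBINVD_SKIP_REPORT:
		pr_info_ratelimited("ignoring guest wbinvd\n");
		/* fall through */
	case WBINVD_SKIP:
		break;
	}
}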