Re: [PATCH 0/2] Fix for a very old KVM bug in the segment cache

On 7/13/24 03:38, Maxim Levitsky wrote:
> 1. Getting rid of the segment cache. I am not sure how much it helps
> these days - this code is very old.
>
> 2. Using a read/write lock - IMHO the cleanest solution but might
> also affect performance.

A read/write lock would cause a deadlock between the writer and the sched_out callback, since they run on the same CPU.

I think the root cause of the issue is that clearing the cache should be done _after_ the writes (and should have a barrier() at the beginning, if only for cleanliness). So your patch 1 should leave the clearing of vmx->segment_cache.bitmask where it was.

However, that still leaves an assumption: that it is okay for a sched_out during vmx_vcpu_reset() (or any other function that writes segment data in the VMCS) to access stale data, as long as the stale data is not used after vmx_vcpu_reset() returns. Your patch is the safer approach, but maybe wrap the preempt_disable()/preempt_enable() pair in helpers such as

	static void vmx_invalidate_segment_cache_start(struct vcpu_vmx *vmx)
	{
		preempt_disable();
	}

	static void vmx_invalidate_segment_cache_end(struct vcpu_vmx *vmx)
	{
		vmx->segment_cache.bitmask = 0;
		preempt_enable();
	}

Paolo




