Hi Sean,

> > > diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> > > index 75fa6dd268f0..c2fe89ecdb2d 100644
> > > --- a/arch/x86/kvm/svm/sev.c
> > > +++ b/arch/x86/kvm/svm/sev.c
> > > @@ -465,6 +465,7 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
> > >  		page_virtual = kmap_atomic(pages[i]);
> > >  		clflush_cache_range(page_virtual, PAGE_SIZE);
> > >  		kunmap_atomic(page_virtual);
> > > +		cond_resched();
> >
> > If you add cond_resched() here, the frequency (once per 4K) might be
> > too high. You may want to do it once per X pages, where X could be
> > something like 1G/4K?
>
> No, every iteration is perfectly ok. The "cond"itional part means that this will
> reschedule if and only if it actually needs to be rescheduled, e.g. if the task's
> timeslice has expired. The check for a needed reschedule is cheap; using
> cond_resched() in tight-ish loops is ok and intended, e.g. KVM does a resched
> check prior to entering the guest.

Please double-check the code. I don't think the concern is the flag check
itself; branch prediction should handle that well. The concern is the call to
cond_resched(): depending on the kernel configuration, cond_resched() may not
be inlined, at least that is my understanding so far. If that is the case, it
still might not be best to call cond_resched() once per 4K page.
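
For illustration only, here is a minimal sketch of the batched variant I had
in mind (resched check once per X pages). The helper name
sev_clflush_pages_batched and the batch size of 64 are made up for the
example; they are not from the patch or from this thread.

/*
 * Hypothetical sketch, not the actual patch: only check for a needed
 * reschedule every RESCHED_BATCH pages, so the (possibly out-of-line)
 * call to cond_resched() is amortized across many flushes.
 * RESCHED_BATCH is an arbitrary illustrative value.
 */
#define RESCHED_BATCH	64

static void sev_clflush_pages_batched(struct page *pages[], unsigned long npages)
{
	unsigned long i;
	void *page_virtual;

	if (npages == 0 || pages == NULL)
		return;

	for (i = 0; i < npages; i++) {
		page_virtual = kmap_atomic(pages[i]);
		clflush_cache_range(page_virtual, PAGE_SIZE);
		kunmap_atomic(page_virtual);

		/* Resched check once per batch instead of once per 4K page. */
		if ((i + 1) % RESCHED_BATCH == 0)
			cond_resched();
	}
}

Whether the extra branch per iteration is actually cheaper than an
unconditional call to cond_resched() is exactly the question; I am only
sketching the alternative, not claiming it wins.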