Re: [PATCH v2 2/2] KVM: SEV: Prefer WBNOINVD over WBINVD for cache maintenance efficiency

On Tue, Jan 14, 2025 at 8:13 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Tue, Jan 14, 2025, Kirill A. Shutemov wrote:
> > On Mon, Jan 13, 2025 at 10:47:57AM -0800, Kevin Loughlin wrote:
> > > On Fri, Jan 10, 2025 at 12:23 AM Kirill A. Shutemov
> > > <kirill.shutemov@xxxxxxxxxxxxxxx> wrote:
> > > >
> > > > On Thu, Jan 09, 2025 at 10:55:33PM +0000, Kevin Loughlin wrote:
> > > > > @@ -710,6 +711,14 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
> > > > >       }
> > > > >  }
> > > > >
> > > > > +static void sev_wb_on_all_cpus(void)
>
> In anticipation of landing the targeted flushing optimizations[*] in the near-ish
> future, I would prefer to avoid the "on_all_cpus" aspect of the name.
>
> Maybe sev_writeback_caches(), to kinda sorta pair with sev_clflush_pages()?
> Definitely open to other names.
>
> [*] https://lore.kernel.org/all/20240411140445.1038319-1-szy0127@xxxxxxxxxxx

Ack, will do.
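
For reference, a sketch of the renamed helper, assuming it otherwise keeps
the shape of the v2 hunk quoted above (actual form to follow in v3):

	static void sev_writeback_caches(void)
	{
		if (boot_cpu_has(X86_FEATURE_WBNOINVD))
			wbnoinvd_on_all_cpus();
		else
			wbinvd_on_all_cpus();
	}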

> > > > > +{
> > > > > +     if (boot_cpu_has(X86_FEATURE_WBNOINVD))
> > > > > +             wbnoinvd_on_all_cpus();
> > > > > +     else
> > > > > +             wbinvd_on_all_cpus();
> > > >
> > > > I think the X86_FEATURE_WBNOINVD check should be inside wbnoinvd().
> > > > wbnoinvd() should fall back to WBINVD if the instruction is not supported
> > > > rather than trigger #UD.
> > >
> > > I debated this as well and am open to doing it that way. One argument
> > > against silently falling back to WBINVD within wbnoinvd() (in general,
> > > not referring to this specific case) is that frequent WBINVD can cause
> > > soft lockups, whereas WBNOINVD is much less likely to do so. As such,
> > > there are potentially use cases where falling back to WBINVD would be
> > > undesirable (and would be non-obvious behavior to the programmer
> > > calling wbnoinvd()), which is why I left the decision outside
> > > wbnoinvd().
>
> Practically speaking, I highly doubt there will ever be a scenario where not
> falling back to WBINVD is a viable option.  The alternatives I see are:
>
>   1. Crash
>   2. Data Corruption
>   3. CLFLUSH{OPT}
>
> (1) and (2) are laughably bad, and (3) would add significant complexity in most
> situations (to enumerate all addresses), and would be outright impossible in some.
>
> And if someone opts for WBNOINVD, they darn well better have done their homework,
> because there should be precious few reasons to need to flush all caches, and as
> evidenced by lack of usage in the kernel, even fewer reasons to write back dirty
> data while preserving entries.  I don't think it's at all unreasonable to put the
> onus on future developers to look at the implementation of wbnoinvd() and
> understand the implications.

Yeah, you and Kirill have convinced me that falling back to WBINVD is
the way to go. I'll do that in v3 shortly; thanks!
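
For the record, something along these lines is what I have in mind (a
minimal sketch, not the actual v3 patch; whether to open-code the feature
check like this or use an alternative() is still up in the air):

	static __always_inline void wbnoinvd(void)
	{
		/*
		 * WBNOINVD is an F3-prefixed WBINVD; "rep; wbinvd" encodes it
		 * without requiring assembler support for the mnemonic.  Fall
		 * back to plain WBINVD when the CPU doesn't enumerate WBNOINVD.
		 */
		if (static_cpu_has(X86_FEATURE_WBNOINVD))
			asm volatile("rep; wbinvd" : : : "memory");
		else
			wbinvd();
	}

With that, sev_writeback_caches() can call wbnoinvd_on_all_cpus()
unconditionally and let each CPU degrade to WBINVD as needed.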




