Re: [PATCH v4] RISC-V: Show accurate per-hart isa in /proc/cpuinfo

On Thu, Aug 24, 2023 at 03:06:53PM -0700, Evan Green wrote:
> On Thu, Aug 24, 2023 at 10:29 AM Conor Dooley <conor@xxxxxxxxxx> wrote:
> > On Thu, Aug 24, 2023 at 09:18:16AM -0700, Evan Green wrote:
> > > On Thu, Aug 24, 2023 at 5:20 AM Conor Dooley <conor.dooley@xxxxxxxxxxxxx> wrote:
> > > > On Tue, Jul 11, 2023 at 01:18:30PM -0700, Evan Green wrote:

> > > > > +"isa" vs "hart isa" lines in /proc/cpuinfo
> > > > > +------------------------------------------
> > > > > +
> > > > > +The "isa" line in /proc/cpuinfo describes the lowest common denominator of
> > > > > +RISC-V ISA extensions understood by the kernel and implemented on all harts. The
> > > > > +"hart isa" line, in contrast, describes the set of extensions understood by the
> > > > > +kernel on the particular hart being described, even if those extensions may not
> > > > > +be present on all harts in the system. The "hart isa" line is consistent with
> > > > > +what's returned by __riscv_hwprobe() when querying for that specific CPU.
> > > >
> > > > Thinking about this again, I don't think this is true. hwprobe uses
> > > > has_fpu(), has_vector() etc that interact with Kconfig options but the
> > > > percpu isa bitmap isn't affected by these.
> > >
> > > Ugh, yeah, it's kind of a mishmash, isn't it? hwprobe_isa_ext0() uses the
> > > lowest common denominator for FD, C, and V, but per-hart info for
> > > Zba, Zbb, and Zbs. Given the interface, per-hart info seems like what we
> > > should have done there, and FD, C, and V were my bad. The good
> > > news is we can define new bits that do the right thing, though maybe
> > > we should wait until someone actually wants them. For this patch we
> > > should just remove this sentence. We can also correct the hwprobe
> > > documentation to mention the shortcoming in FD, C, and V.
> >
> > I'm not really sure it's all that much of a shortcoming for V or FD,
> > since without the kernel support you shouldn't be using those extensions
> > anyway. A hwprobe thing for that sounds like a footgun to me & I think
> > the current behaviour is how it should be for these extensions.
> > It not being per-cpu is arguably a bug, I suppose? But I would contend
> 
> Yeah it was mostly the not being per-cpu I was pointing to in my previous email.
> 
> > that we are conveying support for the extension on a per-hart level,
> > with it then also gated by the kernel supporting V or FD, which is on a
> > system-wide basis.
> > Any other extensions that require Kconfig-gated kernel support should
> > also not report via hwprobe that the extension is supported when the
> > Kconfig option is disabled. It just so happens that the set of
> > extensions that hwprobe supports that are Kconfig-gated and those that
> > require all-hart support are one and the same right now, so we can kinda
> > just conflate the two & use has_vector() et al., which handle both
> > Kconfig-gating and all-hart support. Until something comes along that needs
> > anything different, I'd leave well enough alone for hwprobe...
> 
> Sounds good.
> 
> >
> > > Palmer, do you want a spin of this patch or a followup on top to
> > > remove the above sentence?
> >
> > It's not actually been applied yet, right?
> >
> > Do you want to have this new thing in cpuinfo tell the user "this hart
> > has xyz extensions that are supported by a kernel, but maybe not this
> > kernel" or to tell the user "this hart has xyz extensions that are
> > supported by this kernel"? Your text above says "understood by the
> > kernel", but I think that's a poor definition that needs to be improved
> > to spell out exactly what you mean. IOW does "understood" mean the
> > kernel will parse them into a structure, or does it mean "yes you can
> > use this extension on this particular hart".
> 
> I'm imagining /proc/cpuinfo being closer to "the CPU has it and the
> kernel at least vaguely understands it, but may not have full support
> for it enabled". I'm assuming /proc/cpuinfo is mostly used by 1)
> humans wanting to know if they have hardware support for a feature,
> and 2) administrators collecting telemetry to manage fleets (ie do I
> and 2) administrators collecting telemetry to manage fleets (i.e., do I
> see "is the kernel support for this feature ready right now" would
> ideally not be parsing /proc/cpuinfo text, as more direct mechanisms
> like specific hwprobe bits for "am I fully ready to go" would be
> easier to work with. Feel free to yell at me if this overall vision
> easier to work with. Feel free to yell at me if this overall vision
> seems flawed.
> 
> I tried to see whether there was a consensus among the other
> architectures. AArch64 seems to go with "supported and fully enabled",
> as their cpu_has_feature() directly tests elf_hwcap, and elements in
> arm64_elf_hwcaps[] are Kconfig-gated. x86 is complicated, but IIRC it's
> more along the lines of "hardware has it". They have two macros:
> cpu_has() for "raw capability" and cpu_feature_enabled() for "kernel
> can do it too", and they use cpu_has() for the /proc/cpuinfo flags.

I'm fine with the per-cpu stuff meaning "the hardware has it and a kernel,
but not necessarily this one, supports it" - just please make the
documentation clear about it.

Attachment: signature.asc
Description: PGP signature

