On 2/27/24 18:12, maobibo wrote:
On 2024/2/27 5:10 PM, WANG Xuerui wrote:
On 2/27/24 11:14, maobibo wrote:
On 2024/2/27 4:02 AM, Jiaxun Yang wrote:
On Feb 26, 2024, at 8:04 AM, maobibo wrote:
On 2024/2/26 2:12 PM, Huacai Chen wrote:
On Mon, Feb 26, 2024 at 10:04 AM maobibo <maobibo@xxxxxxxxxxx> wrote:
On 2024/2/24 5:13 PM, Huacai Chen wrote:
Hi, Bibo,
On Thu, Feb 22, 2024 at 11:28 AM Bibo Mao <maobibo@xxxxxxxxxxx> wrote:
The cpucfg instruction can be used to get processor features, and it
traps with an exception when executed in VM mode, so it can also be
used to provide CPU features to a VM. On real hardware only cpucfg
areas 0 - 20 are used. Here a dedicated area 0x40000000 - 0x400000ff
is reserved for the KVM hypervisor to provide PV features, and the
area can be extended for other hypervisors in the future. This area
will never be used by real HW; it is only used by software.
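
For illustration, a minimal sketch of how a guest could probe this
area; CPUCFG_KVM_BASE and the "nonzero means hypervisor present"
convention are assumptions, not part of this patch:

#define CPUCFG_KVM_BASE		0x40000000UL

static inline unsigned int read_cpucfg(unsigned long reg)
{
	unsigned int val;

	/* On real HW an undefined config word reads back as all zeroes */
	__asm__ __volatile__("cpucfg %0, %1" : "=r" (val) : "r" (reg));
	return val;
}

static inline int kvm_para_available(void)
{
	return read_cpucfg(CPUCFG_KVM_BASE) != 0;
}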
After reading and thinking, I find that the hypercall method used in
our production kernel is better than this cpucfg method, because
hypercall is simpler and more straightforward, and we don't need to
worry about conflicting with the real hardware.
No, I do not think so. cpucfg is simpler than hypercall; hypercall
only takes effect when the system runs in guest mode. In some
scenarios, like TCG mode, hypercall is an illegal instruction,
whereas cpucfg still works.
Nearly all architectures use hypercall except x86, for its historical
reasons.
Only x86 supports multiple hypervisors; no other architecture does.
That is an advantage, not a historical accident.
I do believe that none of this stuff should be exposed to guest user
space, for security reasons.
Can you add PLV checking when cpucfg 0x40000000 - 0x400000FF is
emulated? If the guest is in user mode the return value is zero; if
it is in kernel mode the emulated value is returned. That avoids
leaking information.
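
Something like the following sketch; kvm_emu_cpucfg(),
kvm_read_guest_csr() and kvm_pv_feature_word() are made-up names
here, and the real hook would sit in the LoongArch KVM exit handler.
PLV is bits [1:0] of CSR.CRMD, with PLV0 being kernel mode:

#define CSR_CRMD_PLV_MASK	0x3UL

static unsigned long kvm_emu_cpucfg(struct kvm_vcpu *vcpu, unsigned long reg)
{
	unsigned long crmd = kvm_read_guest_csr(vcpu, LOONGARCH_CSR_CRMD);

	/* Guest user mode: read back all zeroes, leak nothing */
	if ((crmd & CSR_CRMD_PLV_MASK) != 0)
		return 0;

	/* Guest kernel mode: return the emulated PV feature word */
	return kvm_pv_feature_word(vcpu, reg);
}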
I've suggested this approach in another reply [1], but I've rechecked
the manual, and it turns out this behavior is not permitted by the
current wording. See LoongArch Reference Manual v1.10, Volume 1,
Section 2.2.10.5 "CPUCFG":
> CPUCFG 访问未定义的配置字将读回全 0 值。
>
> Reads of undefined CPUCFG configuration words shall return all-zeroes.
This sentence makes no distinction based on privilege mode, so it can
only mean the behavior applies universally, regardless of privilege
mode.
I think if you want to make CPUCFG behavior PLV-dependent, you may
have to ask the LoongArch spec editors, internally or in public, for a
new spec revision.
No, the CPUCFG behavior for CPUCFG0 - CPUCFG21 is unchanged; only the
area at CPUCFG 0x40000000 can be defined by software, since it is
used by software.
The 0x40000000 range is not mentioned in the manuals. I know you've
confirmed it privately with the HW team, but this needs to be
properly documented so that public projects can rely on it.
(There are already multiple third-party LoongArch implementers as of
late 2023, so any ISA-level change like this would best be
coordinated, to minimize surprises.)
In the Intel SDM Vol. 4 (page 4-23):
https://www.intel.com/content/dam/develop/external/us/en/documents/335592-sdm-vol-4.pdf
there is one line: "MSR address range between 40000000H - 400000FFH
is marked as a specially reserved range. All existing and future
processors will not implement any features using any MSR in this
range."
Thanks for providing this info; now at least we know why this
specific range of 0x400000XX was chosen.
It only says that the range is reserved; it does not specify detailed
software behavior. Software behavior is defined by each hypervisor,
for example:
https://github.com/MicrosoftDocs/Virtualization-Documentation/blob/main/tlfs/Requirements%20for%20Implementing%20the%20Microsoft%20Hypervisor%20Interface.pdf
https://kb.vmware.com/s/article/1009458
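
As a concrete x86 example of software-defined behavior inside that
reserved range, Hyper-V places HV_X64_MSR_GUEST_OS_ID at its very
start (0x40000000, per the Hyper-V TLFS); a kernel-style sketch,
meaningful only when running as a Hyper-V guest:

#define HV_X64_MSR_GUEST_OS_ID	0x40000000

static u64 hv_read_guest_os_id(void)
{
	u64 val;

	/* rdmsrl() traps to the hypervisor, which emulates the MSR */
	rdmsrl(HV_X64_MSR_GUEST_OS_ID, val);
	return val;
}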
If the hypercall method is used, there should also be an ABI, like
aarch64's:
https://documentation-service.arm.com/static/6013e5faeee5236980d08619
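
For illustration, a LoongArch hypercall wrapper in the SMCCC style
(function ID in a0, arguments in a1.., result back in a0) might look
like the sketch below; the HVCL immediate and the register convention
are assumptions, not a published ABI:

static inline long kvm_hypercall1(unsigned long fid, unsigned long arg0)
{
	register unsigned long a0 __asm__("$a0") = fid;
	register unsigned long a1 __asm__("$a1") = arg0;

	/* HVCL traps from guest to hypervisor (LVZ extension) */
	__asm__ __volatile__(
		"hvcl 0"
		: "+r" (a0)
		: "r" (a1)
		: "memory");

	return a0;
}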
Yes, proper documentation of the public API surface is always
necessary *before* doing real work. Since right now the hypercall
provider is Linux KVM, maybe we can document the existing and planned
hypercall usage and ABI in the kernel docs along with the code
changes.
--
WANG "xen0n" Xuerui
Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/