On Tue, May 03, 2022 at 06:23:40PM -0400, Qian Cai wrote:
> On Tue, Apr 19, 2022 at 12:22:08PM +0100, Mark Brown wrote:
> > This series provides initial support for the ARMv9 Scalable Matrix
> > Extension (SME). SME takes the approach used for vectors in SVE and
> > extends this to provide architectural support for matrix operations. A
> > more detailed overview can be found in [1].
>
> Set CONFIG_ARM64_SME=n fixed a warning while running libhugetlbfs tests.
>
> 	/*
> 	 * There are several places where we assume that the order value is sane
> 	 * so bail out early if the request is out of bound.
> 	 */
> 	if (unlikely(order >= MAX_ORDER)) {
> 		WARN_ON_ONCE(!(gfp & __GFP_NOWARN));
> 		return NULL;
> 	}

Ugh, right. These variable-sized register sets really don't map entirely
cleanly onto the ptrace interface, but now you point it out, what the code
has there is going to give a rather larger number than is sensible. Not
fully checked, but does the below fix things? Thanks for your testing with
this stuff, it's been really helpful.

diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 47d8a7472171..08c1cb43cf33 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -1447,8 +1447,8 @@ static const struct user_regset aarch64_regsets[] = {
 	},
 	[REGSET_ZA] = { /* SME ZA */
 		.core_note_type = NT_ARM_ZA,
-		.n = DIV_ROUND_UP(ZA_PT_ZA_SIZE(SVE_VQ_MAX), SVE_VQ_BYTES),
-		.size = SVE_VQ_BYTES,
+		.n = 1,
+		.size = ZA_PT_SIZE(SVE_VQ_MAX),
 		.align = SVE_VQ_BYTES,
 		.regset_get = za_get,
 		.set = za_set,
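
For anyone trying to connect the regset table to the warning above: the
check Qian quoted fires when a single contiguous allocation needs an order
of MAX_ORDER or more. The snippet below is a quick userspace sketch of that
order arithmetic, not kernel code; order_for() is a made-up stand-in for
the kernel's get_order(), and it assumes 4 KiB pages and a MAX_ORDER of 11,
which is the common default. The 64 MiB entry is just an arbitrary example
of a "rather larger number than is sensible", so treat the output as
illustrative only.

#include <stdio.h>

#define PAGE_SHIFT	12		/* assumed: 4 KiB pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define MAX_ORDER	11		/* assumed: common default */

/* Rough stand-in for the kernel's get_order(): how many pages the request
 * needs, rounded up to a power of two, expressed as a buddy order. */
static unsigned int order_for(unsigned long bytes)
{
	unsigned long pages = (bytes + PAGE_SIZE - 1) >> PAGE_SHIFT;
	unsigned int order = 0;

	while ((1UL << order) < pages)
		order++;
	return order;
}

int main(void)
{
	/* One page, the largest order-(MAX_ORDER - 1) request, and a
	 * deliberately oversized buffer. */
	unsigned long sizes[] = { 4096, 4UL << 20, 64UL << 20 };
	unsigned int i;

	for (i = 0; i < 3; i++) {
		unsigned int order = order_for(sizes[i]);

		printf("%10lu bytes -> order %2u%s\n", sizes[i], order,
		       order >= MAX_ORDER ? " (>= MAX_ORDER, would WARN)" : "");
	}
	return 0;
}

The exact threshold depends on the page size and the configured MAX_ORDER,
so the numbers above are only meant to show the shape of the problem.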