On 12/7/20 3:16 PM, Christian Borntraeger wrote:
> On 07.12.20 21:13, Collin Walling wrote:
>> On 12/7/20 3:09 PM, Christian Borntraeger wrote:
>>> On 07.12.20 21:06, Collin Walling wrote:
>>>> On 12/7/20 2:32 PM, Christian Borntraeger wrote:
>>>>> On 07.12.20 16:41, Collin Walling wrote:
>>>>>> The DIAGNOSE 0x0318 instruction, unique to s390x, is a privileged call
>>>>>> that must be intercepted via SIE, handled in userspace, and the
>>>>>> information set by the instruction is communicated back to KVM.
>>>>>>
>>>>>> To test the instruction interception, an ad-hoc handler is defined which
>>>>>> simply has a VM execute the instruction and then userspace will extract
>>>>>> the necessary info. The handler is defined such that the instruction
>>>>>> invocation occurs only once. It is up to the caller to determine how the
>>>>>> info returned by this handler should be used.
>>>>>>
>>>>>> The diag318 info is communicated from userspace to KVM via a sync_regs
>>>>>> call. This is tested during a sync_regs test, where the diag318 info is
>>>>>> requested via the handler, then the info is stored in the appropriate
>>>>>> register in KVM via a sync registers call.
>>>>>>
>>>>>> If KVM does not support diag318, then the tests will print a message
>>>>>> stating that diag318 was skipped, and the asserts will simply test
>>>>>> against a value of 0.
>>>>>>
>>>>>> Signed-off-by: Collin Walling <walling@xxxxxxxxxxxxx>
>>>>>
>>>>> Interestingly enough, this testcase actually triggers a bug:
>>>>> while we gracefully handle this (no crash),
>>>>>
>>>>>   debugfs: Directory 'kvm-200206' with parent 's390dbf' already present!
>>>>>
>>>>> is certainly not ideal....
>>>>
>>>> Odd... I wonder what triggered this behavior?
>>>>
>>>> I run my tests with a simple command:
>>>>
>>>>   make summary=0 TARGETS=kvm kselftest
>>>>
>>>> This must have something to do with spinning up another VM to get the
>>>> diag318 data. I think if I have the sync_regs test call the diag handler
>>>> first, and then have the sync_regs test create a VM, that might solve
>>>> that issue...
>>>
>>> Yes, the s390dbf code will try to create a file named kvm-%pid. With
>>> 2 VMs the 2nd one fails. Luckily the VM will be created anyway, and the
>>> shutdown seems to be fine as well, still....
>>>
>>>> May I ask how you encountered this bug so I may replicate it on my end?
>>>
>>> I just did
>>>
>>>   make TARGETS=kvm selftests
>>>
>>> and then the error is in dmesg.
>>
>> Thanks. v5 with fix incoming.
>
> I think the test is actually fine and we should rather fix the kvm module
> to gracefully handle a userspace that starts up 2 or more guests.

Looks like the root cause is within the inode.c file used for the debug
filesystem. Essentially, the 2nd VM starts/ends just fine, as you
mentioned, but doesn't get a dbfs entry.

Since this touches the dbfs-related area, and I'm unsure how common this
problem is out in the wild, should we ping the kernel list and see if it
catches anyone's attention?

A first thought would be to append a per-userspace incrementing value to
the end of the kvm-%pid part to account for any collisions, but that's up
to the folks that know more than I do. Rough sketches of the test flow and
of that naming idea follow below the signature.

-- 
Regards,
Collin

Stay safe and stay healthy
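
For reference, the interception flow described in the patch boils down to
roughly the following. This is only a sketch in the style of the kvm
selftest framework (vm_create_default(), vcpu_run(), vcpu_state()), not
the patch itself; the constants, names, and assertions are illustrative:

#include "test_util.h"
#include "kvm_util.h"

#define VCPU_ID			6
#define ICPT_INSTRUCTION	0x04
#define IPA0_DIAG		0x8300

static void guest_code(void)
{
	uint64_t diag318_info = 0x12345678;

	/* Privileged call: SIE intercepts it and exits to userspace. */
	asm volatile ("diag %0,0,0x318\n" : : "d" (diag318_info));
}

static uint64_t get_diag318_info(void)
{
	struct kvm_vm *vm = vm_create_default(VCPU_ID, 0, guest_code);
	struct kvm_run *run;
	uint64_t reg, info;

	vcpu_run(vm, VCPU_ID);
	run = vcpu_state(vm, VCPU_ID);

	TEST_ASSERT(run->exit_reason == KVM_EXIT_S390_SIEIC,
		    "DIAGNOSE 0x0318 was not intercepted");
	TEST_ASSERT(run->s390_sieic.icptcode == ICPT_INSTRUCTION,
		    "unexpected intercept code");
	TEST_ASSERT((run->s390_sieic.ipa & 0xff00) == IPA0_DIAG,
		    "not a DIAGNOSE intercept");

	/* The first-operand register number sits in bits 4-7 of the IPA. */
	reg = (run->s390_sieic.ipa & 0x00f0) >> 4;
	info = run->s.regs.gprs[reg];

	kvm_vm_free(vm);
	return info;
}

The caller (e.g. the sync_regs test) can then hand the returned value back
to KVM via the sync registers call, or assert against 0 when KVM does not
report the capability, as described above.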
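
As for appending an incrementing value to the debug name, something along
these lines is the idea. This is a hypothetical sketch, not actual
kvm-s390 code; the helper and counter names are made up:

#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/sched.h>

/* Hypothetical sequence number appended to the per-VM s390dbf name. */
static atomic_t kvm_dbf_seq = ATOMIC_INIT(0);

static void kvm_s390_dbf_name(char *name, size_t len)
{
	/*
	 * Today the name is effectively "kvm-%pid", which collides when
	 * a single process creates a second VM. Appending a counter
	 * keeps the names unique, e.g. "kvm-200206-1", "kvm-200206-2".
	 */
	snprintf(name, len, "kvm-%d-%d", current->pid,
		 atomic_inc_return(&kvm_dbf_seq));
}

A single global counter is the simplest variant; a per-process counter
would keep the numbers small, but either way the collision goes away.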