On 2019/9/13 19:12, Laurent Dufour wrote:
> On 08/09/2019 at 10:31, zhong jiang wrote:
>> Hi, Laurent, Vinayak
>>
>> I have got the following crash on a 4.14 kernel with speculative page faults enabled.
>> Unfortunately, the issue disappears when disabling SPF.
>
> Hi Zhong,
>
> Sorry for the late answer, I was busy at the LPC.
>
> I never hit that.
>
> Are there any steps identified that lead to this crash?
>
This situation is strange to me. The issue hasn't come up recently. I just run test cases in user space, nothing special, and I don't know why it disappears.

It is always a NULL pointer dereference when the panic shows up, but I don't see anything suspicious in the code. I also tried to construct some cases that race the SPF path against thread exit, but I failed to reproduce the issue (a rough sketch of the kind of test I mean is appended at the end of this mail).

Thanks,
zhong jiang

> Thanks,
> Laurent.
>
>> The call trace is as follows.
>>
>> Unable to handle kernel NULL pointer dereference at virtual address 00000000
>> user pgtable: 4k pages, 39-bit VAs, pgd = ffffffc177337000
>> [0000000000000000] *pgd=0000000177346003, *pud=0000000177346003, *pmd=0000000000000000
>> Internal error: Oops: 96000046 [#1] PREEMPT SMP
>>
>> CPU: 0 PID: 3184 Comm: Signal Catcher VIP: 00 Tainted: G O 4.14.116 #1
>> PC is at __rb_erase_color+0x54/0x260
>> LR is at anon_vma_interval_tree_remove+0x2ac/0x2c0
>>
>> Call trace:
>> [<ffffff8009aa45c4>] __rb_erase_color+0x54/0x260
>> [<ffffff80083a73f8>] anon_vma_interval_tree_remove+0x2ac/0x2c0
>> [<ffffff80083b96ac>] unlink_anon_vmas+0x84/0x170
>> [<ffffff80083aa8f4>] free_pgtables+0x9c/0x100
>> [<ffffff80083b6814>] exit_mmap+0xb0/0x1d8
>> [<ffffff8008227e8c>] mmput+0x3c/0xe0
>> [<ffffff800822ed00>] do_exit+0x2f0/0x954
>> [<ffffff800822f41c>] do_group_exit+0x88/0x9c
>> [<ffffff800823b768>] get_signal+0x360/0x56c
>> [<ffffff8008208eb8>] do_notify_resume+0x150/0x5e4
>> Exception stack(0xffffffc1eac07ec0 to 0xffffffc1eac08000)
>>
>> It seems the rb_node is unexpectedly empty under the anon_vma rwsem when the process is exiting.
>> I have no idea whether there is any race that results in the issue.
>>
>> Let me know if you have hit the issue or have any suggestions.
>>
>> Thanks,
>> zhong jiang
>>
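
Appendix: a minimal sketch of the kind of user-space stress I mean by "race between the SPF path and thread exit". It is only an illustration, not the exact test I ran: the file name, thread count, mapping size and timing below are arbitrary choices. Worker threads keep faulting anonymous memory while the whole process is killed by a fatal signal, so the exit path seen in the trace (get_signal -> do_group_exit -> do_exit -> mmput -> exit_mmap -> unlink_anon_vmas) runs while faults may still be in flight.

/*
 * spf_exit_race.c: worker threads repeatedly fault anonymous memory
 * (exercising the page fault / SPF path) while the main thread kills
 * the whole process with SIGKILL, so exit_mmap() and unlink_anon_vmas()
 * run concurrently with in-flight faults.
 *
 * Build: gcc -O2 -pthread spf_exit_race.c -o spf_exit_race
 */
#include <pthread.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define NTHREADS 8
#define MAP_SIZE (64UL << 20)	/* 64 MB of anonymous memory per iteration */

static void *fault_loop(void *arg)
{
	(void)arg;
	for (;;) {
		char *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			continue;
		/* Touch every page so each access goes through the fault path. */
		for (size_t off = 0; off < MAP_SIZE; off += 4096)
			p[off] = 1;
		munmap(p, MAP_SIZE);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	int i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, fault_loop, NULL);

	/*
	 * Let the workers run for a random short while, then die by signal
	 * so the exit path races against the faulting threads.
	 */
	usleep(10000 + rand() % 50000);
	kill(getpid(), SIGKILL);
	return 0;
}

Running something like this in a loop from the shell (while :; do ./spf_exit_race; done) did not trigger the oops for me.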