Re: null-ptr deref found in netfs code

On Thu, Dec 5, 2024 at 10:17 AM Shyam Prasad N <nspmangalore@xxxxxxxxx> wrote:
>
> On Thu, Dec 5, 2024 at 2:20 AM ronnie sahlberg <ronniesahlberg@xxxxxxxxx> wrote:
> >
> > If you run the kernel under gdb and set a breakpoint on panic, you
> > should be able to see the exact line where it failed, and you will
> > also be able to inspect the variables and their values.
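> >
> > For example, with a QEMU/KVM guest started with -s (gdbserver on
> > localhost:1234; the vmlinux path, frame number and variable name
> > below are only illustrative):
> >
> >     $ gdb vmlinux
> >     (gdb) target remote :1234
> >     (gdb) break panic
> >     (gdb) continue
> >     ... reproduce the crash ...
> >     (gdb) bt
> >     (gdb) frame 3
> >     (gdb) print *subreq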
> >
> > On Thu, 5 Dec 2024 at 02:31, Shyam Prasad N <nspmangalore@xxxxxxxxx> wrote:
> > >
> > > On Wed, Dec 4, 2024 at 10:00 PM Shyam Prasad N <nspmangalore@xxxxxxxxx> wrote:
> > > >
> > > > On Tue, Dec 3, 2024 at 4:44 PM David Howells <dhowells@xxxxxxxxxx> wrote:
> > > > >
> > > > > Okay, I think I see the problem.
> > > > >
> > > > > Looking at the following extraction from the trace:
> > > > >
> > > > > > netfs_rreq_ref: R=0000290e NEW         r=1
> > > > > > netfs_read: R=0000290e READAHEAD c=00000000 ni=0 s=85e00000 l=800000 sz=280000000
> > > > > > netfs_folio: i=f1c2900000000000 ix=85e00-85fff read
> > > > > > netfs_folio: i=f1c2900000000000 ix=86000-861ff read
> > > > > > netfs_folio: i=f1c2900000000000 ix=86200-863ff read
> > > > > > netfs_folio: i=f1c2900000000000 ix=86400-865ff read
> > > > >
> > > > > We're requesting reads of four folios, each consisting of 512 pages,
> > > > > for a total of 8MiB.
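> > > > >
> > > > > (Sanity-checking the arithmetic, assuming 4KiB pages: 512 pages x
> > > > > 4KiB = 2MiB per folio, i.e. 0x200000 bytes, and 4 folios x 2MiB =
> > > > > 8MiB, matching l=800000 above; each ix range, e.g. 85e00-85fff,
> > > > > covers 0x200 = 512 page indices.)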
> > > > >
> > > > > > netfs_sreq: R=0000290e[1] DOWN SUBMT f=02 s=85e00000 0/100000 e=0
> > > > > > netfs_sreq: R=0000290e[2] DOWN SUBMT f=02 s=85f00000 0/100000 e=0
> > > > > > netfs_sreq: R=0000290e[3] DOWN SUBMT f=02 s=86000000 0/100000 e=0
> > > > > > netfs_sreq: R=0000290e[4] DOWN SUBMT f=02 s=86100000 0/100000 e=0
> > > > > > netfs_sreq: R=0000290e[5] DOWN SUBMT f=02 s=86200000 0/100000 e=0
> > > > > > netfs_sreq: R=0000290e[6] DOWN SUBMT f=02 s=86300000 0/100000 e=0
> > > > > > netfs_sreq: R=0000290e[7] DOWN SUBMT f=02 s=86400000 0/100000 e=0
> > > > > > netfs_sreq: R=0000290e[8] DOWN SUBMT f=02 s=86500000 0/100000 e=0
> > > > >
> > > > > That got broken down into 8 submissions, each for a 1MiB slice.
> > > > >
> > > > > > netfs_sreq: R=0000290e[1] DOWN IO    f=02 s=85e00000 100000/100000 e=0
> > > > > > netfs_progress: R=0000290e[01] s=85e00000 ct=0/100000 pa=100000/100000 sl=0
> > > > > > netfs_donate: R=0000290e[01] -> [02] to-next am=100000
> > > > >
> > > > > Subrequest 1 completed, but wasn't large enough to cover a whole folio, so it
> > > > > donated its coverage forwards to subreq 2.
> > > > >
> > > > > > netfs_sreq: R=0000290e[6] DOWN IO    f=02 s=86300000 100000/100000 e=0
> > > > > > netfs_progress: R=0000290e[06] s=86300000 ct=0/100000 pa=100000/100000 sl=2
> > > > > > netfs_donate: R=0000290e[06] -> [05] tail-to-prev am=100000
> > > > >
> > > > > Subrequest 6 completed, but wasn't large enough to cover a whole folio, so it
> > > > > donated its coverage backwards to subreq 5.
> > > > >
> > > > > > netfs_sreq: R=0000290e[2] DOWN IO    f=02 s=85f00000 100000/100000 e=0
> > > > > > netfs_progress: R=0000290e[02] s=85e00000 ct=0/200000 pa=200000/200000 sl=0
> > > > > > netfs_folio: i=f1c2900000000000 ix=85e00-85fff read-done
> > > > > > netfs_folio: i=f1c2900000000000 ix=85e00-85fff read-unlock
> > > > >
> > > > > Subreq 2 completed, and with the donation from subreq 1, had sufficient
> > > > > data to unlock the first folio.
> > > > >
> > > > > > netfs_sreq: R=0000290e[5] DOWN IO    f=02 s=86200000 100000/100000 e=0
> > > > > > netfs_progress: R=0000290e[05] s=86200000 ct=0/200000 pa=200000/200000 sl=2
> > > > > > netfs_folio: i=f1c2900000000000 ix=86200-863ff read-done
> > > > > > netfs_folio: i=f1c2900000000000 ix=86200-863ff read-unlock
> > > > >
> > > > > Subreq 5 completed, and with the donation from subreq 6, had sufficient
> > > > > data to unlock the third folio.
> > > > >
> > > > > > netfs_sreq: R=0000290e[3] DOWN IO    f=02 s=86000000 100000/100000 e=0
> > > > > > netfs_progress: R=0000290e[03] s=86000000 ct=0/100000 pa=100000/100000 sl=1
> > > > > > netfs_donate: R=0000290e[03] -> [04] to-next am=100000
> > > > >
> > > > > Subrequest 3 completed, but wasn't large enough to cover a whole folio, so it
> > > > > donated its coverage forwards to subreq 4.  So far, so good.
> > > > >
> > > > > > netfs_sreq: R=0000290e[7] DOWN IO    f=02 s=86400000 100000/100000 e=0
> > > > > > netfs_progress: R=0000290e[07] s=86400000 ct=0/100000 pa=100000/100000 sl=3
> > > > > > netfs_donate: R=0000290e[07] -> [04] to-prev am=0
> > > > >
> > > > > Subreq 7 completed, but wasn't large enough to cover a whole folio, so it
> > > > > donated its coverage backwards to subreq 4.  This is a bug as subreq 7 is not
> > > > > contiguous with subreq 4.  It should instead have donated forwards to subreq
> > > > > 8.
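> > > > >
> > > > > (To make the broken invariant concrete, here is a toy userspace
> > > > > model - invented names, not the real netfs structures - of the
> > > > > contiguity rule a backward donation has to respect:)
> > > > >
> > > > >     /* Toy model: each subrequest covers [start, start + len) of
> > > > >      * the file and may only donate to a byte-contiguous list
> > > > >      * neighbour. */
> > > > >     #include <stdbool.h>
> > > > >     #include <stdio.h>
> > > > >
> > > > >     struct subreq {
> > > > >             unsigned long long start, len;
> > > > >     };
> > > > >
> > > > >     static bool can_donate_to_prev(const struct subreq *prev,
> > > > >                                    const struct subreq *s)
> > > > >     {
> > > > >             return prev && prev->start + prev->len == s->start;
> > > > >     }
> > > > >
> > > > >     int main(void)
> > > > >     {
> > > > >             /* Subreqs 4 and 7 from the trace above: 4 ends at
> > > > >              * 0x86200000, but 7 starts at 0x86400000. */
> > > > >             struct subreq s4 = { 0x86100000ULL, 0x100000ULL };
> > > > >             struct subreq s7 = { 0x86400000ULL, 0x100000ULL };
> > > > >
> > > > >             printf("7 -> 4 allowed: %d\n",
> > > > >                    can_donate_to_prev(&s4, &s7));  /* prints 0 */
> > > > >             return 0;
> > > > >     }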
> > > > >
> > > > > > netfs_sreq: R=0000290e[4] DOWN IO    f=02 s=86100000 100000/100000 e=0
> > > > > > netfs_sreq: R=0000290e[4] DOWN +DON  f=02 s=86000000 300000/300000 e=0
> > > > > > netfs_progress: R=0000290e[04] s=86000000 ct=0/300000 pa=200000/300000 sl=1
> > > > > > netfs_folio: i=f1c2900000000000 ix=86000-861ff read-done
> > > > > > netfs_folio: i=f1c2900000000000 ix=86000-861ff read-unlock
> > > > >
> > > > > Subreq 4 completed, and with the donation from subreq 3, had sufficient
> > > > > data to unlock the second folio.  However, it also has some excess from
> > > > > subreq 7 that it can't do anything with - and this gets lost.
> > > > >
> > > > > > netfs_sreq: R=0000290e[8] DOWN IO    f=02 s=86500000 100000/100000 e=0
> > > > > > netfs_progress: R=0000290e[08] s=86500000 ct=0/100000 pa=100000/100000 sl=3
> > > > > > netfs_donate: R=0000290e[08] -> [04] tail-to-prev am=100000
> > > > >
> > > > > Here's a repeat of the bug: subreq 8 donates to subreq 4, but, again, is not
> > > > > contiguous.  As these are happening concurrently, the other thread hasn't
> > > > > delisted subreq 4 yet.
> > > > >
> > > > > > netfs_sreq: R=0000290e[4] DOWN +DON  f=02 s=86000000 400000/400000 e=0
> > > > > > netfs_progress: R=0000290e[04] s=86200000 ct=200000/400000 pa=200000/200000 sl=2
> > > > >
> > > > > The request is screwed at this point: subreq 4 shows the extra coverage
> > > > > it has been donated, but it is unable to do anything with it.  There is
> > > > > no folio to wrangle as the third slot (sl=2) in the queue has already
> > > > > been cleared.
> > > > >
> > > > > (Note that this bug shouldn't happen with the patches currently on my
> > > > > netfs-writeback branch, as I got rid of the donation mechanism in favour
> > > > > of something simpler with single-threaded collection.)
> > > > >
> > > > > David
> > > > >
> > > >
> > > > Hi David,
> > > >
> > > > Tried your submitted patch "netfs: Fix non-contiguous donation between
> > > > completed reads" with the same workload.
> > > > It seems to be crashing elsewhere. I'm trying to capture the oops
> > > > message and ftrace output, but the crash keeps taking down the kernel.
> > > >
> > > > --
> > > > Regards,
> > > > Shyam
> > >
> > > Here's what I managed to get before the VM crashed:
> > >
> > > [Wed Dec  4 16:27:10 2024] BUG: kernel NULL pointer dereference, address: 0000000000000068
> > > [Wed Dec  4 16:27:10 2024] #PF: supervisor read access in kernel mode
> > > [Wed Dec  4 16:27:10 2024] #PF: error_code(0x0000) - not-present page
> > > [Wed Dec  4 16:27:10 2024] PGD 0 P4D 0
> > > [Wed Dec  4 16:27:10 2024] Oops: Oops: 0000 [#1] SMP PTI
> > > [Wed Dec  4 16:27:10 2024] CPU: 6 UID: 0 PID: 1263 Comm: kworker/6:3 Tainted: G           OE      6.12.0-mainline #10
> > > [Wed Dec  4 16:27:10 2024] Tainted: [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
> > > [Wed Dec  4 16:27:10 2024] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 08/23/2024
> > > [Wed Dec  4 16:27:10 2024] Workqueue: cifsiod smb2_readv_worker [cifs]
> > > [Wed Dec  4 16:27:10 2024] RIP: 0010:netfs_consume_read_data.isra.0+0x2d6/0xbe0 [netfs]
> > > [Wed Dec  4 16:27:10 2024] Code: 48 8b 43 78 4c 8b 7d c0 4c 8b 43 68 48 85 c0 0f 85 76 05 00 00 48 8b 55 90 48 8b 73 30 48 83 c2 70 48 39 d6 74 16 48 8b 7d 88 <48> 8b 4f 68 48 03 4f 60 48 39 4b 60 0f 84 58 06 00 00 49 29 c0 48
> > > [Wed Dec  4 16:27:10 2024] RSP: 0018:ffffad4a4582bd98 EFLAGS: 00010283
> > > [Wed Dec  4 16:27:10 2024] RAX: 0000000000000000 RBX: ffff9810d6ff6280 RCX: 0000000000100000
> > > [Wed Dec  4 16:27:10 2024] RDX: ffff981154071eb0 RSI: ffff9810d6ff71a8 RDI: 0000000000000000
> > > [Wed Dec  4 16:27:10 2024] workqueue: smb2_readv_worker [cifs] hogged CPU for >10000us 4 times, consider switching to WQ_UNBOUND
> > > [Wed Dec  4 16:27:10 2024] RBP: ffffad4a4582be10 R08: 0000000000100000 R09: 0000000000000000
> > > [Wed Dec  4 16:27:10 2024] R10: 0000000000000008 R11: fefefefefefefeff R12: 0000000000000000
> > > [Wed Dec  4 16:27:10 2024] workqueue: smb2_readv_worker [cifs] hogged CPU for >10000us 5 times, consider switching to WQ_UNBOUND
> > > [Wed Dec  4 16:27:10 2024] workqueue: smb2_readv_worker [cifs] hogged CPU for >10000us 7 times, consider switching to WQ_UNBOUND
> > > [Wed Dec  4 16:27:10 2024] R13: ffff981154072028 R14: 0000000000200000 R15: ffff981154072028
> > > [Wed Dec  4 16:27:10 2024] workqueue: smb2_readv_worker [cifs] hogged CPU for >10000us 11 times, consider switching to WQ_UNBOUND
> > >
> > >
> > > --
> > > Regards,
> > > Shyam
> > >
>
> Ack. Will check further on this today.
>
> --
> Regards,
> Shyam

Hi David,

Here's the output that I get with your patch. This null-ptr deref
crashes the kernel. Even with KASAN enabled, I do not see anything
significantly different:
[Thu Dec  5 09:55:18 2024] Oops: general protection fault, probably for non-canonical address 0xdffffc000000000c: 0000 [#1] SMP KASAN PTI
[Thu Dec  5 09:55:18 2024] KASAN: null-ptr-deref in range [0x0000000000000060-0x0000000000000067]
[Thu Dec  5 09:55:18 2024] CPU: 12 UID: 0 PID: 175 Comm: kworker/12:1 Not tainted 6.13.0-rc1-wkasan #12
[Thu Dec  5 09:55:18 2024] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 08/23/2024
[Thu Dec  5 09:55:18 2024] Workqueue: cifsiod smb2_readv_worker [cifs]
[Thu Dec  5 09:55:18 2024] RIP: 0010:netfs_consume_read_data.isra.0+0x74e/0x2a80 [netfs]
[Thu Dec  5 09:55:18 2024] Code: 80 3c 02 00 0f 85 aa 20 00 00 48 8b 85 38 ff ff ff 49 8b 4d 60 48 8d 78 60 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 <80> 3c 02 00 0f 85 0f 21 00 00 48 8b 85 38 ff ff ff 48 8d 78 68 4c
[Thu Dec  5 09:55:18 2024] RSP: 0018:ffff8881039cfc10 EFLAGS: 00010206
[Thu Dec  5 09:55:18 2024] RAX: dffffc0000000000 RBX: 0000000000100000 RCX: 0000000002400000
[Thu Dec  5 09:55:18 2024] RDX: 000000000000000c RSI: 0000000000000000 RDI: 0000000000000060
[Thu Dec  5 09:55:18 2024] RBP: ffff8881039cfcf0 R08: 0000000000000001 R09: ffffed1020739f76
[Thu Dec  5 09:55:18 2024] R10: 0000000000000003 R11: fefefefefefefeff R12: ffff88815fcb8c28
[Thu Dec  5 09:55:18 2024] R13: ffff88815fcb8d80 R14: ffff88813ce6fbf0 R15: ffff88813ce6fd68
[Thu Dec  5 09:55:18 2024] FS:  0000000000000000(0000) GS:ffff889fb2e00000(0000) knlGS:0000000000000000
[Thu Dec  5 09:55:18 2024] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Thu Dec  5 09:55:18 2024] CR2: 0000791b35f99018 CR3: 0000000189996002 CR4: 00000000003706f0
[Thu Dec  5 09:55:18 2024] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[Thu Dec  5 09:55:18 2024] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[Thu Dec  5 09:55:18 2024] Call Trace:
[Thu Dec  5 09:55:18 2024]  <TASK>
[Thu Dec  5 09:55:18 2024]  ? show_regs+0x64/0x70
[Thu Dec  5 09:55:18 2024]  ? die_addr+0x41/0xb0
[Thu Dec  5 09:55:18 2024]  ? exc_general_protection+0x163/0x250
[Thu Dec  5 09:55:18 2024]  ? asm_exc_general_protection+0x27/0x30
[Thu Dec  5 09:55:18 2024]  ? netfs_consume_read_data.isra.0+0x74e/0x2a80 [netfs]
[Thu Dec  5 09:55:18 2024]  netfs_read_subreq_terminated+0x928/0xec0 [netfs]
[Thu Dec  5 09:55:18 2024]  ? __pfx___schedule+0x10/0x10
[Thu Dec  5 09:55:18 2024]  ? __pfx__raw_spin_lock_irq+0x10/0x10
[Thu Dec  5 09:55:18 2024]  smb2_readv_worker+0x4b/0x60 [cifs]
[Thu Dec  5 09:55:18 2024]  process_one_work+0x5f3/0xe00
[Thu Dec  5 09:55:18 2024]  ? __kasan_check_write+0x14/0x20
[Thu Dec  5 09:55:18 2024]  worker_thread+0x87c/0x1540
[Thu Dec  5 09:55:18 2024]  ? _raw_spin_lock_irqsave+0x81/0xe0
[Thu Dec  5 09:55:18 2024]  ? __pfx__raw_spin_lock_irqsave+0x10/0x10
[Thu Dec  5 09:55:18 2024]  ? __pfx_worker_thread+0x10/0x10
[Thu Dec  5 09:55:18 2024]  kthread+0x2a2/0x370
[Thu Dec  5 09:55:18 2024]  ? __pfx_kthread+0x10/0x10
[Thu Dec  5 09:55:18 2024]  ret_from_fork+0x3d/0x80
[Thu Dec  5 09:55:18 2024]  ? __pfx_kthread+0x10/0x10
[Thu Dec  5 09:55:18 2024]  ret_from_fork_asm+0x1a/0x30
[Thu Dec  5 09:55:18 2024]  </TASK>
[Thu Dec  5 09:55:18 2024] Modules linked in: cmac nls_utf8 cifs
cifs_arc4 nls_ucs2_utils cifs_md4 netfs mptcp_diag xsk_diag raw_diag
unix_diag af_packet_diag netlink_diag udp_diag tcp_diag inet_diag qrtr
cfg80211 8021q garp mrp stp llc xt_conntrack nf_conntrack
nf_defrag_ipv6 nf_defrag_ipv4 xt_owner xt_tcpudp nft_compat nf_tables
binfmt_misc mlx5_ib ib_uverbs macsec ib_core nls_iso8859_1
intel_rapl_msr intel_rapl_common intel_uncore_frequency_common btrfs
isst_if_common blake2b_generic xor mlx5_core mlxfw psample tls joydev
mac_hid serio_raw raid6_pq hid_generic skx_edac_common libcrc32c
crct10dif_pclmul crc32_pclmul polyval_clmulni polyval_generic
ghash_clmulni_intel sha256_ssse3 sha1_ssse3 aesni_intel crypto_simd
cryptd rapl hid_hyperv hyperv_drm vmgenid hyperv_keyboard hid
hv_netvsc hyperv_fb sch_fq_codel dm_multipath msr nvme_fabrics
efi_pstore nfnetlink ip_tables x_tables autofs4
[Thu Dec  5 09:55:18 2024] ---[ end trace 0000000000000000 ]---

The RIP resolves to the line changed by the patch:
(gdb) l *(netfs_consume_read_data+0x74e)
0x228be is in netfs_consume_read_data (fs/netfs/read_collect.c:260).
255              * donation and should be able to unlock folios and/or donate nextwards.
256              */
257             if (!subreq->consumed &&
258                 !prev_donated &&
259                 !list_is_first(&subreq->rreq_link, &rreq->subrequests) &&
260                 subreq->start == prev->start + prev->len) {    <<<<<<<<<<<<<<<<<<<
261                     prev = list_prev_entry(subreq, rreq_link);
262                     WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len);
263                     subreq->start += subreq->len;
264                     subreq->len = 0;
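
One thing I notice in that listing: line 260 dereferences prev, but
prev is only assigned from list_prev_entry() on line 261, so it can
still be NULL (or stale) when the comparison runs - which would match
the reads at offsets 0x60/0x68 off a NULL pointer in both traces.
Reordering along these lines (just a sketch, not a tested patch)
would avoid that:

        if (!subreq->consumed &&
            !prev_donated &&
            !list_is_first(&subreq->rreq_link, &rreq->subrequests)) {
                prev = list_prev_entry(subreq, rreq_link);
                if (subreq->start == prev->start + prev->len) {
                        WRITE_ONCE(prev->next_donated,
                                   prev->next_donated + subreq->len);
                        subreq->start += subreq->len;
                        subreq->len = 0;
                        /* ... */
                }
        }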


-- 
Regards,
Shyam