Re: fscache/cachefiles reference counting bug?


 



Hi Lei/Daniel,

There is one additional thing I did to fix the issue: taking a local
pointer to the op at the top of cachefiles_read_waiter():

struct fscache_retrieval *op = monitor->op;

The problem is that the monitor can be freed between the enqueue and the
put operation, leading to the signature you are seeing.

I was able to reproduce this consistently by adding a delay in the
thread to simulate the race.


Thanks
Kiran



From 2a7b273b28b89887dba4d6a498f7c667ec9cde46 Mon Sep 17 00:00:00 2001
From: kiran modukuri <kiran.modukuri@xxxxxxxxx>
Date: Tue, 18 Jul 2017 16:25:49 -0700
Subject: [PATCH] FS-Cache: Fix assertion crash "FS-Cache: 6 == 5 is false",
 kernel BUG at /build/linux-lts-xenial-4.4.0/fs/fscache/operation.c:494!

---
 fs/cachefiles/rdwr.c   | 15 ++++++++++++---
 fs/fscache/operation.c |  5 +++--
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
index c0f3da3..87f81d3 100644
--- a/fs/cachefiles/rdwr.c
+++ b/fs/cachefiles/rdwr.c
@@ -29,7 +29,7 @@ static int cachefiles_read_waiter(wait_queue_t *wait, unsigned mode,
     struct cachefiles_object *object;
     struct wait_bit_key *key = _key;
     struct page *page = wait->private;
-
+    struct fscache_retrieval *op = monitor->op;
     ASSERT(key);

     _enter("{%lu},%u,%d,{%p,%u}",
@@ -56,11 +56,20 @@ static int cachefiles_read_waiter(wait_queue_t *wait, unsigned mode,
     object = container_of(monitor->op->op.object,
                   struct cachefiles_object, fscache);

+    /* bump the usage, because cachefiles_read_copier can free the
+     * op once the monitor is enqueued on the op->to_do list, even
+     * before the op is enqueued on the fscache_op_wq
+     */
+    fscache_get_retrieval(op);
     spin_lock(&object->work_lock);
     list_add_tail(&monitor->op_link, &monitor->op->to_do);
     spin_unlock(&object->work_lock);
-
-    fscache_enqueue_retrieval(monitor->op);
+    /* the monitor can be freed once it is enqueued and we drop the
+     * lock; alternatively we could enqueue under object->work_lock,
+     * but that would add more serialization.
+     */
+    fscache_enqueue_retrieval(op);
+    fscache_put_retrieval(op);
     return 0;
 }

diff --git a/fs/fscache/operation.c b/fs/fscache/operation.c
index de67745..24dfe5b 100644
--- a/fs/fscache/operation.c
+++ b/fs/fscache/operation.c
@@ -66,7 +66,8 @@ void fscache_enqueue_operation(struct fscache_operation *op)
     ASSERT(op->processor != NULL);
     ASSERT(fscache_object_is_available(op->object));
     ASSERTCMP(atomic_read(&op->usage), >, 0);
-    ASSERTCMP(op->state, ==, FSCACHE_OP_ST_IN_PROGRESS);
+    ASSERTIFCMP(op->state != FSCACHE_OP_ST_IN_PROGRESS,
+            op->state, ==, FSCACHE_OP_ST_CANCELLED);

     fscache_stat(&fscache_n_op_enqueue);
     switch (op->flags & FSCACHE_OP_TYPE) {
@@ -481,7 +482,7 @@ void fscache_put_operation(struct fscache_operation *op)
     struct fscache_cache *cache;

     _enter("{OBJ%x OP%x,%d}",
-           op->object->debug_id, op->debug_id, atomic_read(&op->usage));
+           (op->object ? op->object->debug_id : 0), op->debug_id, atomic_read(&op->usage));

     ASSERTCMP(atomic_read(&op->usage), >, 0);

-- 
2.7.4







On Fri, Apr 6, 2018 at 6:11 PM, Lei Xue <xuelei@xxxxxxxxxx> wrote:

> Hello Daniel,
>
> I have another reference count issue, and a patch for it. Although
> they have different panic stacks, I think they have the same root cause.
> https://www.redhat.com/archives/linux-cachefs/2018-February/msg00000.html
>
> I did try the patch from
> https://www.redhat.com/archives/linux-cachefs/2017-June/msg00002.html,
> but it doesn't seem to work in my environment.
>
> Thanks,
> -Lei
>
>
> On 06/04/2018 7:16 PM, Daniel Axtens wrote:
> > Hi,
> >
> > We have a user reporting BUG splats and FSCache assertion failures with
> > FSCache and NFS. The full stack traces are below.
> >
> > We are consistently seeing the following FSCache assertion:
> >
> > [81738.886634] FS-Cache:
> > [81738.888281] FS-Cache: Assertion failed
> > [81738.889461] FS-Cache: 6 == 5 is false
> > [81738.890625] ------------[ cut here ]------------
> > [81738.891706] kernel BUG at /build/linux-hVVhWi/linux-4.4.
> 0/fs/fscache/operation.c:494!
> >
> > which references the following:
> >
> >       ASSERTIFCMP(op->state != FSCACHE_OP_ST_INITIALISED &&
> >                   op->state != FSCACHE_OP_ST_COMPLETE,
> >                   op->state, ==, FSCACHE_OP_ST_CANCELLED);
> >
> > So the operation has become DEAD when it was expected to be CANCELLED.
> >
> > fscache_put_operation() is the only thing I can find that will mark an
> > operation as DEAD, and then only when atomic_dec_and_test(&op->usage)
> > goes to 0 twice.
> >
> > This looks suspiciously like the bug reported in June 2017:
> > https://www.redhat.com/archives/linux-cachefs/2017-June/msg00002.html
> > "Re:  Problem: Race in fscache operation enqueuing for
> > reading and copying multiple pages from cachefiles to netfs causes
> > kernel oops or at worst memory scribbling."
> >
> > Indeed, immediately prior to the assertion failure, we sometimes see a
> > WARN in the workqueue code from trying to insert a work that is not
> > pending, and the stack trace includes cachefiles_read_waiter calling
> > fscache_enqueue_operation.
> >
> > Was there ever any progress on that bug? Or is there anything else I
> > should be looking at?
> >
> > Thanks in advance!
> >
> > Regards,
> > Daniel
> >
> >
> > Full stack traces:
> >
> > [81738.885081] ------------[ cut here ]------------
> > [81738.885165] WARNING: CPU: 67 PID: 70869 at
> /build/linux-hVVhWi/linux-4.4.0/kernel/workqueue.c:623
> insert_work+0xac/0xc0()
> > [81738.885175] Modules linked in: binfmt_misc veth ipt_MASQUERADE
> nf_nat_masquerade_ipv4 nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo
> iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 xt_addrtype
> xt_conntrack nf_nat nf_conntrack br_netfilter bridge stp llc overlay
> xt_multiport iptable_filter ip_tables x_tables nfsv3 nfs_acl
> rpcsec_gss_krb5 auth_rpcgss nfsv4 nfs lockd grace nv_peer_mem(OE)
> cachefiles fscache msr rdma_ucm(OE) ib_ucm(OE) rdma_cm(OE) iw_cm(OE)
> configfs ib_ipoib(OE) ib_cm(OE) ib_uverbs(OE) ib_umad(OE) mlx5_ib(OE)
> mlx4_ib(OE) ib_core(OE) mlx4_en(OE) mlx4_core(OE) bonding ipmi_ssif
> intel_rapl x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm
> irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel
> aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd joydev input_leds
> mxm_wmi sb_edac edac_core lpc_ich mei_me mei nvidia_uvm(POE) shpchp
> 8250_fintek ipmi_si wmi acpi_power_meter mac_hid sunrpc ipmi_devintf
> ipmi_msghandler knem(OE) autofs4
> >   nvidia_drm(POE) nvidia_modeset(POE) ast i2c_algo_bit nvidia(POE) ttm
> ixgbe drm_kms_helper syscopyarea ses dca sysfillrect mlx5_core(OE)
> enclosure sysimgblt fb_sys_fops vxlan hid_generic ip6_udp_tunnel usbhid
> udp_tunnel mlx_compat(OE) hid ahci ptp megaraid_sas drm libahci pps_core
> mdio fjes
> > [81738.885422] CPU: 67 PID: 70869 Comm: xxxxxx Tainted: P           OE
>  4.4.0-92-generic #115-Ubuntu
> > [81738.885428] Hardware name: xxxxx
> > [81738.885431]  0000000000000086 6bb3b89857e8c7df ffff887f7f2c3b60
> ffffffff813f9c83
> > [81738.885436]  0000000000000000 ffffffff81cbb498 ffff887f7f2c3b98
> ffffffff81081312
> > [81738.885439]  ffff887f22fdd200 ffff887f626bc540 ffff887f656e8000
> 0000000000000005
> > [81738.885446] Call Trace:
> > [81738.885452]  <IRQ>  [<ffffffff813f9c83>] dump_stack+0x63/0x90
> > [81738.885512]  [<ffffffff81081312>] warn_slowpath_common+0x82/0xc0
> > [81738.885535]  [<ffffffff8108145a>] warn_slowpath_null+0x1a/0x20
> > [81738.885558]  [<ffffffff8109869c>] insert_work+0xac/0xc0
> > [81738.885571]  [<ffffffff810987e9>] __queue_work+0x139/0x3b0
> > [81738.885579]  [<ffffffff81098c77>] queue_work_on+0x27/0x40
> > [81738.885625]  [<ffffffffc054b064>] fscache_enqueue_operation+0x94/0x150
> [fscache]
> > [81738.885640]  [<ffffffffc03b68ec>] cachefiles_read_waiter+0xbc/0x100
> [cachefiles]
> > [81738.885666]  [<ffffffff810c3d12>] __wake_up_common+0x52/0x90
> > [81738.885694]  [<ffffffff810c3d89>] __wake_up+0x39/0x50
> > [81738.885715]  [<ffffffff810c4040>] __wake_up_bit+0x50/0x70
> > [81738.885726]  [<ffffffff8118f739>] unlock_page+0x69/0x70
> > [81738.885746]  [<ffffffff812e8073>] mpage_end_io+0x33/0xc0
> > [81738.885770]  [<ffffffff813c28af>] bio_endio+0x3f/0x60
> > [81738.885782]  [<ffffffff813ca257>] blk_update_request+0x87/0x310
> > [81738.885801]  [<ffffffff815c40b3>] scsi_end_request+0x33/0x1d0
> > [81738.885815]  [<ffffffff815c77e6>] scsi_io_completion+0x1b6/0x690
> > [81738.885841]  [<ffffffff815be40f>] scsi_finish_command+0xcf/0x120
> > [81738.885858]  [<ffffffff815c6f74>] scsi_softirq_done+0x124/0x150
> > [81738.885897]  [<ffffffff813d2147>] blk_done_softirq+0x87/0xb0
> > [81738.885916]  [<ffffffff81085df1>] __do_softirq+0x101/0x290
> > [81738.885933]  [<ffffffff810860f3>] irq_exit+0xa3/0xb0
> > [81738.885950]  [<ffffffff818449e4>] do_IRQ+0x54/0xd0
> > [81738.885975]  [<ffffffff81842ac2>] common_interrupt+0x82/0x82
> > [81738.885982]  <EOI>  [<ffffffff811fe87c>] ? try_charge+0x3c/0x660
> > [81738.886035]  [<ffffffff810c4125>] ? wake_up_bit+0x25/0x30
> > [81738.886057]  [<ffffffff81202ceb>] mem_cgroup_try_charge+0x6b/0x1e0
> > [81738.886074]  [<ffffffff8118fc40>] __add_to_page_cache_locked+
> 0x60/0x1f0
> > [81738.886194]  [<ffffffffc141cd83>] ? nfs_pageio_add_request+0xa3/0x1b0
> [nfs]
> > [81738.886284]  [<ffffffffc141d760>] ? nfs_read_completion+0x210/0x210
> [nfs]
> > [81738.886299]  [<ffffffff8118fe27>] add_to_page_cache_lru+0x37/0x90
> > [81738.886316]  [<ffffffff8119df5b>] read_cache_pages+0x7b/0x120
> > [81738.886401]  [<ffffffffc141de79>] nfs_readpages+0x129/0x200 [nfs]
> > [81738.886429]  [<ffffffff8119e1e6>] __do_page_cache_readahead+
> 0x196/0x230
> > [81738.886454]  [<ffffffff8119e3bd>] ondemand_readahead+0x13d/0x250
> > [81738.886469]  [<ffffffff8119e53b>] page_cache_async_readahead+
> 0x6b/0x70
> > [81738.886478]  [<ffffffff81190dd8>] generic_file_read_iter+0x458/0x670
> > [81738.886529]  [<ffffffffc1415292>] ? __nfs_revalidate_mapping+0xc2/0x280
> [nfs]
> > [81738.886564]  [<ffffffffc1410c72>] nfs_file_read+0x52/0xa0 [nfs]
> > [81738.886577]  [<ffffffff8120eec4>] new_sync_read+0x94/0xd0
> > [81738.886584]  [<ffffffff8120ef26>] __vfs_read+0x26/0x40
> > [81738.886589]  [<ffffffff8120f4e6>] vfs_read+0x86/0x130
> > [81738.886595]  [<ffffffff81210235>] SyS_read+0x55/0xc0
> > [81738.886606]  [<ffffffff81841ff2>] entry_SYSCALL_64_fastpath+0x16/0x71
> > [81738.886612] ---[ end trace dbbc7c60b0e5f2a3 ]---
> >
> > [81738.886634] FS-Cache:
> > [81738.888281] FS-Cache: Assertion failed
> > [81738.889461] FS-Cache: 6 == 5 is false
> > [81738.890625] ------------[ cut here ]------------
> > [81738.891706] kernel BUG at /build/linux-hVVhWi/linux-4.4.
> 0/fs/fscache/operation.c:494!
> > [81738.892821] invalid opcode: 0000 [#1] SMP
> > [81738.893930] Modules linked in: binfmt_misc veth ipt_MASQUERADE
> nf_nat_masquerade_ipv4 nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo
> iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 xt_addrtype
> xt_conntrack nf_nat nf_conntrack br_netfilter bridge stp llc overlay
> xt_multiport iptable_filter ip_tables x_tables nfsv3 nfs_acl
> rpcsec_gss_krb5 auth_rpcgss nfsv4 nfs lockd grace nv_peer_mem(OE)
> cachefiles fscache msr rdma_ucm(OE) ib_ucm(OE) rdma_cm(OE) iw_cm(OE)
> configfs ib_ipoib(OE) ib_cm(OE) ib_uverbs(OE) ib_umad(OE) mlx5_ib(OE)
> mlx4_ib(OE) ib_core(OE) mlx4_en(OE) mlx4_core(OE) bonding ipmi_ssif
> intel_rapl x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm
> irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel
> aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd joydev input_leds
> mxm_wmi sb_edac edac_core lpc_ich mei_me mei nvidia_uvm(POE) shpchp
> 8250_fintek ipmi_si wmi acpi_power_meter mac_hid sunrpc ipmi_devintf
> ipmi_msghandler knem(OE) autofs4
> >   nvidia_drm(POE) nvidia_modeset(POE) ast i2c_algo_bit nvidia(POE) ttm
> ixgbe drm_kms_helper syscopyarea ses dca sysfillrect mlx5_core(OE)
> enclosure sysimgblt fb_sys_fops vxlan hid_generic ip6_udp_tunnel usbhid
> udp_tunnel mlx_compat(OE) hid ahci ptp megaraid_sas drm libahci pps_core
> mdio fjes
> > [81738.908228] CPU: 23 PID: 36247 Comm: kworker/u162:0 Tainted: P
> W  OE   4.4.0-92-generic #115-Ubuntu
> > [81738.910190] Hardware name: xxxxx
> > [81738.912691] Workqueue: fscache_operation fscache_op_work_func
> [fscache]
> > [81738.914032] task: ffff8846c0f23800 ti: ffff886cf2f0c000 task.ti:
> ffff886cf2f0c000
> > [81738.914880] RIP: 0010:[<ffffffffc054af96>]  [<ffffffffc054af96>]
> fscache_put_operation+0x1d6/0x210 [fscache]
> > [81738.915788] RSP: 0000:ffff886cf2f0fde0  EFLAGS: 00010282
> > [81738.916700] RAX: 0000000000000019 RBX: ffff887f626bc540 RCX:
> 0000000000000006
> > [81738.917635] RDX: 0000000000000000 RSI: 0000000000000246 RDI:
> ffff887f7eccdd50
> > [81738.918571] RBP: ffff886cf2f0fdf8 R08: 0000000000000000 R09:
> 000000000000079e
> > [81738.919539] R10: 0000000000000002 R11: 000000000000079e R12:
> ffff887f656e8000
> > [81738.920454] R13: ffff887f22fdd200 R14: 0000000000001440 R15:
> ffff887f626bc540
> > [81738.921275] FS:  0000000000000000(0000) GS:ffff887f7ecc0000(0000)
> knlGS:0000000000000000
> > [81738.922089] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [81738.922922] CR2: 00007ff2cce7e880 CR3: 0000000002e0a000 CR4:
> 00000000003406e0
> > [81738.923765] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> > [81738.924650] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
> 0000000000000400
> > [81738.925517] Stack:
> > [81738.926320]  ffff887f626bc540 ffff887f656e8000 ffff887f22fdd200
> ffff886cf2f0fe10
> > [81738.927155]  ffffffffc054b1ca ffff887f608c3ec0 ffff886cf2f0fe50
> ffffffff8109a625
> > [81738.928059]  0000000000000000 ffff887f608c3ef0 ffff887f656e8000
> 0000000000000088
> > [81738.928961] Call Trace:
> > [81738.929825]  [<ffffffffc054b1ca>] fscache_op_work_func+0x2a/0x50
> [fscache]
> > [81738.930680]  [<ffffffff8109a625>] process_one_work+0x165/0x480
> > [81738.931547]  [<ffffffff8109a98b>] worker_thread+0x4b/0x4c0
> > [81738.932427]  [<ffffffff8109a940>] ? process_one_work+0x480/0x480
> > [81738.933506]  [<ffffffff810a0cc5>] kthread+0xe5/0x100
> > [81738.934498]  [<ffffffff810a0be0>] ? kthread_create_on_node+0x1e0/
> 0x1e0
> > [81738.935551]  [<ffffffff8184238f>] ret_from_fork+0x3f/0x70
> > [81738.936519]  [<ffffffff810a0be0>] ? kthread_create_on_node+0x1e0/
> 0x1e0
> > [81738.937372] Code: c7 c7 fb 02 55 c0 e8 e1 2b c4 c0 48 c7 c7 09 03 55
> c0 e8 d5 2b c4 c0 8b 73 40 ba 05 00 00 00 48 c7 c7 b0 ee 54 c0 e8 c1 2b c4
> c0 <0f> 0b 48 c7 c7 fb 02 55 c0 e8 b3 2b c4 c0 48 c7 c7 09 03 55 c0
> > [81738.939258] RIP  [<ffffffffc054af96>] fscache_put_operation+0x1d6/0x210
> [fscache]
> > [81738.940185]  RSP <ffff886cf2f0fde0>
> > [81738.944238] ---[ end trace dbbc7c60b0e5f2a4 ]---
> >
> > (this usually follows; it just looks like a cascading failure from
> > messing with the work queue data structures)
> >
> > [81738.946803] BUG: unable to handle kernel paging request at
> ffffffffffffffd8
> > [81738.947768] IP: [<ffffffff810a1370>] kthread_data+0x10/0x20
> > [81738.948624] PGD 2e0d067 PUD 2e0f067 PMD 0
> > [81738.949445] Oops: 0000 [#2] SMP
> > [81738.950237] Modules linked in: binfmt_misc veth ipt_MASQUERADE
> nf_nat_masquerade_ipv4 nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo
> iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 xt_addrtype
> xt_conntrack nf_nat nf_conntrack br_netfilter bridge stp llc overlay
> xt_multiport iptable_filter ip_tables x_tables nfsv3 nfs_acl
> rpcsec_gss_krb5 auth_rpcgss nfsv4 nfs lockd grace nv_peer_mem(OE)
> cachefiles fscache msr rdma_ucm(OE) ib_ucm(OE) rdma_cm(OE) iw_cm(OE)
> configfs ib_ipoib(OE) ib_cm(OE) ib_uverbs(OE) ib_umad(OE) mlx5_ib(OE)
> mlx4_ib(OE) ib_core(OE) mlx4_en(OE) mlx4_core(OE) bonding ipmi_ssif
> intel_rapl x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm
> irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel
> aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd joydev input_leds
> mxm_wmi sb_edac edac_core lpc_ich mei_me mei nvidia_uvm(POE) shpchp
> 8250_fintek ipmi_si wmi acpi_power_meter mac_hid sunrpc ipmi_devintf
> ipmi_msghandler knem(OE) autofs4
> >   nvidia_drm(POE) nvidia_modeset(POE) ast i2c_algo_bit nvidia(POE) ttm
> ixgbe drm_kms_helper syscopyarea ses dca sysfillrect mlx5_core(OE)
> enclosure sysimgblt fb_sys_fops vxlan hid_generic ip6_udp_tunnel usbhid
> udp_tunnel mlx_compat(OE) hid ahci ptp megaraid_sas drm libahci pps_core
> mdio fjes
> > [81738.959336] CPU: 23 PID: 36247 Comm: kworker/u162:0 Tainted: P      D
> W  OE   4.4.0-92-generic #115-Ubuntu
> > [81738.960164] Hardware name: xxxxx
> > [81738.960993] task: ffff8846c0f23800 ti: ffff886cf2f0c000 task.ti:
> ffff886cf2f0c000
> > [81738.961808] RIP: 0010:[<ffffffff810a1370>]  [<ffffffff810a1370>]
> kthread_data+0x10/0x20
> > [81738.962644] RSP: 0000:ffff886cf2f0fab8  EFLAGS: 00010002
> > [81738.963475] RAX: 0000000000000000 RBX: 0000000000000017 RCX:
> ffffffff82109e80
> > [81738.964312] RDX: 0000000000000017 RSI: 0000000000000017 RDI:
> ffff8846c0f23800
> > [81738.965142] RBP: ffff886cf2f0fab8 R08: 00000000ffffffff R09:
> 0000000000000000
> > [81738.965996] R10: ffff8846c0f23860 R11: 0000000000009800 R12:
> 0000000000000000
> > [81738.966825] R13: 0000000000016dc0 R14: ffff8846c0f23800 R15:
> ffff887f7ecd6dc0
> > [81738.967676] FS:  0000000000000000(0000) GS:ffff887f7ecc0000(0000)
> knlGS:0000000000000000
> > [81738.968534] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [81738.969369] CR2: 0000000000000028 CR3: 0000000002e0a000 CR4:
> 00000000003406e0
> > [81738.970202] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> > [81738.971053] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
> 0000000000000400
> > [81738.971893] Stack:
> > [81738.972728]  ffff886cf2f0fad0 ffffffff8109b9c1 ffff887f7ecd6dc0
> ffff886cf2f0fb20
> > [81738.973583]  ffffffff8183dab0 ffff887f5f808aa0 ffff886c00000017
> ffff8846c0f23800
> > [81738.974448]  ffff886cf2f10000 ffff8846c0f23ed0 ffff886cf2f0f6d0
> 0000000000000000
> > [81738.975289] Call Trace:
> > [81738.976164]  [<ffffffff8109b9c1>] wq_worker_sleeping+0x11/0x90
> > [81738.977019]  [<ffffffff8183dab0>] __schedule+0x650/0xa30
> > [81738.977844]  [<ffffffff8183dec5>] schedule+0x35/0x80
> > [81738.978668]  [<ffffffff81084465>] do_exit+0x775/0xb00
> > [81738.979484]  [<ffffffff81031c41>] oops_end+0xa1/0xd0
> > [81738.980294]  [<ffffffff810320fb>] die+0x4b/0x70
> > [81738.981091]  [<ffffffff8102f121>] do_trap+0xb1/0x140
> > [81738.981882]  [<ffffffff8102f4a9>] do_error_trap+0x89/0x110
> > [81738.982656]  [<ffffffffc054af96>] ? fscache_put_operation+0x1d6/0x210
> [fscache]
> > [81738.983428]  [<ffffffff810ca782>] ? up+0x32/0x50
> > [81738.984189]  [<ffffffff810d8656>] ? console_unlock+0x216/0x550
> > [81738.984934]  [<ffffffff8102fa10>] do_invalid_op+0x20/0x30
> > [81738.985677]  [<ffffffff81843b0e>] invalid_op+0x1e/0x30
> > [81738.986416]  [<ffffffffc054af96>] ? fscache_put_operation+0x1d6/0x210
> [fscache]
> > [81738.987155]  [<ffffffffc054b1ca>] fscache_op_work_func+0x2a/0x50
> [fscache]
> > [81738.987890]  [<ffffffff8109a625>] process_one_work+0x165/0x480
> > [81738.988631]  [<ffffffff8109a98b>] worker_thread+0x4b/0x4c0
> > [81738.989349]  [<ffffffff8109a940>] ? process_one_work+0x480/0x480
> > [81738.990049]  [<ffffffff810a0cc5>] kthread+0xe5/0x100
> > [81738.990724]  [<ffffffff810a0be0>] ? kthread_create_on_node+0x1e0/
> 0x1e0
> > [81738.991390]  [<ffffffff8184238f>] ret_from_fork+0x3f/0x70
> > [81738.992042]  [<ffffffff810a0be0>] ? kthread_create_on_node+0x1e0/
> 0x1e0
> > [81738.992671] Code: ff ff ff be 49 02 00 00 48 c7 c7 98 b9 cb 81 e8 e7
> 00 fe ff e9 a6 fe ff ff 66 90 0f 1f 44 00 00 48 8b 87 18 05 00 00 55 48 89
> e5 <48> 8b 40 d8 5d c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00
> > [81738.994013] RIP  [<ffffffff810a1370>] kthread_data+0x10/0x20
> > [81738.994627]  RSP <ffff886cf2f0fab8>
> > [81738.995216] CR2: ffffffffffffffd8
> > [81738.995785] ---[ end trace dbbc7c60b0e5f2a5 ]---
> > [81738.997887] Fixing recursive fault but reboot is needed!
> >
> > (hardlockups seem to follow)
> >
> > --
> > Linux-cachefs mailing list
> > Linux-cachefs@xxxxxxxxxx
> > https://www.redhat.com/mailman/listinfo/linux-cachefs
>
>
>


-- 
Kiran



