On Mon, Sep 9, 2019 at 9:17 AM Vitaly Wool <vitalywool@xxxxxxxxx> wrote:
>
> On Mon, Sep 9, 2019 at 2:14 AM Agustín DallʼAlba
> <agustin@xxxxxxxxxxxxxxx> wrote:
> >
> > Hello,
> >
> > > Would you care to test with
> > > https://bugzilla.kernel.org/attachment.cgi?id=284883 ? That one
> > > should
> > > fix the problem you're facing.
> >
> > Thank you, my machine doesn't crash when stressed anymore. :)
>
> That's good to hear :) I hope the fix gets into 5.3.
>
> > However trace 2 (__zswap_pool_release blocked for more than xxxx
> > seconds) still happens.
>
> That one is pretty new and seems to have been caused by
> d776aaa9895eb6eb770908e899cb7f5bd5025b3c ("mm/z3fold.c: fix race
> between migration and destruction").
> I'm looking into this now and CC'ing Henry just in case.

Agustin, could you please try reverting that commit? I don't think it's
working as it should.

> ~Vitaly
>
> > > > > =====================================
> > > > > TRACE 2: z3fold_zpool_destroy blocked
> > > > > =====================================
> > > > >
> > > > > INFO: task kworker/2:3:335 blocked for more than 122 seconds.
> > > > >       Not tainted 5.3.0-rc7-1-ARCH #1
> > > > > "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > > > > kworker/2:3     D    0   335      2 0x80004080
> > > > > Workqueue: events __zswap_pool_release
> > > > > Call Trace:
> > > > >  ? __schedule+0x27f/0x6d0
> > > > >  schedule+0x43/0xd0
> > > > >  z3fold_zpool_destroy+0xe9/0x130
> > > > >  ? wait_woken+0x70/0x70
> > > > >  zpool_destroy_pool+0x5c/0x90
> > > > >  __zswap_pool_release+0x6a/0xb0
> > > > >  process_one_work+0x1d1/0x3a0
> > > > >  worker_thread+0x4a/0x3d0
> > > > >  kthread+0xfb/0x130
> > > > >  ? process_one_work+0x3a0/0x3a0
> > > > >  ? kthread_park+0x80/0x80
> > > > >  ret_from_fork+0x35/0x40
> >
> > Kind regards.
> >
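For reference, the revert being asked for is a plain `git revert` of that commit in the kernel tree. A minimal self-contained sketch of the mechanics follows, using a throwaway repository rather than the kernel tree; the file name `race-fix.c` and the commit messages are made up for illustration:

```shell
# In the actual kernel tree the command would simply be:
#   git revert d776aaa9895eb6eb770908e899cb7f5bd5025b3c
# Below, a throwaway repo demonstrates what a revert does.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
# Base commit, then a commit adding the "suspect" change (illustrative file).
git -c user.email=test@example.com -c user.name=test \
    commit -q --allow-empty -m "base"
echo "suspect change" > race-fix.c
git add race-fix.c
git -c user.email=test@example.com -c user.name=test \
    commit -q -m "add suspect change"
# Revert the tip commit; this creates a new commit undoing it.
git -c user.email=test@example.com -c user.name=test \
    revert --no-edit HEAD >/dev/null
test ! -f race-fix.c && echo "revert OK"
```

After a revert the tree builds without the suspect change while history keeps a record of both the original commit and its revert, which makes it easy to re-apply later if the commit turns out to be innocent.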