RE: [PATCH v4 00/27] add block layout driver to pnfs client

> -----Original Message-----
> From: Jim Rees [mailto:rees@xxxxxxxxx]
> Sent: Monday, August 01, 2011 10:22 PM
> To: Myklebust, Trond
> Cc: Peng Tao; Adamson, Andy; Christoph Hellwig; linux-nfs@xxxxxxxxxxxxxxx; peter honeyman
> Subject: Re: [PATCH v4 00/27] add block layout driver to pnfs client
> 
> Trond Myklebust wrote:
> 
>   On Mon, 2011-08-01 at 17:10 -0400, Trond Myklebust wrote:
>   > Looking at the callback code, I see that if
>   > tbl->highest_used_slotid != 0, then we BUG() while holding the
>   > backchannel's tbl->slot_tbl_lock spinlock. That seems a likely
>   > candidate for the above hang.
>   >
>   > Andy, how are we guaranteed that tbl->highest_used_slotid won't
>   > take values other than 0, and why do we commit suicide when it
>   > does? As far as I can see, there is no guarantee that we call
>   > nfs4_cb_take_slot() in nfs4_callback_compound(), yet we appear to
>   > unconditionally call nfs4_cb_free_slot() provided there is a
>   > session.
>   >
>   > The other strangeness is that there is nothing enforcing the
>   > NFS4_SESSION_DRAINING flag. If the session is draining, the
>   > back-channel simply ignores that and goes ahead with processing
>   > the callback. Is this to avoid deadlocks with the server returning
>   > NFS4ERR_BACK_CHAN_BUSY when the client does a DESTROY_SESSION?
> 
>   How about something like the following?
> 
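To make the suspected imbalance concrete, here is a minimal user-space model. This is not the kernel source; the names simply mirror nfs4_cb_take_slot(), nfs4_cb_free_slot() and highest_used_slotid from the mail, and assert() stands in for the BUG_ON() the kernel would hit while holding tbl->slot_tbl_lock:

/*
 * User-space sketch of the back-channel slot accounting described
 * above (assumed behaviour, not the actual fs/nfs callback code).
 */
#include <assert.h>
#include <stdio.h>

struct slot_table {
	int highest_used_slotid;	/* one back-channel slot: -1 = free, 0 = in use */
};

static void cb_take_slot(struct slot_table *tbl)
{
	tbl->highest_used_slotid++;
	assert(tbl->highest_used_slotid == 0);	/* kernel would BUG() here */
}

static void cb_free_slot(struct slot_table *tbl)
{
	tbl->highest_used_slotid--;
	assert(tbl->highest_used_slotid == -1);	/* kernel would BUG() here */
}

int main(void)
{
	struct slot_table tbl = { .highest_used_slotid = -1 };

	/* Balanced case: slot taken while the CB_SEQUENCE is processed,
	 * then freed afterwards. */
	cb_take_slot(&tbl);
	cb_free_slot(&tbl);

	/* Unbalanced case: the compound never took the slot, but the
	 * "session exists, so free the slot" path runs anyway.  The
	 * counter drops to -2 and the assertion fires. */
	cb_free_slot(&tbl);

	printf("not reached\n");
	return 0;
}

Compiled and run, the second cb_free_slot() aborts; that would be the user-space analogue of the BUG() taken with the spinlock still held.
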
> I applied this patch, along with Andy's htonl correction.  It now
> fails in a different way, with a deadlock.  The test runs several
> processes in parallel.
> 
> INFO: task t_mtab:1767 blocked for more than 10 seconds.
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
> message.
> t_mtab          D 0000000000000000     0  1767   1634 0x00000080
>  ffff8800376afd48 0000000000000086 ffff8800376afcd8 ffffffff00000000
>  ffff8800376ae010 ffff880037ef4500 0000000000012c80 ffff8800376affd8
>  ffff8800376affd8 0000000000012c80 ffffffff81a0c020 ffff880037ef4500
> Call Trace:
>  [<ffffffff8145411a>] __mutex_lock_common+0x110/0x171
>  [<ffffffff81454191>] __mutex_lock_slowpath+0x16/0x18
>  [<ffffffff81454257>] mutex_lock+0x1e/0x32
>  [<ffffffff811169a2>] kern_path_create+0x75/0x11e
>  [<ffffffff810fe836>] ? kmem_cache_alloc+0x5f/0xf1
>  [<ffffffff812127d9>] ? strncpy_from_user+0x43/0x72
>  [<ffffffff81114077>] ? getname_flags+0x158/0x1d2
>  [<ffffffff81116a86>] user_path_create+0x3b/0x52
>  [<ffffffff81117466>] sys_linkat+0x9a/0x120
>  [<ffffffff8109932e>] ? audit_syscall_entry+0x119/0x145
>  [<ffffffff81117505>] sys_link+0x19/0x1c
>  [<ffffffff8145b612>] system_call_fastpath+0x16/0x1b

That's a different issue. If you do an 'echo t >/proc/sysrq-trigger',
then do you see any other process that is stuck in the nfs layer and
that might be holding the inode->i_mutex?
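
For reference, a hypothetical user-space model of the kind of deadlock being asked about: one task holds the directory's i_mutex while stuck waiting indefinitely on an NFS operation, and t_mtab's link() then blocks in kern_path_create() -> mutex_lock() on the same mutex, giving the hung-task signature above. All names are illustrative; this is not kernel code.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t i_mutex  = PTHREAD_MUTEX_INITIALIZER;	/* stand-in for dir->i_mutex */
static pthread_mutex_t rpc_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  rpc_done = PTHREAD_COND_INITIALIZER;	/* never signalled */

/* A task already inside an NFS operation on the directory. */
static void *nfs_op(void *arg)
{
	pthread_mutex_lock(&i_mutex);		/* holds the parent directory mutex */
	pthread_mutex_lock(&rpc_lock);
	for (;;)				/* the "RPC" never completes */
		pthread_cond_wait(&rpc_done, &rpc_lock);
	return NULL;				/* not reached */
}

/* t_mtab doing sys_link(): kern_path_create() needs the same mutex. */
static void *t_mtab_link(void *arg)
{
	sleep(1);				/* let nfs_op take the mutex first */
	pthread_mutex_lock(&i_mutex);		/* blocks forever -> hung task */
	puts("link() would proceed here");
	pthread_mutex_unlock(&i_mutex);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, nfs_op, NULL);
	pthread_create(&b, NULL, t_mtab_link, NULL);
	pthread_join(b, NULL);			/* never returns */
	return 0;
}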



