Re: [PATCH v4 2/2] NFSD: add rpc_status entry in nfsd debug filesystem

On Fri, Aug 04, 2023 at 09:56:59AM +0200, Lorenzo Bianconi wrote:
> [...]
> > > +	atomic_inc(&rqstp->rq_status_counter);
> > > +
> > 
> > Does this really have to be an atomic_t ? Seems like nfsd_dispatch
> > is the only function updating it. You might need release semantics
> > here and acquire semantics in nfsd_rpc_status_show(). I'd rather
> > avoid a full-on atomic op in nfsd_dispatch() unless it's absolutely
> > needed.
> 
> ack, I agree. I will work on it.
> 
> > 
> > Also, do you need to bump the rq_status_counter in the other RPC
> > dispatch routines (lockd and nfs callback) too?
> 
> the only consumer at the moment is nfsd, do you think we should add them in
> advance for lockd as well?

Ah... the reporting facility is added to the NFS server, not to
SunRPC. My mistake. I got confused because struct svc_rqst is an RPC
layer object.

So this is kind of a layering violation. The counter is for NFS
procedures, but it's being added to svc_rqst, which is an RPC layer
object. We're trying to fix up the places where NFSD stashes its
state in RPC structures. (I recall that Neil suggested this
arrangement).

But I don't have a better suggestion for where to put the counter.
Leave it where it is for now, and we'll come up with something
down the road ... or we'll decide we want the same watchdog for
all RPC services. ;-)


> > >  	rp = NULL;
> > >  	switch (nfsd_cache_lookup(rqstp, &rp)) {
> > >  	case RC_DOIT:
> > > @@ -1074,6 +1076,8 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
> > >  	if (!proc->pc_encode(rqstp, &rqstp->rq_res_stream))
> > >  		goto out_encode_err;
> > >  
> > > +	atomic_inc(&rqstp->rq_status_counter);
> > > +
> > >  	nfsd_cache_update(rqstp, rp, rqstp->rq_cachetype, statp + 1);
> > >  out_cached_reply:
> > >  	return 1;
> > > @@ -1149,3 +1153,121 @@ int nfsd_pool_stats_release(struct inode *inode, struct file *file)
> > >  	mutex_unlock(&nfsd_mutex);
> > >  	return ret;
> > >  }
> > > +
> > > +static int nfsd_rpc_status_show(struct seq_file *m, void *v)
> > > +{
> > > +	struct inode *inode = file_inode(m->file);
> > > +	struct nfsd_net *nn = net_generic(inode->i_sb->s_fs_info, nfsd_net_id);
> > > +	int i;
> > > +
> > > +	rcu_read_lock();
> > > +
> > > +	for (i = 0; i < nn->nfsd_serv->sv_nrpools; i++) {
> > > +		struct svc_rqst *rqstp;
> > > +
> > > +		list_for_each_entry_rcu(rqstp,
> > > +				&nn->nfsd_serv->sv_pools[i].sp_all_threads,
> > > +				rq_all) {
> > > +			struct nfsd_rpc_status_info {
> > > +				struct sockaddr daddr;
> > > +				struct sockaddr saddr;
> > > +				unsigned long rq_flags;
> > > +				__be32 rq_xid;
> > > +				u32 rq_prog;
> > > +				u32 rq_vers;
> > > +				const char *pc_name;
> > > +				ktime_t rq_stime;
> > > +			u32 opnum[NFSD_MAX_OPS_PER_COMPOUND]; /* NFSv4 compound */
> > > +			} rqstp_info;
> > > +			unsigned int status_counter;
> > > +			char buf[RPC_MAX_ADDRBUFLEN];
> > > +			int j, opcnt = 0;
> > > +
> > > +			if (!test_bit(RQ_BUSY, &rqstp->rq_flags))
> > > +				continue;
> > > +
> > > +			status_counter = atomic_read(&rqstp->rq_status_counter);
> > 
> > Neil said:
> > 
> > > I suggest you add a counter to the rqstp which is incremented from
> > > even to odd after parsing a request - including the v4 parsing needed to
> > > have a stable ->opcnt - and then incremented from odd to even when the
> > > request is complete.
> > > Then this code samples the counter, skips the rqst if the counter is
> > > even, and resamples the counter after collecting the data.  If it has
> > > changed, then drop the record.
> > 
> > I don't see a check if the status counter is even.
> > 
> > Also, as above, I'm not sure atomic_read() is necessary here. Maybe
> > just READ_ONCE() ? Neil, any thoughts?
> 
> 
> I used the RQ_BUSY check instead of checking if the counter is even, but I can
> see the point now. I will fix it.
> 
> > 
> > 
> > > +
> > > +			rqstp_info.rq_xid = rqstp->rq_xid;
> > > +			rqstp_info.rq_flags = rqstp->rq_flags;
> > > +			rqstp_info.rq_prog = rqstp->rq_prog;
> > > +			rqstp_info.rq_vers = rqstp->rq_vers;
> > > +			rqstp_info.pc_name = svc_proc_name(rqstp);
> > > +			rqstp_info.rq_stime = rqstp->rq_stime;
> > > +			memcpy(&rqstp_info.daddr, svc_daddr(rqstp),
> > > +			       sizeof(struct sockaddr));
> > > +			memcpy(&rqstp_info.saddr, svc_addr(rqstp),
> > > +			       sizeof(struct sockaddr));
> > > +
> > > +#ifdef CONFIG_NFSD_V4
> > > +			if (rqstp->rq_vers == NFS4_VERSION &&
> > > +			    rqstp->rq_proc == NFSPROC4_COMPOUND) {
> > > +				/* NFSv4 compound */
> > > +				struct nfsd4_compoundargs *args = rqstp->rq_argp;
> > > +
> > > +				opcnt = args->opcnt;
> > > +				for (j = 0; j < opcnt; j++) {
> > > +					struct nfsd4_op *op = &args->ops[j];
> > > +
> > > +					rqstp_info.opnum[j] = op->opnum;
> > > +				}
> > > +			}
> > > +#endif /* CONFIG_NFSD_V4 */
> > > +
> > > +			/* In order to detect whether the RPC request is
> > > +			 * still pending and the RPC info is stable, check
> > > +			 * if rq_status_counter has been incremented during
> > > +			 * the handler processing.
> > > +			 */
> > > +			if (status_counter != atomic_read(&rqstp->rq_status_counter))
> > > +				continue;
> > > +
> > > +			seq_printf(m,
> > > +				   "0x%08x, 0x%08lx, 0x%08x, NFSv%d, %s, %016lld,",
> > > +				   be32_to_cpu(rqstp_info.rq_xid),
> > > +				   rqstp_info.rq_flags,
> > > +				   rqstp_info.rq_prog,
> > > +				   rqstp_info.rq_vers,
> > > +				   rqstp_info.pc_name,
> > > +				   ktime_to_us(rqstp_info.rq_stime));
> > > +
> > > +			seq_printf(m, " %s,",
> > > +				   __svc_print_addr(&rqstp_info.saddr, buf,
> > > +						    sizeof(buf), false));
> > > +			seq_printf(m, " %s,",
> > > +				   __svc_print_addr(&rqstp_info.daddr, buf,
> > > +						    sizeof(buf), false));
> > > +			for (j = 0; j < opcnt; j++)
> > > +				seq_printf(m, " %s%s",
> > > +					   nfsd4_op_name(rqstp_info.opnum[j]),
> > > +					   j == opcnt - 1 ? "," : "");
> > > +			seq_puts(m, "\n");
> > > +		}
> > > +	}
> > > +
> > > +	rcu_read_unlock();
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +/**
> > > + * nfsd_rpc_status_open - Atomically copy a write verifier
> > 
> > The kdoc comment maybe was copied, pasted, and then not updated?
> 
> ack, I will fix it.
> 
> Regards,
> Lorenzo
> 
> > 
> > > + * @inode: entry inode pointer.
> > > + * @file: entry file pointer.
> > > + *
> > > + * This routine dumps info on the pending RPC requests queued in the NFS server.
> > > + */
> > > +int nfsd_rpc_status_open(struct inode *inode, struct file *file)
> > > +{
> > > +	struct nfsd_net *nn = net_generic(inode->i_sb->s_fs_info, nfsd_net_id);
> > > +
> > > +	mutex_lock(&nfsd_mutex);
> > > +	if (!nn->nfsd_serv) {
> > > +		mutex_unlock(&nfsd_mutex);
> > > +		return -ENODEV;
> > > +	}
> > > +
> > > +	svc_get(nn->nfsd_serv);
> > > +	mutex_unlock(&nfsd_mutex);
> > > +
> > > +	return single_open(file, nfsd_rpc_status_show, inode->i_private);
> > > +}
> > > diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> > > index fe1394cc1371..cb516da9e270 100644
> > > --- a/include/linux/sunrpc/svc.h
> > > +++ b/include/linux/sunrpc/svc.h
> > > @@ -270,6 +270,7 @@ struct svc_rqst {
> > >  						 * net namespace
> > >  						 */
> > >  	void **			rq_lease_breaker; /* The v4 client breaking a lease */
> > > +	atomic_t		rq_status_counter; /* RPC processing counter */
> > >  };
> > >  
> > >  #define SVC_NET(rqst) (rqst->rq_xprt ? rqst->rq_xprt->xpt_net : rqst->rq_bc_net)
> > > diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> > > index 587811a002c9..44eac83b35a1 100644
> > > --- a/net/sunrpc/svc.c
> > > +++ b/net/sunrpc/svc.c
> > > @@ -1629,7 +1629,7 @@ const char *svc_proc_name(const struct svc_rqst *rqstp)
> > >  		return rqstp->rq_procinfo->pc_name;
> > >  	return "unknown";
> > >  }
> > > -
> > > +EXPORT_SYMBOL_GPL(svc_proc_name);
> > >  
> > >  /**
> > >   * svc_encode_result_payload - mark a range of bytes as a result payload
> > > -- 
> > > 2.41.0
> > > 
> > 
> > -- 
> > Chuck Lever
> > 



-- 
Chuck Lever


