Re: [PATCH 02/15] libnfs.a: Allow multiple RPC listeners to share listener port number

On Oct 11, 2010, at 9:22 AM, Steve Dickson wrote:

> 
> 
> On 10/10/2010 08:20 PM, Jim Rees wrote:
>> Chuck Lever wrote:
>> 
>>  Normally, when "-p" is not specified on the mountd command line, the
>>  TI-RPC library chooses random port numbers for each listener.  If a
>>  port number _is_ specified on the command line, all the listeners
>>  will get the same port number, so SO_REUSEADDR needs to be set on
>>  each socket.
>> 
>>  Thus we can't let TI-RPC create the listener sockets for us in this
>>  case; we must create them ourselves and then set SO_REUSEADDR (and
>>  other socket options) by hand.
>> 
>> It bothers me that there are two separate code paths in two separate
>> libraries for these two nearly identical cases.

Are there?  Where's the other one?

The two implementations I'm aware of are both in libnfs.a: one for legacy RPC, the other for TI-RPC, and both reside in support/nfs/ (and are thus shared by the daemons in nfs-utils).  What am I missing?

>>  Wouldn't it be better to
>> add this functionality to tirpc?
> I have to agree... Why can't we simply hand the tirpc code a socket
> that has SO_REUSEADDR set on it?

I'm not saying absolutely not, but at the moment I don't see the utility of doing what you suggest.

TI-RPC has a more-or-less standard API across O/S implementations.  I would think we should avoid making Linux-specific changes to the API, as that would limit the portability of RPC applications on Linux.  That's why I think this kind of thing belongs in nfs-utils, and not in TI-RPC.  If we put this code into TI-RPC, would we also need to do the same for the legacy glibc RPC implementation?

Second, SO_REUSEADDR is only half the fix; the other half is xprt caching.  Simply changing the library to set SO_REUSEADDR helps only the case where the application wants just one version (e.g., just MNT v1).

Creating a socket and passing it to svc_tli_create(3t) is exactly how the library was intended to be used in this case.  This is how rpcbind works, for instance, which suggests to me that anyone familiar with RPC servers will recognize this code idiom for what it is.
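For readers unfamiliar with the idiom, here is a hedged sketch of it (assumes libtirpc, link with -ltirpc; the program number and dispatch function are placeholders I made up, not nfs-utils code): create and bind the socket by hand, set SO_REUSEADDR, then pass the descriptor to svc_tli_create(3t).

```c
/* Sketch of the "expert" server-side idiom: a hand-made, pre-bound
 * socket handed to TI-RPC via svc_tli_create(3t).  EXAMPLE_PROG and
 * example_dispatch are placeholders; a real daemon would bind its
 * fixed "-p" port and finish with svc_run(). */
#include <rpc/rpc.h>
#include <netconfig.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define EXAMPLE_PROG	200300	/* placeholder program number */
#define EXAMPLE_VERS	1

static void example_dispatch(struct svc_req *rqstp, SVCXPRT *xprt)
{
	svcerr_noproc(xprt);	/* placeholder: reject every procedure */
}

int main(void)
{
	struct sockaddr_in sin;
	socklen_t len = sizeof(sin);
	struct netconfig *nconf;
	SVCXPRT *xprt;
	int one = 1;
	int fd;

	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0)
		return 1;

	/* The point under discussion: TI-RPC won't set this for us. */
	setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = 0;	/* a real "-p" case would use the fixed port */
	if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
		return 1;

	nconf = getnetconfigent("tcp");
	if (nconf == NULL)
		return 1;

	/* Hand the pre-bound socket to the library. */
	xprt = svc_tli_create(fd, nconf, NULL, 0, 0);
	if (xprt == NULL)
		return 1;
	/* NULL netconfig: install the dispatcher without touching rpcbind. */
	if (!svc_reg(xprt, EXAMPLE_PROG, EXAMPLE_VERS, example_dispatch, NULL))
		return 1;

	if (getsockname(fd, (struct sockaddr *)&sin, &len) == 0)
		printf("listener ready on port %u\n",
		       (unsigned)ntohs(sin.sin_port));
	/* svc_run() would go here in a real daemon. */
	freenetconfigent(nconf);
	return 0;
}
```

The library sees the descriptor is already bound and simply wraps it, which is why the application-created-socket path exists at all.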

The Sun ONC+ documentation also recommends the use of svc_tli_create(3t) when an application needs to pass in a socket or specify a bind address.  This is referred to as the "expert" level server-side interface, which suggests this is going to be used rarely and only in special situations.

Can you suggest an API change to TI-RPC that would allow server applications to request an internally created socket with SO_REUSEADDR set?

-- 
chuck[dot]lever[at]oracle[dot]com




--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

