Re: [PATCH 1/1] rpcbind: Fix DoS vulnerability in rpcbind.

> On Aug 8, 2021, at 5:18 PM, Dai Ngo <dai.ngo@xxxxxxxxxx> wrote:
> 
> On 8/8/21 12:36 PM, Chuck Lever III wrote:
>> 
>>> Signed-off-by: dai.ngo@xxxxxxxxxx
>>> ---
>>> src/rpcb_svc_com.c | 3 +++
>>> 1 file changed, 3 insertions(+)
>>> 
>>> diff --git a/src/rpcb_svc_com.c b/src/rpcb_svc_com.c
>>> index 1743dadf5db7..cb33519010d3 100644
>>> --- a/src/rpcb_svc_com.c
>>> +++ b/src/rpcb_svc_com.c
>>> @@ -1048,6 +1048,8 @@ netbuffree(struct netbuf *ap)
>>> }
>>> 
>>> 
>>> +extern void __svc_destroy_idle(int, bool_t);
>> Your libtirpc patch does not add __svc_destroy_idle()
>> as part of the official libtirpc API. We really must
>> avoid adding "secret" library entry points (not to
>> mention, as SteveD pointed out, a SONAME change
>> would be required).
>> 
>> rpcbind used to call __svc_clean_idle(), which is
>> undocumented (as far as I can tell).
> 
> I tried to use the same mechanism as __svc_clean_idle, assuming
> it was an acceptable approach.

Understood. I'm not quibbling with that mechanism, but
with the addition of another private API. Private APIs
are very difficult to maintain in the long run.


>> 44bf15b86861 removed that call.
> 
> which is one of the bugs that cause this problem.
> 
>> I think a better approach would be to duplicate your
>> proposed __svc_destroy_idle() code in rpcbind, since
>> rpcbind is not actually using the library's RPC
>> dispatcher.
> 
> then we have to do the same for nfs-utils.

Exactly.


> That means the
> same code exists in 3 places: libtirpc, rpcbind and nfs-utils.
> And any new consumer that uses its own my_svc_run will
> need its own __svc_destroy_idle. It does not sound very
> efficient.

You mean you don't particularly care for the duplication
of the code. Fair enough.

I don't have any problem copying this small piece of
code into callers that long ago went to the effort
of implementing their own polling loops. The code
duplication here isn't bothering me much.

If you see a way to convert these external callers to
use enough of the library's svc APIs that they pick up
__svc_destroy_idle() automatically, that would be an
acceptable alternative to me.
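
To make that concrete, here is a rough sketch of such a
caller-private loop. Everything in it (my_pollfds, my_npollfds,
destroy_idle_local, the 30-second poll timeout, the 200-connection
cap) is a placeholder for illustration, not rpcbind's actual
my_svc_run:

/* Sketch of a caller-private dispatch loop that reaps its own
 * idle connections instead of calling a private libtirpc entry
 * point.  my_pollfds/my_npollfds stand in for whatever pollfd
 * bookkeeping the caller already does. */
#include <errno.h>
#include <poll.h>
#include <rpc/rpc.h>

extern struct pollfd *my_pollfds;	/* maintained by the caller */
extern int my_npollfds;

extern void destroy_idle_local(int max_conns);	/* sketched further below */

void
my_svc_run(void)
{
	int ready;

	for (;;) {
		/* Wake up periodically even when idle so that
		 * stale connections get reaped. */
		ready = poll(my_pollfds, my_npollfds, 30 * 1000);
		switch (ready) {
		case -1:
			if (errno == EINTR)
				continue;
			return;
		case 0:
			destroy_idle_local(200);
			break;
		default:
			svc_getreq_poll(my_pollfds, ready);
			destroy_idle_local(200);
			break;
		}
	}
}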


>> That would get rid of the technical debt of calling
>> an undocumented library API.
>> 
>> The helper would need to call svc_vc_destroy()
>> rather than __xprt_unregister_unlocked() and
>> __svc_vc_dodestroy(). Also not clear to me that
>> rpcbind's my_svc_run() needs the protection of
>> holding svc_fd_lock, if rpcbind itself is not
>> multi-threaded?
> 
> The lock is held by __svc_destroy_idle, or __svc_clean_idle
> before it was removed, in libtirpc. I think libtirpc is
> multi-threaded.

libtirpc is multi-threaded, so __svc_destroy_idle()
is correct to take that lock. I'm suggesting that the
copies of destroy_idle() in external callers would
likely not need it: I don't see any reference to
svc_fd_lock in the rpcbind source code, so the rpcbind
version of the helper probably doesn't need to take
the lock.

Note that svc_fd_lock is not part of the public
library API either.
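
Continuing the sketch above, the caller-local reaper itself could
look something like this. It assumes the caller keeps its own table
of connected transports with a last-activity timestamp (the names
and data structure are hypothetical), and it uses the SVC_DESTROY()
macro from <rpc/svc.h>, which dispatches to the transport's
xp_destroy op, rather than any private libtirpc symbol. No
svc_fd_lock is taken, on the assumption that the caller is
single-threaded:

/* Hypothetical single-threaded reaper: once the caller's table of
 * connected transports grows past max_conns, destroy the least
 * recently active ones.  idle_table[] is the caller's own
 * bookkeeping, not a libtirpc structure. */
#include <rpc/rpc.h>
#include <time.h>

struct idle_conn {
	SVCXPRT	*xprt;		/* connected (vc) transport only */
	time_t	 last_active;	/* caller updates this per request */
};

extern struct idle_conn idle_table[];
extern int idle_count;

void
destroy_idle_local(int max_conns)
{
	int i, victim;

	while (idle_count > max_conns) {
		/* Find the least recently active connection. */
		victim = 0;
		for (i = 1; i < idle_count; i++)
			if (idle_table[i].last_active <
			    idle_table[victim].last_active)
				victim = i;

		/* SVC_DESTROY() dispatches to the transport's
		 * xp_destroy op (svc_vc_destroy for connected
		 * transports), which closes the fd and unregisters
		 * the xprt; no private libtirpc symbol is needed.
		 * The caller must also drop the fd from its own
		 * pollfd array. */
		SVC_DESTROY(idle_table[victim].xprt);

		/* Remove the entry from the caller's table. */
		idle_table[victim] = idle_table[--idle_count];
	}
}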


>> The alternative would be to construct a fully
>> documented public API with man pages and header
>> declarations. I'm not terribly comfortable with
>> diverging from the historical Sun TI-RPC programming
>> interface, however.
> 
> What I'm trying to do is to follow what __svc_clean_idle
> used to do before it was removed.

Sure, I get that. That seems like a prudent approach.


> I can't totally re-use
> __svc_clean_idle because it was written to work with
> svc_fdset and select.  The current svc_run uses poll, so
> the handling of idle connections is a little different.

Introducing yet another private API should be off the
table. An alternative fix would be to add a public API
that is documented and appears in the headers. Again,
I would rather not diverge from the historical Sun
TI-RPC library API, and SteveD would rather not have
to change the library's SONAME, so I'm hesitant to
add a public API too.

The bottom line is that adding a library API to handle
idle connections is the right architectural approach,
all things being equal. But the goal for this library
is compatibility with a historical API that is also
available in other operating systems. We have to stick
with that API if we can.
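
As an aside on the select-versus-poll point Dai raised above: part
of the reason the old __svc_clean_idle code doesn't drop straight
into a poll-based loop is the shape of the readiness bookkeeping.
A generic illustration of the two styles, with mark_last_active()
as a hypothetical helper (neither fragment is libtirpc code):

/* Generic illustration only; neither fragment is libtirpc code.
 * mark_last_active() is a hypothetical helper that records when a
 * connection last saw traffic. */
#include <poll.h>
#include <sys/select.h>
#include <time.h>

extern void mark_last_active(int fd, time_t now);	/* hypothetical */

/* select era: walk an fd_set up to the highest registered fd */
void
touch_active_select(fd_set *readable, int maxfd, time_t now)
{
	int fd;

	for (fd = 0; fd <= maxfd; fd++)
		if (FD_ISSET(fd, readable))
			mark_last_active(fd, now);
}

/* poll era: walk the pollfd array and test revents */
void
touch_active_poll(struct pollfd *pfds, int npfds, time_t now)
{
	int i;

	for (i = 0; i < npfds; i++)
		if (pfds[i].revents & POLLIN)
			mark_last_active(pfds[i].fd, now);
}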


--
Chuck Lever
