On 08/16/2016 10:21 PM, Michael Kerrisk (man-pages) wrote:
@@ -1132,8 +1136,8 @@ long pipe_fcntl(struct file *file, unsigned int cmd, unsigned long arg)
 	if (!capable(CAP_SYS_RESOURCE) && size > pipe_max_size) {
 		ret = -EPERM;
 		goto out;
-	} else if ((too_many_pipe_buffers_hard(pipe->user) ||
-		    too_many_pipe_buffers_soft(pipe->user)) &&
+	} else if ((too_many_pipe_buffers_hard(pipe->user, nr_pages) ||
+		    too_many_pipe_buffers_soft(pipe->user, nr_pages)) &&
 		   !capable(CAP_SYS_RESOURCE) &&
 		   !capable(CAP_SYS_ADMIN)) {
 		ret = -EPERM;
Isn't there also a race where two or more concurrent pipe()/fcntl()
calls can together push us over the limits before the accounting is done?
I guess there is!
I think there really ought to be a check after doing the accounting if
we really want to be meticulous here.
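For reference, the pattern being described is roughly the following; this is
a simplified sketch with the check and the accounting helpers inlined, not
the literal fs/pipe.c code:

	/* Check: read the per-user counter and compare against the limit. */
	if (atomic_long_read(&pipe->user->pipe_bufs) + nr_pages >
	    pipe_user_pages_hard) {
		ret = -EPERM;
		goto out;
	}

	/*
	 * Window: a second pipe()/fcntl() caller can pass the same check
	 * here, before either caller has charged its pages.
	 */

	/* Accounting: charge the pages only after the check has passed. */
	atomic_long_add(nr_pages, &pipe->user->pipe_bufs);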
Let me confirm what I understand from your comment: because of the race,
a user could subvert the checks and allocate an arbitrary amount
of kernel memory for pipes. Right?
I'm not sure what you mean by "a check after doing the accounting". Isn't the
only solution here some kind of lock around the check+accounting steps?
Instead of doing an atomic_long_read() in the check plus an atomic_long_add()
for the accounting, we could do a single speculative atomic_long_add_return().
If that takes us above the limit, we lower the counter again with
atomic_long_sub() when aborting the operation; if it doesn't go above the
limit, we don't need to do anything.
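Roughly like this, sketched with only the soft limit and with the existing
helper bodies inlined (so not a literal patch; names follow the current
fs/pipe.c code):

	long user_bufs;

	/* Speculatively charge the pages before checking the limit. */
	user_bufs = atomic_long_add_return(nr_pages, &pipe->user->pipe_bufs);

	if (pipe_user_pages_soft && user_bufs > pipe_user_pages_soft &&
	    !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN)) {
		/* Went over the limit: back the speculative charge out. */
		atomic_long_sub(nr_pages, &pipe->user->pipe_bufs);
		ret = -EPERM;
		goto out;
	}
	/* Under the limit: the charge already stands, nothing else to do. */

That way the check and the accounting operate on the same value, so we don't
need a lock around the two steps.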
Vegard