On 2019-01-17 21:50, Jeff Moyer wrote:
Jens Axboe <axboe@xxxxxxxxx> writes:
On 1/17/19 1:09 PM, Jens Axboe wrote:
On 1/17/19 1:03 PM, Jeff Moyer wrote:
Jens Axboe <axboe@xxxxxxxxx> writes:
On 1/17/19 5:48 AM, Roman Penyaev wrote:
On 2019-01-16 18:49, Jens Axboe wrote:
[...]
+static int io_allocate_scq_urings(struct io_ring_ctx *ctx,
+ struct io_uring_params *p)
+{
+ struct io_sq_ring *sq_ring;
+ struct io_cq_ring *cq_ring;
+ size_t size;
+ int ret;
+
+ sq_ring = io_mem_alloc(struct_size(sq_ring, array,
p->sq_entries));
It seems that sq_entries, cq_entries are not limited at all. Can
nasty
app consume a lot of kernel pages calling io_setup_uring() from a
loop
passing random entries number? (or even better: decreasing entries
number,
in order to consume all pages orders with min number of loops).
Yes, that's an oversight, we should have a limit in place. I'll add
that.
Can we charge the ring memory to the RLIMIT_MEMLOCK as well? I'd
prefer
not to repeat the mistake of fs.aio-max-nr.
Sure, we can do that. With the ring limited in size (it's now 4k
entries
at most), the amount of memory gobbled up by that is much smaller
than
the fixed buffers. A max sized ring is about 256k of memory.
Per io_uring. Nothing prevents a user from calling io_uring_setup in a
loop and continuing to gobble up memory.
What if we set a sane limit for a uring instance (not for the whole
io_uring),
but allocate rings on mmap? Then greedy / nasty app will be killed by
oom.
--
Roman