On 6/12/20 9:16 AM, Jens Axboe wrote:
> On 6/11/20 8:23 PM, Bijan Mottahedeh wrote:
>> Long term, it makes sense to separate reporting and enforcing of pinned
>> memory usage.
>>
>> Signed-off-by: Bijan Mottahedeh <bijan.mottahedeh@xxxxxxxxxx>
>>
>> It is useful to view
>> ---
>>  fs/io_uring.c | 4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index 4248726..cf3acaa 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -7080,6 +7080,8 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
>>  static void io_unaccount_mem(struct user_struct *user, unsigned long nr_pages)
>>  {
>>  	atomic_long_sub(nr_pages, &user->locked_vm);
>> +	if (current->mm)
>> +		atomic_long_sub(nr_pages, &current->mm->pinned_vm);
>>  }
>>
>>  static int io_account_mem(struct user_struct *user, unsigned long nr_pages)
>> @@ -7096,6 +7098,8 @@ static int io_account_mem(struct user_struct *user, unsigned long nr_pages)
>>  		return -ENOMEM;
>>  	} while (atomic_long_cmpxchg(&user->locked_vm, cur_pages,
>>  					new_pages) != cur_pages);
>> +	if (current->mm)
>> +		atomic_long_add(nr_pages, &current->mm->pinned_vm);
>>
>>  	return 0;
>>  }
>
> current->mm should always be valid for these, so I think you can skip the
> checking of that and just make it unconditional.

Two other issues with this:

- It's an atomic64, so seems more appropriate to use the atomic64 helpers
  for this one.

- The unaccount could potentially be a different mm, if the ring is shared
  and one task sets it up while another tears it down. So we'd need
  something to ensure consistency here.

--
Jens Axboe
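
[Editor's note: for reference, a rough sketch of how both points could be
addressed together. It assumes struct io_ring_ctx gains a struct mm_struct
*mm_account member that holds a reference to the mm of the task that set
the ring up; the mm_account field and the helper names below are
illustrative, not part of the patch or reply above.]

/*
 * pinned_vm is an atomic64_t, so use the atomic64 helpers directly
 * instead of the atomic_long ones used for user->locked_vm.
 */
static void io_unaccount_pinned(struct io_ring_ctx *ctx,
				unsigned long nr_pages)
{
	if (ctx->mm_account)
		atomic64_sub(nr_pages, &ctx->mm_account->pinned_vm);
}

static void io_account_pinned(struct io_ring_ctx *ctx,
			      unsigned long nr_pages)
{
	if (ctx->mm_account)
		atomic64_add(nr_pages, &ctx->mm_account->pinned_vm);
}

/*
 * At ring setup: grab a reference to the setting-up task's mm so that
 * account and unaccount always hit the same mm, even if a different
 * task ends up tearing the ring down.
 */
static void io_init_mm_account(struct io_ring_ctx *ctx)
{
	mmgrab(current->mm);
	ctx->mm_account = current->mm;
}

/* At ring teardown: drop the reference taken at setup. */
static void io_free_mm_account(struct io_ring_ctx *ctx)
{
	if (ctx->mm_account) {
		mmdrop(ctx->mm_account);
		ctx->mm_account = NULL;
	}
}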