Re: "Cannot allocate memory" on ring creation (not RLIMIT_MEMLOCK)


On 12/17/20 1:19 AM, Dmitry Kadashev wrote:
> Hi,
> 
> We've run into something that looks like a memory accounting problem
> in the kernel / io_uring code. We use multiple rings per process, and
> generally it works fine. Until it does not - new ring creation just
> fails with ENOMEM. And at that point it fails consistently until the
> box is rebooted.
> 
> More details: we use multiple rings per process; typically they are
> initialized on process start (not necessarily, but that is not
> important here, so let's just assume they all are). On a freshly
> booted box everything works fine. But after a
> while - and some process restarts - io_uring_queue_init() starts to
> fail with ENOMEM. Sometimes we see it fail, but then subsequent ones
> succeed (in the same process), but over time it gets worse, and
> eventually no ring can be initialized. And once that happens the only
> way to fix the problem is to restart the box. Most of the mentioned
> restarts are graceful: a new process is started and then the old one
> is killed, possibly with SIGKILL if it does not shut down in
> time. Things work fine for a while, but eventually we start getting
> those errors.
> 
> Originally we used the 5.6.6 kernel, but given that quite a few
> accounting issues were fixed in io_uring in 5.8, we tried 5.9.5 as
> well; the issue is still there.
> 
> Just in case: everything else seems to be working fine, our code just
> falls back to the thread pool instead of io_uring, and everything
> continues to work.
> 
> I was not able to spot anything suspicious in /proc/meminfo. We
> have RLIMIT_MEMLOCK set to infinity, and on a box that currently
> experiences the problem /proc/meminfo shows just 24MB as locked.
> 
> Any pointers on how we can debug this?

I've read through this thread, but haven't had time to really debug it
yet. I did try a few test cases, and wasn't able to trigger anything.
The signal part is interesting, as it could potentially cause parallel
teardowns. I did post a patch for that yesterday, where I spotted a
race in the user mm accounting. I don't think it's related to this
one, but it would still be useful if you could test with it applied:

https://lore.kernel.org/io-uring/20201217152105.693264-3-axboe@xxxxxxxxx/T/#u

just in case...

-- 
Jens Axboe



