On Mon, Jul 29, 2024 at 11:16:02AM -0700, Linus Torvalds wrote:
> On Mon, 29 Jul 2024 at 10:59, Dmitry Torokhov <dmitry.torokhov@xxxxxxxxx> wrote:
> >
> > Can I write a gigabyte of data to disk? Terabyte? Is petabyte too much?
> > What if I don't have enough physical disk. Do we "fix" write() not to
> > take size_t length?
>
> Dmitry, that's *EXACTLY* what we did decades ago.

What exactly did you do? Limit the size of data userspace can request to
be written? What is the maximum allowed size then? Can I stick a warning
in the code to complain when it is "too big"?

> Your argument is bogus garbage. We do various arbitrary limits exactly
> to head off problems early.

So does this mean that we should disallow any and all allocations above
4k because they can potentially fail, depending on the system state? Or
maybe we should be resilient and fail gracefully instead?

It would help if you expanded on why exactly my argument is garbage and
what the problem is with recognizing that memory is a depletable resource
(like a lot of other resources, including storage) and that there is never
a "completely safe" amount that can be used, so trying to introduce one is
futile.

Thanks.

-- 
Dmitry