On Wed, Nov 23, 2022 at 09:36:15AM +0100, Lukas Czerner wrote:
> On Tue, Nov 22, 2022 at 11:58:33PM -0800, Christoph Hellwig wrote:
> > On Tue, Nov 22, 2022 at 03:21:17PM +0100, Lukas Czerner wrote:
> > > > That seems like a good idea for memory usage, but I think this might
> > > > also make the code much simpler, as that just requires fairly trivial
> > > > quota_read and quota_write methods in the shmem code instead of new
> > > > support for an in-memory quota file.
> > >
> > > You mean like the implementation in the v1?
> >
> > Having now found it: yes.
>
> Jan,
>
> do you have any argument for this, since it was your suggestion?
>
> I also think that the implementation is much simpler with in-memory
> dquots because we will avoid all the hassle of creating and
> maintaining a quota file in the proper format. It's not just reads and
> writes, it's the entire machinery behind it in quota_v2.c and
> quota_tree.c.
>
> But it is true that even with only user-modified dquots being
> non-reclaimable until unmount, it could theoretically represent a
> substantial memory consumption. Although I do wonder if this problem
> is even real. How many user/group ids would you expect an extremely
> heavy quota user to have limits set for? 1k, 10k, a million, or even
> more? Do you know?
>

I don't know this code well enough to have a strong opinion on the v1
vs. v2 approach in general, but FWIW it does seem to me that the
benefit of v1 from a memory savings perspective is perhaps overstated.
AFAICT, tmpfs already pins inodes/dentries (notably larger than
dquots) in-core for the lifetime of the inode, so it's not like we'll
be saving much memory from dquots that are actually in use. I think
this dquot memory should be limited indirectly by the max inode
restriction as well. That means the potential wastage is measured in
dquots that are no longer referenced, but have previously had a
non-default quota limit set by the admin, right?
Even with the v1 approach, I don't think it's wise to just push such
otherwise unused dquots into swap space indefinitely. Perhaps a
reasonable approach to the memory usage issue is to just cap the
number of dquots that are allowed to have custom limits on tmpfs?
E.g., to echo Lukas above: if there were a cap of something like
512-1k custom quota limits, would that really be a problem for quota
users on tmpfs? Other users would still be covered by the default
mount-time limits. Of course, you could always make such a cap
flexible as a percentage of tmpfs size, or configurable via a mount
option, etc. Just a thought.

Brian

> -Lukas
>
>
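For illustration, the cap idea above could look something like the
following userspace-testable sketch. To be clear, this is not from any
posted patch: the function and constant names
(shmem_reserve_custom_dquot, SHMEM_MAX_CUSTOM_DQUOTS) and the plain
counter are all hypothetical; a real kernel implementation would need
proper locking or an atomic counter, and would hang off the shmem
superblock info rather than a global.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical cap on the number of tmpfs dquots carrying
 * admin-set (non-default) limits; the value is illustrative. */
#define SHMEM_MAX_CUSTOM_DQUOTS 512

static unsigned int shmem_custom_dquot_count;

/*
 * Would be called when an admin sets a non-default limit on a dquot
 * that previously used the mount-time defaults.  Returns false when
 * the cap is already reached (the caller would then fail the quota
 * change with something like -ENOSPC).
 */
static bool shmem_reserve_custom_dquot(void)
{
	if (shmem_custom_dquot_count >= SHMEM_MAX_CUSTOM_DQUOTS)
		return false;
	shmem_custom_dquot_count++;
	return true;
}

/*
 * Would be called when a custom-limit dquot reverts to the defaults
 * or is freed, releasing its slot under the cap.
 */
static void shmem_release_custom_dquot(void)
{
	shmem_custom_dquot_count--;
}
```

With this shape, dquots that only ever use the mount-time defaults
never consume a slot, so the bounded resource is exactly the set of
"admin touched this id" dquots discussed above.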