Re: Inode limitation for overlayfs

 ---- On Thursday, 2020-03-26 15:34:13 Amir Goldstein <amir73il@xxxxxxxxx> wrote ----
 > On Thu, Mar 26, 2020 at 7:45 AM Chengguang Xu <cgxu519@xxxxxxxxxxxx> wrote:
 > >
 > > Hello,
 > >
 > > In the container use case, in order to prevent inode exhaustion on the host file system by particular containers, we would like to add an inode limitation for containers.
 > > However, the current solution for inode limitation is based on project quota in the specific underlying filesystem, so it also counts deleted files (char type whiteout files) in overlayfs's upper layer.
 > > Even worse, users may delete some lower layer files hoping to get more usable free inodes, but the result will be the opposite (consuming more inodes).
 > >
 > > This is somewhat different from the disk size limitation for overlayfs, so I think maybe we can add a limit option just for new files in overlayfs. What do you think?
 > 
 > The questions are where do we store the accounting and how do we maintain them.
 > An answer to those questions could be - in the inode index:
 > 
 > Currently, with nfs_export=on, there is already an index dir containing:
 > - 1 hardlink per copied up non-dir inode
 > - 1 directory per copied-up directory
 > - 1 whiteout per whiteout in upperdir (not a hardlink)
 > 

Hi Amir,

Thanks for the quick response and the detailed information.

I think the simplest way is to just store the accounting info in memory (maybe in s_fs_info).
At first I only thought of doing it for the container use case; for containers it would be
enough, because the upper layer is always empty at start time and will be destroyed
at end time.
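
A rough sketch of what I have in mind, just to illustrate (the struct field and
helper names below are made up, not existing overlayfs code):

/*
 * Hypothetical in-memory accounting kept in ovl_fs (s_fs_info).
 * Names are illustrative only.
 */
struct ovl_fs {
	/* ... existing fields ... */
	unsigned long nr_upper_inodes;    /* new objects created in upperdir */
	unsigned long upper_inodes_limit; /* from a mount option, 0 == unlimited */
	spinlock_t upper_inodes_lock;
};

/* Called before creating a new object in upperdir (not on copy-up). */
static int ovl_account_upper_inode(struct ovl_fs *ofs)
{
	int err = 0;

	spin_lock(&ofs->upper_inodes_lock);
	if (ofs->upper_inodes_limit &&
	    ofs->nr_upper_inodes >= ofs->upper_inodes_limit)
		err = -ENOSPC;
	else
		ofs->nr_upper_inodes++;
	spin_unlock(&ofs->upper_inodes_lock);

	return err;
}

/* Called when a pure upper object is removed. */
static void ovl_unaccount_upper_inode(struct ovl_fs *ofs)
{
	spin_lock(&ofs->upper_inodes_lock);
	if (ofs->nr_upper_inodes > 0)
		ofs->nr_upper_inodes--;
	spin_unlock(&ofs->upper_inodes_lock);
}

Since the upper layer starts out empty in the container case, the counter can
simply start from 0 at mount time.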

Adding meta info to the index dir is a better solution for the general use case, but it seems
more complicated, and I'm not sure whether other use cases are concerned with this problem.
Suggestions?


 > We can also make this behavior independent of nfs_export feature.
 > In the past, I proposed the option index=all for this behavior.
 > 
 > On mount, in ovl_indexdir_cleanup(), the index entries for file/dir/whiteout
 > can be counted and then maintained on index add/remove.
 > 
 > Now if you combine that with project quotas on upper/work dir, you get:
 > <Total upper/work inodes> = <pure upper inodes> + <non-dir index count> +
 >                             2*<dir index count> + 2*<whiteout index count>
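
If I read that formula correctly, with purely made-up numbers: if project quota
reports 1000 inodes used under upper/work and the index dir holds 100 non-dir
entries, 20 dir entries and 30 whiteout entries, that gives
1000 - 100 - 2*20 - 2*30 = 800 pure upper inodes.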

I'm not clear on the exact relationship between those indexes and nfs_export,
but if possible I hope to have separate switches for each index function and a total
switch (index=all) to enable all index functions at the same time.

 > 
 > Assuming that you know the total from project quotas and the index counts
 > from overlayfs, you can calculate total pure upper.
 > 
 > Now you *can* implement upper inodes quota within overlayfs, but you
 > can also do that without changing overlayfs at all assuming you can
 > allow some slack in quota enforcement -
 > periodically scan the index dir and adjust project quota limits.
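
If I understand the suggestion, the management side would do roughly something
like the sketch below (the index dir path, the whiteout check and the xfs_quota
invocation are just my guesses, not a tested tool):

/*
 * Count index entries and derive the inode overhead of non-pure-upper
 * objects, which a periodic job could add on top of the desired pure
 * upper inode limit before setting the project quota.
 */
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

int main(int argc, char **argv)
{
	const char *indexdir = argc > 1 ? argv[1] : "/path/to/work/index";
	unsigned long ndirs = 0, nnondirs = 0, nwhiteouts = 0;
	struct dirent *de;
	DIR *dir = opendir(indexdir);

	if (!dir) {
		perror("opendir");
		return 1;
	}

	while ((de = readdir(dir)) != NULL) {
		struct stat st;

		if (!strcmp(de->d_name, ".") || !strcmp(de->d_name, ".."))
			continue;
		if (fstatat(dirfd(dir), de->d_name, &st, AT_SYMLINK_NOFOLLOW))
			continue;
		if (S_ISDIR(st.st_mode))
			ndirs++;
		else if (S_ISCHR(st.st_mode) && st.st_rdev == makedev(0, 0))
			nwhiteouts++;	/* whiteout: char device 0/0 */
		else
			nnondirs++;	/* hardlink index of a copied-up non-dir */
	}
	closedir(dir);

	/* inodes consumed by non-pure-upper objects, per the formula above */
	unsigned long overhead = nnondirs + 2 * ndirs + 2 * nwhiteouts;

	printf("index overhead: %lu inodes\n", overhead);
	/*
	 * A periodic job could then run something like:
	 *   xfs_quota -x -c "limit -p ihard=<pure limit + overhead> <proj>" <mnt>
	 */
	return 0;
}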

Dynamically changing the inode limit looks too complicated to implement in the management
system, and having different quota limits during the lifetime of the same container may
confuse sysadmins. So I still hope to solve this problem at the overlayfs layer.

 > 
 > Note that if inodes are really expensive on your system, index_all
 > wastes 1 inode per whiteout + 1 inode per copied up dir, but those
 > counts should be pretty small compared to number of pure upper inodes
 > and copied up files.
 > 


Thanks,
cgxu.




