Re: [PATCH v5 00/22] Readdir enhancements

On Thu, 2020-11-12 at 17:34 +0200, Guy Keren wrote:
> Just a general question: since the cache seems to cause many problems
> when dealing with very large directories, and since none of the
> solutions proposed so far fully solves them, wouldn't an approach such
> as "if the directory entry count exceeds X, stop using the cache
> completely", where X is proportional to the directory entry cache
> size limit, make the code simpler and less prone to bugs of this
> sort?
> 
> I *think* we can accept that for a directory with millions of files
> we won't get efficient caching on the client side while limiting
> ourselves to reasonable RAM consumption?
> 

Again, I disagree.

If you have a mostly-read directory with millions of files (e.g. data
pool) and lots of processes searching, then caching is both useful and
appropriate.

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@xxxxxxxxxxxxxxx
