Re: inquiry about limitation of file system




On 2018-11-03, Jonathan Billings <billings@xxxxxxxxxx> wrote:
>
> Now, filesystem limits aside, software that tries to read those directories with huge numbers of files is going to have performance issues. I/O operations, memory limitations and time are going to be bottlenecks to web operations.

Just to be pedantic, it's only reading the directory, as Jonathan
suggested, that would be a performance problem.  Typically, a web server
doesn't need to read the directory in order to retrieve a file and send
it back to a client, so serving individual files wouldn't necessarily be
a performance issue.  But having too many files in one directory would
impact other operations that might be important, like backups, finding
files, or most other bulk file operations, which in turn would affect
other processes like the web server.  (And if the web server is
generating directory listings on the fly, that would be a huge
performance problem.)
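To make the distinction concrete, here's a rough Python sketch (paths
are made up, and the exact cost depends on the filesystem and whether it
indexes directories):

    import os

    def serve_file(path):
        # Serving a known path: the filesystem looks up a single name,
        # which most modern filesystems index, so the number of
        # sibling entries barely matters.
        with open(path, "rb") as f:
            return f.read()

    def directory_listing(dirpath):
        # Generating a listing: every entry has to be read, so the
        # cost grows with the number of files in the directory.
        return sorted(entry.name for entry in os.scandir(dirpath))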

And as others have mentioned, this issue isn't filesystem-specific.
There are ways to work around some of these issues, but in general it's
better to avoid them in the first place.

The typical ways of working around this issue are storing the files in a
hashed directory tree, or storing them as blobs in a database.  There
are lots of tools to help with either approach.
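For example, a hashed directory tree can be as simple as this (Python,
purely illustrative; the SHA-1 choice and two-level fan-out are
arbitrary, and existing tools use their own layouts):

    import hashlib
    import os

    def hashed_path(root, name):
        # Spread files across root/ab/cd/ buckets so no single
        # directory ever holds more than a small fraction of them.
        digest = hashlib.sha1(name.encode("utf-8")).hexdigest()
        return os.path.join(root, digest[:2], digest[2:4], name)

    def store(root, name, data):
        path = hashed_path(root, name)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)

    def load(root, name):
        with open(hashed_path(root, name), "rb") as f:
            return f.read()

Lookups stay cheap because the path can be recomputed from the name
alone, without ever scanning a directory.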

--keith

-- 
kkeller@xxxxxxxxxxxxxxxxxxxxxxxxxx


_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos


