
RE: Recurrent crashes and warnings: "Your cache is running out of filedescriptors"


> > Perhaps you are running out of inodes?
> >
> > "df -i" should give you what you are looking for.
>
>
> Well done. df indeed reports that I am out of inodes (100% used).
> I've seen that a Sarg daily report contains about 170,000 files. I am
> starting to tar.gz them.
>
> Thank you very much Jenny.
>
>
> Leonardo
 

Glad this is solved. You could actually raise the inode maximum as well (if I recall correctly, it was set to double or triple the /proc/sys/fs/file-max setting).
 
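For reference, here is the quick check plus the heavier fix. On ext2/3/4 the inode count is fixed when the filesystem is created, so getting more inodes means re-creating the filesystem with a denser inode table. A rough sketch (the device path is just an example):

    # show inode usage per mounted filesystem
    df -i

    # re-create the filesystem with one inode per 4 KiB of space
    # (this WIPES the partition -- back up first!)
    mkfs.ext4 -i 4096 /dev/sdb1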
However, 170,000 files in a single directory on a mechanical drive will make things awfully slow.
 
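If you need to find out which directories have ballooned, something along these lines works (the starting path is just an example):

    # count entries per directory and list the worst offenders
    find /var/www/sarg -xdev -type d -exec sh -c \
        'printf "%s\t%s\n" "$(ls -A "$1" | wc -l)" "$1"' _ {} \; \
        | sort -rn | head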
Also, ext4 is preferable, since deletes are done in the background. In our tests on an SSD, deleting 1 million files took 9 minutes on ext3 but only about 7 seconds on ext4.
 
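If you want to reproduce that benchmark yourself, a crude version is simply (paths are examples):

    # create a million empty files, then time their removal
    mkdir -p /mnt/test/many && cd /mnt/test/many
    seq 1 1000000 | xargs touch
    cd .. && time rm -rf many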
Whenever we need to deal with a large number of files (sometimes to the tune of 100 million), we move them to an SSD with ext4 and do the work there. And yes, the moving part is also very painful unless the files were already tarred :)
 
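For the archiving itself, packing one daily report and removing the originals only on success looks roughly like this (paths and names are examples):

    # pack a day's report into one tarball; delete originals only on success
    tar -czf sarg-daily.tar.gz -C /var/www/sarg daily-report \
        && rm -rf /var/www/sarg/daily-report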
Let me give you an example. Processing 1 million files in a single directory (read, write, split into directories, archive; the split step is sketched below):
 
HDD: 6 days
SSD: 4 hours
 
Jenny