Re: Fastest filesystem on linux


 





I have tried this... it works fine, but since I have a SCSI disk it makes almost no difference (at least not with ext2). I have now come to the problem of how many files a directory can hold before it gets too heavy to read (since I do lots of read/write operations). My application can generate more than 200000 files in the same directory; if this gets heavy for read/write, I could try to spread the files across various subdirectories...


How big are these files? If you compare the existing filesystems, you will find that
ReiserFS is the best at handling directories that contain a large number of small
files. XFS also handles this better than ext2/ext3, but ReiserFS is the strongest in
this kind of situation. It also depends on the system call; for example, recursive
unlink() is known to be slow on XFS but very fast on ReiserFS.


http://www.informatik.uni-frankfurt.de/~loizides/reiserfs/oldpage/reiser-vs-xfs.html

XFS
 ---- Few  big   files = HIGH performance
 ---- Many small files = LOW  performance

ReiserFS
 ---- Many small files = HIGH performance
 ---- Few  big   files = LOW  performance
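
If you want to check this on your own hardware, below is a minimal sketch of a
many-small-files benchmark. The file count, file names and payload are my own
assumptions for illustration, not figures from this thread; it simply creates
NFILES small files in one directory and then unlinks them, timing both phases,
so the same binary can be run on ext2/ext3, XFS and ReiserFS partitions and the
numbers compared.

/*
 * Build: gcc -O2 -o smallfiles smallfiles.c   (add -lrt on older glibc)
 * Run:   ./smallfiles /mnt/testfs/bench
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <sys/stat.h>
#include <sys/types.h>

#define NFILES  200000
#define PAYLOAD "small file payload\n"

static double now(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
	const char *dir = (argc > 1) ? argv[1] : "bench.dir";
	char path[4096];
	double t0, t1, t2;
	int i, fd;

	mkdir(dir, 0755);			/* EEXIST is fine */

	t0 = now();
	for (i = 0; i < NFILES; i++) {		/* create phase */
		snprintf(path, sizeof(path), "%s/f%06d", dir, i);
		fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
		if (fd < 0) {
			perror(path);
			return 1;
		}
		if (write(fd, PAYLOAD, strlen(PAYLOAD)) < 0)
			perror("write");
		close(fd);
	}
	t1 = now();
	for (i = 0; i < NFILES; i++) {		/* unlink phase */
		snprintf(path, sizeof(path), "%s/f%06d", dir, i);
		unlink(path);
	}
	t2 = now();

	printf("create: %.2f s   unlink: %.2f s\n", t1 - t0, t2 - t1);
	return 0;
}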

A ReiserFS partition, combined with a moderate level of subdirectory splitting,
should be a good solution for your problem.
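
If you do decide to split the directory, here is a minimal sketch of one way to
do it. The two-level layout, the 256 buckets and the djb2-style hash are my own
choices for illustration, nothing from this thread; a cheap hash of the file
name picks one of 256 bucket directories, so 200000 files end up at roughly 780
files per directory.

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Cheap, stable string hash picking one of 256 buckets. */
static unsigned int bucket_of(const char *name)
{
	unsigned int h = 5381;

	while (*name)
		h = h * 33 + (unsigned char)*name++;
	return h % 256;
}

/* Build "<base>/<bucket>/<name>" and make sure the bucket dir exists. */
static void sharded_path(const char *base, const char *name,
			 char *out, size_t outlen)
{
	char dir[4096];

	snprintf(dir, sizeof(dir), "%s/%02x", base, bucket_of(name));
	mkdir(dir, 0755);			/* EEXIST is fine */
	snprintf(out, outlen, "%s/%s", dir, name);
}

int main(void)
{
	char path[4096];

	sharded_path("data", "record-000123.dat", path, sizeof(path));
	printf("%s\n", path);	/* prints data/<two hex digits>/record-000123.dat */
	return 0;
}

The only real cost is that the application has to go through sharded_path() (or
something equivalent) whenever it creates or opens a file; directory scans then
walk 256 small directories instead of one huge one.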


Regards

Suneesh





