Re: Ext2/3 fs and defragmentation




Quoting Jordi Espasa Clofent <jordi.listas@xxxxxxxxxxxx>:

Because of that I've used several fs tools, and my surprise was the
e2defrag utility. Until now I thought the ext2/3 fs had no defrag
problem, but I've used defrag tools on the root partition because it
was really fragmented.

What do you know about this kind of "trouble"?

I suppose fragmentation in ext2/3 fs is less of a problem than in Windows
file systems (FAT32, NTFS), but I'm not sure.

Normally, there's little need to defragment an ext2/3 file system. Of course, there are always cases where the usage pattern of the file system will cause a lot of fragmentation, especially if you run your file systems 99% full (when free space is that limited, there's not much the file system can do to prevent excessive fragmentation).
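
If you want to check how fragmented an ext2/3 file system really is, the stock e2fsprogs tools are enough; a rough sketch (device and file names are only examples, and e2fsck is best run on an unmounted file system):

    # forced, read-only check; the summary line reports the non-contiguous percentage
    e2fsck -fn /dev/sda1
    # ends with something like: /dev/sda1: .../... files (1.8% non-contiguous), ...

    # per-file view: how many extents a single file occupies
    filefrag /var/log/messages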

The standard Unix file system model (on which ext2/3 was built) was designed to keep fragmentation under control. In fact, when fsck reports a few percent of file fragmentation, that fragmentation is there intentionally: very much simplified, Unix file systems will deliberately fragment large files, introducing some small fragmentation, because doing so prevents bigger fragmentation in the long run and keeps large files from clogging parts of the disk and causing long seeks.

On some Unix flavours (for example Solaris) you even have control over this "intentional" fragmentation via the tunefs utility, so you can optimize a file system either for storing a few huge files (by allowing them to use all the space in a cylinder group), or for countless small files (by limiting the amount of space a single file can allocate from a cylinder group before it is forced to allocate space from the next one).
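
If you're curious what that knob looks like in practice, on Solaris it's the maxbpg parameter (maximum blocks per file per cylinder group), set with tunefs -e; the device path and values below are purely illustrative:

    # raise maxbpg so a single huge file may claim more of each cylinder group
    tunefs -e 4096 /dev/rdsk/c0t0d0s6

    # or lower it so many small files share each cylinder group
    tunefs -e 256 /dev/rdsk/c0t0d0s6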

As Tom replied already, e2defrag is rather dangerous to use with more recent versions of ext2/3 file systems. And even if it weren't, there are cases where total defragmentation of an ext2 or ext3 file system would hurt performance. In short, doing so would allow large files to clog parts of the file system, resulting in long seek times for the small(er) files that live in the same directory as the big file. Historically, Unix file systems try to allocate space for files in the same cylinder group in which their directory entry lives, in order to avoid expensive long disk seeks. If you allow a single file to eat all the available space in a cylinder group (by not fragmenting it), accessing all the other files in that same directory will be slow (and you're not going to gain much performance by having a 100% contiguous huge file anyhow).
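
You can get a feel for this layout on ext2/3 with dumpe2fs (also from e2fsprogs): it lists every block group (the ext2/3 equivalent of a cylinder group) along with its free block and inode counts, and files generally land in the same group as their directory. Device name again just an example:

    dumpe2fs /dev/sda1 | less
    # look for sections like:
    #   Group 12: (Blocks 393216-425983)
    #     Free blocks: ...
    #     Free inodes: ...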


_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos

