Re: mke2fs options for very large filesystems

On 5 Mar 2005, Markus Peuhkuri wrote:
> Markus Peuhkuri wrote:
>
>> fragmentation without unmounting fs (as it is not possible to bring
>> system down for any long period of time)?
>
> Thanks, Joseph, for pointing to filefrag (the system is running woody,
> which has an older e2fsprogs, but I found 1.35 on backports.org).
>
> However, I'm a bit unsure how to interpret the figures (the man page
> is quite short).  As I run filefrag over the system, I get on average
> 1000 extents (for those 500 MB files), while perfection seems to be 2
> to 5.  I think an "extent" is the count of "segments", i.e. the number
> of fragments a file is stored in on disk.  Thus the average size of a
> fragment is about half a megabyte, while the average fragment size of
> each file ranges from 100 kB to 1.5 MB.

An extent is a single contiguous run of disk sectors.  Since ext3 breaks
up the disk with inode tables, block groups, and other bits of
accounting stuff, a file that size can't simply run as one single
extent.
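
For what it's worth, here is how I would check a single file (the path
is just an example; filefrag uses the FIBMAP ioctl, so it needs to run
as root, and -v prints one line per extent):

    # summary only, e.g. "capture.dat: 1043 extents found"
    filefrag /archive/capture.dat

    # verbose: one line per extent, showing where the breaks fall
    filefrag -v /archive/capture.dat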

So, your files are spread across around a thousand extents when, in
theory, they could fit in no more than 2 to 5 on a perfectly clean
filesystem.

> Is that figure bad? 

...er, maybe?  Is it causing performance problems for you?  Do you need
to increase throughput, or anything like that?

If the answer to those questions isn't "yes", then that isn't bad.
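
If you do want a number, a crude sequential-read test will tell you
whether the fragmentation actually hurts (the path is only an example;
compare a fragmented file against a fresh copy on the same disk):

    time dd if=/archive/capture.dat of=/dev/null bs=1024k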

> Would it help to periodically clean the disk more?

Sure.  Reformat it occasionally, and you will get less fragmentation. :)
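
Less flippantly, the copy-off, recreate, copy-back cycle looks roughly
like this (the device name and mount points are hypothetical, and you
need somewhere to park the data in the meantime):

    rsync -a /archive/ /spare/archive/     # park the data elsewhere
    umount /archive
    mke2fs -j /dev/sdb1                    # recreate the ext3 filesystem
    mount /dev/sdb1 /archive
    rsync -a /spare/archive/ /archive/     # files come back freshly allocated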

[...]

> I also tested defragmenting Berkeley DB files using "cp -rp db db.new":
> one db improved quite a lot, but for the other it did not help much.

This works as long as suitable extents can be found;  they can't always.
The less full the filesystem is, the better this works. 
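
A slightly safer variant of that trick (names are hypothetical, and
nothing must be writing to the file while you do it) is to check that
the copy actually came out better before swapping it in:

    cp -p db db.new          # fresh copy, so the allocator picks new blocks
    filefrag db db.new       # compare extent counts, old versus new
    mv db.new db             # keep the copy only if it improved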

[...]

> Many disks I have are mainly for archive purposes, thus a file is
> stored there once (one at a time) and will stay there for as long as
> the disk works, only being read every now and then.  I think in that
> use the fragmentation does not matter a lot?

Correct.

> Maybe only the few last files are fragmented and have lower
> performance.

Files written later, after the disk is already mostly full, will tend to
have more fragmentation, because they have less chance of finding a
large enough contiguous chunk of disk.
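
If you want to check that theory, something like this will spot-check
the most recently written files (paths are examples, and plain xargs is
fine so long as the file names contain no whitespace):

    # report fragmentation of everything written in the last week
    find /archive -type f -mtime -7 | xargs filefrag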

Also, files that are written slowly tend toward fragmentation: the
kernel cannot predict the final size well, so it may place them in too
small an area initially.
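
If the final size is known in advance, one crude workaround on ext3 is
to allocate the whole file in a single pass first, then have the
application overwrite it in place (without truncating); the sizes and
names here are only illustrative:

    # allocate ~500 MB in one go, so the allocator can pick large extents
    dd if=/dev/zero of=capture.dat bs=1024k count=500
    filefrag capture.dat     # should show far fewer extents than a slow write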

Again, the "problem" part of your question depends on use:  unless you
actually need to improve performance, don't worry about fragmentation.

Regards,
        Daniel

-- 
Most American television stations reproduce all night long what only
a Roman could have seen in the Coliseum during the reign of Nero.
	-- George Faludy

_______________________________________________

Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users
