Re: files mapped funny? (related to online defragmentation)


 



On Mon, 2007-05-21 at 06:49 -0700, Eric wrote:
> Hi,
> 
> I'm getting strange results when I map out the blocks used in files
> larger than a several thousand KB. I never seem to get any more than
> 1024 contiguous data blocks in a row. 
> 
> Here's a portion of the output of my script when I run it on a 176MB
> file in my home directory:
> ...
> Contiguous chunk 67: 2385568 - 2385591  (24 blocks)
> Contiguous chunk 68: 2385608 - 2386448  (841 blocks)
> Contiguous chunk 69: 2386450 - 2387473  (1024 blocks)
> Contiguous chunk 70: 2387475 - 2388498  (1024 blocks)
> Contiguous chunk 71: 2388500 - 2389523  (1024 blocks)
> ...
>
> Maybe this is a bug in my script? Can anyone explain why this would
> happen?
> 
The filefrag command that comes with e2fsprogs will print a file's
fragmentation info. You could try "filefrag -v" and see whether its
output matches what your script reported.

Mingming

> I'm attaching my script in case other ext2/3/4 newbies can get any use
> out of it, and in case anyone needs to see it in order to answer my
> question. It's pretty self-explanatory, though.
> 
> Cheers,
> 
> Eric
> 

-
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
