Re: 2GB memory limit running fsck on a +6TB device

Theodore Tso wrote:
> hmm.....  can you send me the output of dumpe2fs /dev/sdXX?  You can
> run that command while e2fsck is running, since it's read-only.  I'm
> curious exactly how big the filesystem is, and how many directories
> are in the first part of the filesystem.
Oops... dumpe2fs takes about 3 minutes to complete and generates an output file of about 133MB:

dumpe2fs 1.40.8 (13-Mar-2008)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          7701b70e-f776-417b-bf31-3693dba56f86
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features: has_journal dir_index filetype sparse_super large_file
Default mount options:    (none)
Filesystem state:         clean with errors
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              792576000
Block count:              1585146848
Reserved block count:     0
Free blocks:              913341561
Free inodes:              678201512
First block:              0
Block size:               4096
Fragment size:            4096
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         16384
Inode blocks per group:   512
Filesystem created:       Mon Nov 13 10:12:49 2006
Last mount time:          Mon Jun  9 19:37:12 2008
Last write time:          Tue Jun 10 12:18:25 2008
Mount count:              37
Maximum mount count:      -1
Last checked:             Mon Nov 13 10:12:49 2006
Check interval:           0 (<none>)
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
Default directory hash:   tea
Directory Hash Seed:      afabe3f6-4405-44f4-934b-76c23945db7b
Journal backup:           inode blocks
Journal size:             32M

Some example output for groups 0 to 5 is available at:

http://pastebin.com/f5341d121
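
(As an aside: if only the superblock summary is wanted rather than the full 133MB per-group dump, dumpe2fs can be limited to the header; a sketch, with /dev/sdXX standing in for the real device:)

   # dumpe2fs -h /dev/sdXX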

> How big is the filesystem(s) that you are backing up via BackupPC, in
> terms of size (megabytes) and files (number of inodes)?  And how many
> days of incremental backups are you keeping?  Also, how often do files
> change?  Can you give a rough estimate of how many files get modified
> per backup cycle?

We are backing up several servers, about 15 in this case, each with 60-80GB of data and over 2-3 million inodes to back up, keeping 15 days of incrementals. I think about 2-3% of the files change each day, but I will ask the backup administrator for more details.
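
Roughly, that works out to the following per full pass (my own estimate, assuming ~70GB and ~2.5 million inodes per server):

   # echo $((15 * 70)) GB
   1050 GB
   # echo $((15 * 2500000)) inodes
   37500000 inodes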

I have found an old doc with some build info for this server; the partition was formatted with:

   # mkfs.ext3 -b 4096 -j -m 0 -O dir_index /dev/sda4
   # tune2fs -c 0 -i 0 /dev/sda4
   # mount -o data=writeback,noatime,nodiratime,commit=60 /dev/sda4 /backup
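
Incidentally, those tune2fs settings match the "Maximum mount count: -1" and "Check interval: 0 (<none>)" values in the dumpe2fs header above, so periodic checks are disabled; they can be re-checked with something like (the grep pattern is just illustrative):

   # tune2fs -l /dev/sda4 | grep -Ei 'mount count|check'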

I'm going to gather more info about BackupPC and the backup cycles. Thanks, Ted!!

Regards,

--
Santi Saez

_______________________________________________
Ext3-users mailing list
Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users
