Big filesystem, fsck and superblock problem

Hello,

 

I will be using more than 1 TB of disk space for our mail server; in fact
it will approach or pass 2 TB. I know that ext3 supports 8 TB on RHEL 4,
but such a big filesystem worries me. Ext3 is journaled, but if something
goes wrong and fsck has to run on a 2 TB (or larger) partition or volume,
it will take a very long time, especially since the application is a mail
server with a great many small files.
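As far as I understand, the routine mount-count and time-interval checks
can at least be disabled so that fsck only runs after a real failure (a
small sketch; /dev/sdb1 is just a placeholder for the mail volume):

    # disable periodic forced checks on the ext3 volume
    tune2fs -c 0 -i 0 /dev/sdb1
    # verify the new settings
    tune2fs -l /dev/sdb1 | grep -i -e 'mount count' -e 'check'

But of course that does nothing for the fsck that is forced after a crash,
which is the case I am really worried about.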

 

                Maybe there is a way to move and symlink some of the data
to another partition instead of growing one big partition, but what I am
really looking for is a way to keep the filesystem consistent at all times
and, after any failure, to get the partition mounted again in minimal
time. Ext3, XFS, JFS and ReiserFS are all journaling filesystems, and I do
not think any of them is clearly better than the others in a situation
like this; I also know that on Linux I cannot run a background fsck on a
mounted filesystem the way UFS2 does on FreeBSD.
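The move-and-symlink idea would look roughly like this (just a sketch; the
device name and paths are examples):

    # move part of the spool to a second filesystem and
    # leave a symlink behind so the mail server still finds it
    mkdir -p /mail2
    mount /dev/sdc1 /mail2
    mv /var/spool/mail/old /mail2/old
    ln -s /mail2/old /var/spool/mail/old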

 

                My questions are:

 

                Is there any way to divide the disk or storage into many
small partitions, collect those partitions together into one big space,
and then, if a failure hits one of the partitions, run fsck on that
partition alone? (See the sketch below.)
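What I have in mind is something like splitting the spool over several
smaller filesystems under one directory tree, for example with LVM (a
sketch; the volume group name, sizes and mount points are made up):

    # one logical volume and filesystem per spool subtree, so a
    # crash should only force an fsck of the affected volume
    lvcreate -L 500G -n lv_mail1 vg_mail
    lvcreate -L 500G -n lv_mail2 vg_mail
    mkfs.ext3 /dev/vg_mail/lv_mail1
    mkfs.ext3 /dev/vg_mail/lv_mail2
    mkdir -p /var/spool/mail/a-m /var/spool/mail/n-z
    mount /dev/vg_mail/lv_mail1 /var/spool/mail/a-m
    mount /dev/vg_mail/lv_mail2 /var/spool/mail/n-z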

 

                Is there any way to talk to the journaling layer and
manually force everything it has logged to be written out to disk? Maybe
that way I could put something in cron and sync the logged data to disk
myself? (A sketch of what I mean follows.)
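Something like the following is what I am imagining, though I am not sure
it is the right approach (ext3's commit interval already defaults to 5
seconds; the value and paths below are only examples):

    # remount with an explicit journal commit interval
    mount -o remount,commit=5 /var/spool/mail

    # crontab entry: flush dirty buffers to disk every minute
    * * * * * /bin/sync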

 

                Or could using UFS be the best option?

 

Thanks 

Vahric

 

                

-- 
redhat-list mailing list
unsubscribe mailto:redhat-list-request@xxxxxxxxxx?subject=unsubscribe
https://www.redhat.com/mailman/listinfo/redhat-list

