Andreas Dilger wrote:
> On May 25, 2006 14:44 -0700, Ric Wheeler wrote:
>> With both ext3 and with reiserfs, running a single large file system
>> translates into several practical limitations before we even hit the
>> existing size limits:
>> ....
>> I know that other file systems deal with scale better, but the question
>> is really how to move the mass of Linux users onto these large and
>> increasingly common storage devices in a way that handles these challenges.
>
> In a way, what you describe is Lustre - it aggregates multiple "smaller"
> filesystems into a single large filesystem from the application's POV
> (though in many cases the "smaller" filesystems are 2TB). It runs e2fsck
> in parallel if needed, has smart object allocation (clients do delayed
> allocation, can load-balance across storage targets, etc.), and can keep
> running with storage targets down.
>
> Cheers, Andreas
> --
> Andreas Dilger
> Principal Software Engineer
> Cluster File Systems, Inc.
The approach that Lustre takes here is great - distributed systems
typically treat subcomponent failures as a fact of life and handle
them better than many single-system designs...

The challenge remains on the "smaller" file systems that make up
Lustre - you can still spend a lot of time waiting for just one fsck to
finish ;-)
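To make the parallel-fsck point concrete, here is a minimal shell sketch of checking the component filesystems concurrently instead of serially. The device names are hypothetical, and the `echo` is a stand-in so the sketch is harmless to run; against real block devices you would invoke `e2fsck -p` instead:

```shell
# Check each backing device concurrently rather than one after another.
# Device names below are made up; replace the echo stand-in with a real
# "e2fsck -p" invocation when running against actual block devices.
for dev in /dev/sdb1 /dev/sdc1 /dev/sdd1; do
    echo "checking $dev" &          # stand-in for: e2fsck -p "$dev" &
done
wait                                # block until every check has finished
```

The total wall-clock time is then bounded by the slowest single check rather than the sum of all of them - which is exactly why one slow component fsck still dominates.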
ric
-
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html