On Thu, 15 Feb 2007 14:46:34 -0500, Jan Engelhardt
<jengelh@xxxxxxxxxxxxxxx> wrote:

> On Feb 15 2007 21:38, Andi Kleen wrote:
>> Also I would expect your design to be slow for metadata-read-intensive
>> workloads. E.g., have you tried to boot from a root partition with
>> DualFS? That's a very important I/O benchmark for desktop Linux systems.
>
> Did someone say metadata intensive? Try kernel tarballs; they're a
> perfect workload: tons of directories, and even more files here and
> there. Works wonders.
>
> Jan

I just did, per your request. To make things more relevant, I created a
file structure from the 2.4.19 kernel sources and replicated it
recursively into the deepest directory level (10) four times, ending up
with 7280 directories, 40 levels of directory depth, and a 1 GB data
set. I ran both tar and untar operations on the tree for ext3,
reiserfs, JFS, and DualFS, remounting the FS before each test. Both the
tar file and the directory tree were on the FS under test.
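
In case anyone wants to reproduce the setup, here is a rough Python
sketch of what I did (not my actual script: the SRC/MNT paths, the
directory-grafting scheme, and the remount-as-cache-flush step are all
approximations of the description above):

#!/usr/bin/env python3
# Sketch of the benchmark setup -- placeholders, not the exact script.
# Run as root so the remount works; point SRC/MNT at your own setup.
import os, shutil, subprocess, time

SRC = "/usr/src/linux-2.4.19"   # where the unpacked kernel sources live
MNT = "/mnt/test"               # mount point of the FS under test
TREE = os.path.join(MNT, "tree")
TARBALL = os.path.join(MNT, "tree.tar")

def remount():
    # Remount so each timed run starts with cold caches.
    subprocess.check_call(["mount", "-o", "remount", MNT])

def timed(cmd):
    # Wall-clock seconds for one command.
    t0 = time.time()
    subprocess.check_call(cmd)
    return time.time() - t0

def deepest_dir(root):
    # The deepest directory in the tree (the kernel sources bottom
    # out around level 10).
    return max((d for d, _, _ in os.walk(root)),
               key=lambda d: d.count(os.sep))

# Build the data set: one copy of the sources, then graft a fresh copy
# onto the deepest directory of the tree four times.  The exact
# replication scheme is guesswork from the description above.
shutil.copytree(SRC, TREE)
for _ in range(4):
    shutil.copytree(SRC, os.path.join(deepest_dir(TREE), "linux"))

# tar, then untar, remounting before each timed run.
remount()
t_tar = timed(["tar", "cf", TARBALL, "-C", MNT, "tree"])
shutil.rmtree(TREE)
remount()
t_untar = timed(["tar", "xf", TARBALL, "-C", MNT])
print("tar: %3.0f s   untar: %3.0f s" % (t_tar, t_untar))

The remount before each timed run is there so every filesystem starts
with cold caches and the numbers stay comparable.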

Here are the results, elapsed time in seconds:

              tar    untar
  ext3:       144      143
  reiserfs:   100      100
  JFS:        196      140
  DualFS:      63       54

Hope this helps.

/Sorin

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/