Re: gfs2 v. zfs?


 



On Wed, Jan 26, 2011 at 8:59 AM, Steven Whitehouse <swhiteho@xxxxxxxxxx> wrote:

> Nevertheless, I agree that it would be nice to be able to move the
> inodes around freely. I'm not sure that the cost of the required extra
> layer of indirection would be worth it though, in terms of the benefits
> gained.
>

If the cost is a possible performance hit, say it is y%. Then take the
difference between GFS2's performance numbers and the numbers from other
filesystems that users love to compare against, and assume it is x%.
Regardless of whether GFS2 is better or worse, what really matters is:
"does (x+y)% or (x-y)% make any difference?" and "what will this y%
buy?". If I had to guess, I would say x is close to 20 and y is close
to 3. So does "23 vs 20" or "17 vs 20" make a difference?
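To make the trade-off concrete, here is a minimal sketch (in Python, and
emphatically not GFS2 code; all names are hypothetical) of the extra layer
of indirection being discussed: a table mapping stable inode numbers to
movable disk locations. The per-access lookup is the "y%" cost; the payoff
is that relocating an inode becomes a table update instead of a renumbering.

```python
# Hypothetical sketch of an inode indirection table.
# Cost: one extra lookup on every inode access (the "y%").
# Benefit: inodes can be moved freely without changing their numbers.

class InodeMap:
    def __init__(self):
        self._table = {}              # inode number -> current disk block

    def add(self, ino, block):
        self._table[ino] = block

    def locate(self, ino):
        # The one extra level of indirection paid on each access.
        return self._table[ino]

    def relocate(self, ino, new_block):
        # Moving an inode (defrag, shrink, re-layout) is now just a
        # table update; the inode number the rest of the fs sees is stable.
        self._table[ino] = new_block

m = InodeMap()
m.add(42, block=9000)
m.relocate(42, new_block=128)         # e.g. packed low for a shrink
assert m.locate(42) == 128
```

The same stable-number-to-movable-location split is what would make a
shrink or defragmentation tool tractable.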

On the other hand, what can this "y" buy? An infrastructure to
shrink the filesystem (for users not on a thin-provisioned SAN), a better
backup strategy (snapshots have their catches), a straightforward
defragmentation tool, *AND* the possibility of grouping the scattered
inodes within a directory into a sensible on-disk layout, such that
each time a directory read is issued (e.g. by the "ls" command family), it
can give enough hints to the underlying SAN to trigger its own
readahead engine. Say you want to read the inodes in a huge directory,
but some of those inodes are out on other nodes with exclusive glocks
held. You can still read in the rest of the inodes, and the read
pattern may be good enough to trigger the readahead code within the
SAN. By the time those exclusive glocks start to sync their blocks,
the blocks are already in the SAN's cache. Many rounds of disk reads
(from the SAN's point of view) can be avoided. At the same time, if the
to-be-written inodes are close to each other in a reasonable layout, it
helps the SAN's writes as well.
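The idea above can be sketched in a few lines (illustrative Python only,
not GFS2 code; the function and data shapes are my invention): when
scanning a big directory, skip the inodes held under exclusive glocks
elsewhere and issue the remaining reads in ascending block order, which is
the kind of sequential stream a SAN's readahead engine can latch onto.

```python
# Hypothetical readahead planner for a directory scan.
# dir_inodes maps inode number -> disk block; exclusively_held is the
# set of inodes currently under exclusive glocks on other nodes.

def readahead_plan(dir_inodes, exclusively_held):
    # Collect the inodes we can read right now, keyed by disk block.
    available = [(blk, ino) for ino, blk in dir_inodes.items()
                 if ino not in exclusively_held]
    # Ascending block order: a mostly sequential pattern that a SAN's
    # own readahead logic is likely to recognize.
    available.sort()
    return [ino for _, ino in available]

inodes = {101: 5000, 102: 5001, 103: 9999, 104: 5002}
plan = readahead_plan(inodes, exclusively_held={103})
assert plan == [101, 102, 104]
```

With inodes 101, 102, and 104 laid out on contiguous blocks 5000-5002,
the scan reads them as one sequential run even though inode 103 has to
wait for its glock; by the time 103's node syncs, those blocks may already
sit in the SAN's cache.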

Something to think about ...

-- Wendy

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

