Re: A few GFS newbie questions: journals, etc

>    node 1 accesses gfs mount/some_dir
>    node 2 accesses gfs mount/some_other_dir
>    node 3 accesses gfs mount/yet_some_other_dir
This won't completely solve the split-brain problem, for two reasons.

Firstly, the metadata for "some_dir" and "some_other_dir" is going to be
stored in the same place (the root directory of "mount") - which means that
access times, permission changes, and other metadata will quite happily be
cached independently within each half of the split brain.

The more serious problem is new/deleted files - since the two mini-clusters
will each think they are allocating disk space from a device they alone are
using, they will each start allocating from the same place (e.g. nodes 1-3
could use block offset 12345 for the new file /mount/some_dir/file1 at the
same time as nodes 4-6 use block offset 12345 for the new file
/mount/yet_some_other_dir/file2).

While each half is working from its cached data, you'll just get silent data
corruption (the contents of that block will be whatever was written last).
However, as soon as they go back to the disk to look things up they may
notice the cross-linked files, and either whinge or die.  And when you next
fsck, it'll split them apart - and that's the first time you'll know for
certain.
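
To make that concrete, here's roughly what the two halves end up doing at
the block level (the device name and offset below are invented for
illustration - in reality GFS issues these writes itself through its
allocator, not via dd):

    # On one half of the split (say node 1):
    dd if=/tmp/file1 of=/dev/shared_disk bs=4096 seek=12345 count=1

    # On the other half (say node 4), at about the same time:
    dd if=/tmp/file2 of=/dev/shared_disk bs=4096 seek=12345 count=1

    # Whichever write lands last is what both files' block pointers now
    # reference; the next fsck finds block 12345 cross-linked between the
    # two inodes.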

If you're just after sharing the space on the disk (and not sharing the data
within the partitions), then clvm may be the answer - the only thing you
won't be able to do without quorum (from memory) is resize the partitions.

Run ext3 on top of that for each node?
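
For what it's worth, a rough sketch of that layout (the device, VG and LV
names here are made up, and I'm going from memory on the flags):

    pvcreate /dev/sdb
    vgcreate -c y shared_vg /dev/sdb         # -c y marks the VG as clustered (clvm)
    lvcreate -L 100G -n node1_lv shared_vg   # one LV per node
    mkfs.ext3 /dev/shared_vg/node1_lv        # plain ext3, mounted only by node 1

Each node then mounts only its own LV, so there's no shared filesystem
metadata to corrupt - and, as above, the main thing you lose without quorum
is the ability to create or resize the LVs.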

--

Linux-cluster@xxxxxxxxxx
http://www.redhat.com/mailman/listinfo/linux-cluster
