Not real confident in 3.3

I can, but here's the thing: this is a new volume with one client throwing data at it, and no underlying drive, network, or kernel issues. It concerns me that the filesystem could be compromised in less than 5 minutes!

Sean
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

Brian Candler <B.Candler at pobox.com> wrote:

On Sun, Jun 17, 2012 at 08:17:30AM -0400, Sean Fulton wrote:
> It's a replicated volume, but only one client was writing, with one
> process, to the cluster, so I don't understand how you could have a
> split brain.

The write has to be made to two places at once. From what I know (which is
not much), with the native client it's the client that is responsible for
writing to both replicas; with NFS I presume it's the gluster NFS server that
does it.
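
As a first sanity check (volume name below is made up, substitute your own),
it might be worth confirming the replica layout and that both bricks were
actually up while you were writing:

    gluster volume info myvol
    gluster volume status myvol

If one brick was down or unreachable for part of the run, the replicas will
diverge until self-heal catches up.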

> The other issue is that while making a tar of the
> static files on the replicated volume, I kept getting errors from
> tar that the file changed as we read it. This was content I had
> copied *to* the cluster, and only one client node was acting on it
> at a time, so there is no chance anyone or anything was updating the
> files. And this error was coming up every 6 to 10 files.

Hmm, sounds to me like it could be another symptom of replicas being out of
sync.
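
You could also ask gluster itself which files it thinks are out of sync or
split-brained (volume name is made up again; I believe 3.3 grew these heal
sub-commands, so check they exist on your build):

    gluster volume heal myvol info
    gluster volume heal myvol info split-brain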

Your underlying filesystem does have user_xattr support, I presume?
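
A quick way to check is to try setting a user xattr directly on the brick
filesystem (brick path here is made up):

    touch /bricks/brick1/xattr-test
    setfattr -n user.test -v works /bricks/brick1/xattr-test
    getfattr -n user.test /bricks/brick1/xattr-test
    rm /bricks/brick1/xattr-test

If setfattr comes back with "Operation not supported", the brick fs doesn't
have user_xattr enabled (on ext3/ext4 you'd remount with -o user_xattr).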

What if you run something across both bricks that shows the size and/or
sha1sum of each file, and compare the results? (Something mtree-like, or just
find . -type f | xargs sha1sum)
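
Something like this on each brick server, for instance (brick path is made
up; pruning .glusterfs keeps gluster's internal gfid links out of the
comparison, since 3.3 keeps that directory on every brick):

    cd /bricks/brick1
    find . -path ./.glusterfs -prune -o -type f -print0 \
        | sort -z | xargs -0 sha1sum > /tmp/brick1.sha1

Then pull both lists onto one machine and diff them:

    diff /tmp/brick1.sha1 /tmp/brick2.sha1

Any file whose checksum differs, or which only exists on one side, is a
candidate for the out-of-sync theory.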
