Re: Gluster Recovery

Hi Krishna,

Many thanks for your answer.

My concern, though, is the following:

Two separate clients are identically configured to use AFR across two identically configured servers, as follows:
             Server1
           /         \
Client1 ---           ---Client2
           \         /
             Server2

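Each client mirrors its writes to both servers through the AFR translator.  The client-side spec I have in mind is roughly the following on both clients (the hostnames and the "brick" subvolume name are just placeholders for the real configuration):

    # one protocol/client volume per server
    volume server1
      type protocol/client
      option transport-type tcp/client
      option remote-host server1
      option remote-subvolume brick
    end-volume

    volume server2
      type protocol/client
      option transport-type tcp/client
      option remote-host server2
      option remote-subvolume brick
    end-volume

    # mirror every write to both servers
    volume mirror
      type cluster/afr
      subvolumes server1 server2
    end-volume
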
Client1 puts "hello.c" onto both Server1 and Server2 via AFR.  Client2 then changes hello.c in some way.
Server1 then goes down with its data lost and no chance of recovery, and is replaced by Server3, a brand-new server with fresh disks.
In this case, how does the data get reconstructed from the clients' side, given that you mentioned the automatic recovery will happen on the glusterfs side?  Client1 may believe hello.c is something different from what Client2 believes.  Which client is responsible for reconstructing the data?  Will the journaling of the remaining server be used to reconstruct the data on the new server?
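
Until the automatic recovery is in place, I assume the manual route would be the rsync approach you mentioned: re-seed Server3 from the surviving server's backend before bringing it back into the AFR pair, with something along the lines of the following (here /data/export stands in for wherever the brick directory is exported, and root ssh access between the servers is assumed):

    # run on the new Server3: copy the surviving replica's backend directory
    rsync -avH --numeric-ids root@server2:/data/export/ /data/export/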

Regards,
Danson Joseph


On Tue, 24 Apr 2007 01:16:58 +0530, "Krishna Srinivas" <krishna@xxxxxxxxxxxxx> wrote:
> Hi Danson,
> 
> Answer inline..
> 
> On 4/23/07, Danson Michael Joseph <danson.joseph@xxxxxxxxxxxxxxxxxx> wrote:
>> Hi,
>>
>> I recently compiled and tested Gluster 1.3 on Ubuntu 7.04 and all went
>> well.  Originally I intended to use DRBD in our small cluster of 2 machines
>> but couldn't get the module to compile.  The feature needed in DRBD is
>> really the self-recovery after failure.  So after choosing gluster, I'm
>> looking for answers to two operational questions:
>>
>> 1) If DRBD rolls over from slave to primary and then the primary goes
>> online again, the original primary will become slave and copy/resync with
>> the old slave, which is now primary.  In gluster, is this as simple as
>> using an rsync script to achieve the same?
> 
> In glusterfs, as of now, resync can be done using rsync. In the future we
> will have an automated, built-in facility in glusterfs to do this.
> 
>>
>> 2) Lustre uses metadata on two or more distributed MetaData Servers (MDS).
>> I presume this means that if a storage node fails and I go out and buy a
>> new machine and plug it in, the MDS will re-populate the new server with
>> what was on the old server, based on its MDS knowledge and the replicated
>> data on other servers.  If gluster has no MDS, how can replication take
>> place?  There seems to be no "knowledge" of the system such as an MDS
>> server has.
> 
> We do not need metadata information to resync. For example, if we are
> doing AFR on 2 nodes and we replace one of the nodes, the recovery module
> will get to know that one of them is new, so it will copy all the files
> and directories to this new server. We need to see which cases we need to
> handle; this feature is on our roadmap.
> 
> Regards
> Krishna
> 
>>
>> Regards,
>>
>> --
>> *********************************
>> Danson Michael Joseph
>> danson.joseph@xxxxxxxxxxxxxxxxxx
>> PO BOX 1768, BEDFORDVIEW, 2008, SOUTH AFRICA
>> +27 82 820 4261
>>
>>
>>
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel@xxxxxxxxxx
>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>>
--
*********************************
Danson Michael Joseph
danson.joseph@xxxxxxxxxxxxxxxxxx
PO BOX 1768, BEDFORDVIEW, 2008, SOUTH AFRICA
+27 82 820 4261