Re: Replacing failed node (2node replication)

> 
> With the commands I pasted above I had a perfectly fine running volume
> which was accessible all the time while re-adding the new server, and
> also during the healing period (I'm using this for an HA setup for a
> Django application, which writes a lot of custom files while working -
> while the volume was being healed I made sure that all the webapp
> traffic was hitting only the glu-tru node, the one which hadn't
> crashed).
> 
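For reference, you can watch which files are still pending while a heal
like that runs (a sketch, assuming a reasonably recent GlusterFS and a
volume named "myvol" - substitute your own volume name):

    # list the entries on each brick that still need healing
    gluster volume heal myvol info

    # count of entries pending heal, per brick
    gluster volume heal myvol statistics heal-count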

The volume stays accessible, but the files being healed are locked.
That's probably why your app stayed online: web apps are usually a huge
number of small-ish files, so locking them during a heal is pretty much
invisible (healing a 2 KB file is almost instant).
If you had huge files on this, without sharding, it would have been
different :)
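
If you do end up with big files on a volume (VM images and the like),
sharding is just a volume option; note it only applies to files created
after it is turned on ("myvol" assumed again, and the 64MB block size is
just an example):

    # split large files into shards so a heal locks one shard at a time
    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB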

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
