Hi Brent,

Did you see that problem again? What kind of setup were you using? I am
not sure which part of the code might have caused the problem; further
details about the setup would help.

Thanks
Krishna

On 5/8/07, Brent A Nelson <brent@xxxxxxxxxxxx> wrote:
I just had two nodes go down (not due to GlusterFS). The nodes were
mirrors of each other for multiple GlusterFS filesystems (all unify on
top of afr), so the GlusterFS clients were understandably unhappy (one
of the filesystems was 100% served by these two nodes; others were only
fractionally served by them). However, when the two server nodes were
brought back up, some of the client glusterfs processes recovered, while
others had to be kill -9'ed so the filesystems could be remounted (they
were blocking df and ls commands).

I don't know whether this is related to the bug below or not, but it
looks like client reconnect after failure isn't 100% reliable... This
was from a tla checkout from yesterday.

Thanks,

Brent

On Mon, 7 May 2007, Krishna Srinivas wrote:

> Hi Avati,
>
> There was a bug - when the 1st node went down, it would cause
> a problem. This bug might be the same, though the reporter has
> not given enough details to confirm. We can move the bug to the
> unreproducible or fixed state.
>
> Krishna
>
> On 5/6/07, Anand Avati <INVALID.NOREPLY@xxxxxxx> wrote:
>>
>> Update of bug #19614 (project gluster):
>>
>>         Severity:     3 - Normal => 5 - Blocker
>>      Assigned to:           None => krishnasrinivas
>>
>> _______________________________________________________
>>
>> Follow-up Comment #1:
>>
>> krishna,
>> can you confirm if this bug is still lurking?
>>
>> _______________________________________________________
>>
>> Reply to this item at:
>>
>>   <http://savannah.nongnu.org/bugs/?19614>
>>
>> _______________________________________________
>> Message sent via/by Savannah
>> http://savannah.nongnu.org/
>>
>>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
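For context, the "unify on top of afr" layout Brent describes would look
roughly like the client volume spec below. This is only a minimal sketch:
the volume names, hosts (server1/server2), and brick name are hypothetical,
and exact option names (including any unify namespace requirement in your
build) may differ between GlusterFS releases of that period.

  # one protocol/client volume per exported brick (hypothetical hosts/bricks)
  volume client1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1
    option remote-subvolume brick
  end-volume

  volume client2
    type protocol/client
    option transport-type tcp/client
    option remote-host server2
    option remote-subvolume brick
  end-volume

  # afr mirrors the two bricks on the two servers
  volume mirror0
    type cluster/afr
    subvolumes client1 client2
  end-volume

  # unify stitches one or more mirrored pairs into a single tree
  volume unify0
    type cluster/unify
    option scheduler rr
    subvolumes mirror0
  end-volume

In a layout like this, a filesystem whose only afr pair lives on the two
failed servers has no reachable subvolume at all, which is consistent with
the df and ls hangs Brent reports while waiting for the clients to
reconnect.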