Re: Re; Bug #21918

Well, I must admit I have the same problem; my initial testing on this was flawed and I jumped the gun a little. 

Xen's block-detach and block-attach "should" do this, but there is a problem which I'm "hoping" is specific to the "file:" driver. 
At the moment I'm working on a 2.6.21 kernel which should support AIO on gluster (?!), and my next target is to see whether the AIO driver performs better than "file:" with regard to Xen detach and attach. 
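(Concretely, that just means swapping the disk line in the DomU config; untested, and the image path below is only an example:) 

# Xen DomU config fragment (the config file is Python syntax; the image
# path is only an example)
# current setup, using the loopback-based "file:" driver:
disk = [ 'file:/mnt/glusterfs/domu1.img,xvda,w' ]
# what I want to compare it against, the blktap AIO driver:
# disk = [ 'tap:aio:/mnt/glusterfs/domu1.img,xvda,w' ]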

If this does work, it should be possible to script drive removal / re-add from the Dom0 after you restart glusterfsd .... 

(?!) 
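For the record, the sort of Dom0 script I have in mind is roughly the following (an untested sketch; the domain name, device, image path and init script are all placeholders, and it assumes the xm toolstack): 

#!/usr/bin/env python
# Untested sketch: detach the gluster-backed disk, restart glusterfsd,
# then re-attach it.  The domain, device, image path and init script
# name are placeholders for whatever your setup actually uses.
import subprocess

DOMAIN = "domu1"                      # placeholder DomU name
DEVICE = "xvdb"                       # placeholder frontend device
IMAGE  = "/mnt/glusterfs/domu1.img"   # placeholder file image held on gluster

def run(*cmd):
    print(" ".join(cmd))
    subprocess.check_call(cmd)

run("xm", "block-detach", DOMAIN, DEVICE)
run("/etc/init.d/glusterfsd", "restart")   # or however glusterfsd gets restarted
run("xm", "block-attach", DOMAIN, "file:" + IMAGE, DEVICE, "w")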

----- Original Message ----- 
From: "Jonathan Galentine" <j.galentine@xxxxxxxxx> 
To: "Gareth Bult" <gareth@xxxxxxxxxxxxx>, gluster-devel@xxxxxxxxxx 
Sent: 31 January 2008 13:22:01 o'clock (GMT) Europe/London 
Subject: Re: Re; Bug #21918 

I tried this; how did you handle the case when a node fails? You can't remount the glusterfs client partition when the node comes back up, because it is marked in use/busy, and umount -f does not seem to work (it has a file 'open', but you receive a transport error when trying to access the file or the mount). Does the gluster client have a remount option? 
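Roughly what I end up trying when the failed node returns is the following (paths are illustrative, and the lazy unmount is just an experiment on my side, not a known fix): 

#!/usr/bin/env python
# Illustration only: mountpoint and volume spec paths are examples, and
# the lazy unmount (-l) is an experiment rather than a known fix.
import subprocess

MOUNTPOINT = "/mnt/glusterfs"
SPEC = "/etc/glusterfs/glusterfs-client.vol"   # example client volume spec

# "umount -f" fails here (busy / transport error), so fall back to a lazy unmount
if subprocess.call(["umount", "-f", MOUNTPOINT]) != 0:
    subprocess.check_call(["umount", "-l", MOUNTPOINT])

# then try to bring the client back up on the same mountpoint
# (or however you normally start the glusterfs client)
subprocess.check_call(["glusterfs", "-f", SPEC, MOUNTPOINT])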


On Jan 29, 2008 4:27 AM, Gareth Bult <gareth@xxxxxxxxxxxxx> wrote: 


Hi, 

Many thanks .. FYI, I found a way around the self-heal issue for Xen users, which also leads to a huge performance boost. 

I'm running two gluster filesystems (no self-heal on either), then running software RAID "inside" the DomU across file images, one on each system. 
Read throughput on the DomU is 95%+ of the speed of the local disk. 

(and self-heal is performed by the software RAID rather than by Gluster) 
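For anyone wanting to try the same thing, the DomU side is nothing more exotic than a standard md mirror across the two gluster-backed disks; something along these lines, with the device names and mountpoint as examples only: 

#!/usr/bin/env python
# Sketch of the DomU-side setup: a plain md RAID1 mirror across the two
# gluster-backed virtual disks.  /dev/xvdb, /dev/xvdc and /data are
# examples; use whatever the two file images appear as in your DomU.
import subprocess

DISKS = ["/dev/xvdb", "/dev/xvdc"]   # one file image per gluster filesystem

subprocess.check_call(["mdadm", "--create", "/dev/md0",
                       "--level=1", "--raid-devices=2"] + DISKS)
subprocess.check_call(["mkfs.ext3", "/dev/md0"])
subprocess.check_call(["mkdir", "-p", "/data"])
subprocess.check_call(["mount", "/dev/md0", "/data"])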

Regards, 
Gareth. 





----- Original Message ----- 
From: "Anand Avati" <avati@xxxxxxxxxxxxx> 
To: "Gareth Bult" <gareth@xxxxxxxxxxxxx> 
Cc: "gluster-devel Gluster Devel List" <gluster-devel@xxxxxxxxxx> 
Sent: 29 January 2008 03:02:00 o'clock (GMT) Europe/London 
Subject: Re: Re; Bug #21918 

Gareth, 
this will be addressed in the next AFR commit; self-heal is being worked on in AFR. We are even working on self-healing files with holes, but that might be a week further out. 

avati 


2008/1/28, Gareth Bult <gareth@xxxxxxxxxxxxx>: 

Hi, 

There's one issue that's stopping me from going to production at the moment, which is documented in #21918. 

Any news on this? It does seem fairly "critical" to Gluster being usable ... 

tia 
Gareth. 



-- 
If I traveled to the end of the rainbow 
As Dame Fortune did intend, 
Murphy would be there to tell me 
The pot's at the other end. 
_______________________________________________ 
Gluster-devel mailing list 
Gluster-devel@xxxxxxxxxx 
http://lists.nongnu.org/mailman/listinfo/gluster-devel 


