AFR not working

I have just tried a re-sync of many GB using ls -lR, and the ls -lR
hangs while the sync happens.
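(For reference, this is the kind of crawl I am running; /mnt/glusterfs
stands in for the actual mount point. I am assuming a find-based crawl,
which stats each file individually, triggers self-heal the same way
ls -lR does:)

  # recursive listing of the whole mount; each stat() can trigger self-heal
  ls -lR /mnt/glusterfs > /dev/null

  # equivalent crawl with find, one stat per file
  find /mnt/glusterfs -noleaf -print0 | xargs --null stat > /dev/null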
In the release notes for 3.0.0 it says:

  In GlusterFS 2.0.x, if self-healing is required, e.g. when a failed
  Replicate server recovered, the first I/O command executed after
  recovery, such as a 'ls -l', that triggered the self-healing would
  block until the self-heal operation completed. With v3.0,
  self-healing is now done in the background. Commands that can
  trigger self-healing will thus appear more responsive, resulting in
  a better user experience. Replicated VM images also benefit from
  this because they can continue to run even while the image is
  self-healed on a failed server.

Do I need to configure anything to get this behaviour?
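For what it's worth, the relevant part of my client volfile now looks
roughly like the sketch below. Hostnames, volume names and the brick
name are placeholders from my setup, and the three self-heal options
are just the replicate defaults spelled out; I could not find any
option that specifically switches the background behaviour on or off.

  volume remote1
    type protocol/client
    option transport-type tcp
    option remote-host server1          # placeholder hostname
    option remote-subvolume brick1
  end-volume

  volume remote2
    type protocol/client
    option transport-type tcp
    option remote-host server2          # placeholder hostname
    option remote-subvolume brick1
  end-volume

  volume mirror-0
    type cluster/replicate
    option data-self-heal on            # documented defaults,
    option metadata-self-heal on        # spelled out for clarity
    option entry-self-heal on
    subvolumes remote1 remote2
  end-volume

  # stat-prefetch section commented out, as per the discussion below
  # volume statprefetch
  #   type performance/stat-prefetch
  #   subvolumes mirror-0
  # end-volume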


Vikas Gorur wrote:
> Adrian Revill wrote:
>> Thanks Vikas,
>>
>> You were right, I commented out the stat-prefetch section and the
>> sync now works. Perhaps glusterfs-volgen should not put it in for
>> raid 1 volumes.
> We'll review this and see if we can tweak stat-prefetch to allow
> replicate syncs to happen sooner.
>> So it looks like, to make a fully redundant system, we need to poll
>> the client mount points with ls -lR, at least at server startup.
>> For scalability, each server should run the client mount itself and
>> poll itself (see the startup sketch below the quote).
>> Surely a feature to trigger a full re-sync should be part of the
>> server daemon; perhaps the first client to connect gets a 'please
>> sync me' message.
> That's something to think about. I'm not sure if such a re-sync
> is best done by GlusterFS or by an external tool.
>> I found a little gotcha with file deletion:
>>
>> With 2 servers running, create a file and shut down server2, then
>> delete the file and shut down server1. Start server2 and the file
>> reappears, which is expected; start server1 and the file remains and
>> is synced back to server1 (reproduction sketch below the quote).
>> This means that after a server failure, the order in which the
>> servers are restarted is very important if the un-deletion of files
>> is an issue.
> I'll look into this. GlusterFS in general takes a conservative
> approach and, when in doubt, prefers retaining data to deleting it.
>> I also found that lsattr does not work with GlusterFS:
>>
>> lsattr /mnt/export/
>> lsattr: Inappropriate ioctl for device While reading flags on /mnt/export/t2
>
> lsattr is a tool specific to Ext2/Ext3 filesystems and will not work on
> other filesystems (be they fuse-based or disk-based like ReiserFS or 
> XFS).
>
> Vikas
>
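On the startup-polling idea quoted above, what I have in mind is
roughly this on each server (the volfile path and mount point are from
my setup and may differ on yours):

  # in an init script, after glusterfsd has started:
  # mount the client view of the volume locally...
  glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs

  # ...then crawl it in the background to kick off self-heal
  find /mnt/glusterfs -noleaf -print0 | xargs --null stat > /dev/null &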
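And to make the deletion gotcha reproducible, this is the sequence I
used. The file name, mount point and volfile paths are from my test
setup, and killall is just the crude way I stop the daemons here.

  # both servers up: create a file via the client mount
  touch /mnt/glusterfs/t1

  # on server2: stop the server daemon
  killall glusterfsd

  # on the client: delete the file while only server1 is up
  rm /mnt/glusterfs/t1

  # on server1: stop its daemon as well, then bring server2 back FIRST

  # on server2: restart
  glusterfsd -f /etc/glusterfs/glusterfsd.vol
  # -> the file reappears on the client, which is expected

  # on server1: restart
  glusterfsd -f /etc/glusterfs/glusterfsd.vol
  # -> t1 remains and is synced back to server1 (the delete is lost)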


