Two bricks in one replicated volume, different sizes, what to do?


That seemed to work, but they are still off by about 70 megs. Is this normal?

Server 1:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0             412847280   7592344 384283520   2% /ebs_raid

Server 2:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0             412847280   7520388 384355476   2% /ebs_raid
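For reference, the gap between the two bricks in the df output above works out to roughly 70 MB (the Used columns are in 1K-blocks). A quick check:

```shell
#!/bin/sh
# Sanity check on the df figures above: difference in Used (1K-blocks)
# between the two bricks, converted to megabytes.
used_server1=7592344
used_server2=7520388
diff_kb=$(( used_server1 - used_server2 ))
echo "$(( diff_kb / 1024 )) MB"   # prints: 70 MB
```

Note that a small residual difference in df output between replicas can come from filesystem metadata and allocation differences on the backend, not necessarily from missing files.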


--
Pierre-Luc Brunet


On 2010-12-21, at 11:26 AM, Vijay Bellur wrote:

> On Tuesday 21 December 2010 09:27 PM, Pierre-Luc Brunet wrote:
>> Heya,
>> 
>> I have one replicated gluster volume made of two bricks. Both servers are live and connected right now, but to my surprise they are reporting different Used/Available statistics. I don't seem to be running into any issues on the client end, but I'd like to know the safest way to sync all the files so that both bricks have the same content, preferably without causing downtime on my file servers.
>> 
>> I tried doing "ls -lahR * .*" on one of the clients, but it didn't get everything copied over.
>> 
>>   
> 
> Can you please trigger self-heal by following instructions at:
> 
> http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate
> 
> Regards,
> Vijay
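For the archives: the trigger described on that documentation page amounts to stat'ing every file through a client mount, which makes the replicate translator examine (and heal) each entry, including dotfiles that a plain "ls *" would miss. A minimal sketch, with the mount point as a placeholder to be adjusted to your setup:

```shell
#!/bin/sh
# Walk every entry under a client mount and stat it, so the replicate
# translator checks each one and heals any out-of-sync copies.
# /mnt/gluster below is a placeholder; use your actual client mount point.
trigger_selfheal() {
    find "$1" -noleaf -print0 | xargs --null stat >/dev/null
}

# Example usage (run on a client, not directly on a brick):
# trigger_selfheal /mnt/gluster
```

Running this against the brick directories themselves would bypass GlusterFS, so it must be done on a client mount.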




