Re: Sparse Files and Heal

Thanks a lot, Pranith.  Could you CC me on the bug as well, because I am
very interested in its status.
I have been hitting the same issue since the middle of this year
(http://gluster.org/pipermail/gluster-users.old/2014-March/016322.html), so
I hope this can be fixed.



Thanks,
Adrian

-----Original Message-----
From: Pranith Kumar Karampuri [mailto:pkarampu@xxxxxxxxxx] 
Sent: Saturday, November 22, 2014 11:49 PM
To: Adrian Kan; 'Lindsay Mathieson'; gluster-users@xxxxxxxxxxx
Subject: Re: Sparse Files and Heal


On 11/22/2014 01:17 PM, Adrian Kan wrote:
> Pranith,
>
> I'm wondering whether this would be a better method to take down a brick
> for maintenance purposes and reheal:
>
> 1) Detach the brick from the cluster - gluster volume remove-brick
> datastore1 replica 1 brick1:/mnt/datastore1
> 2) Take down brick1
> 3) Do whatever maintenance is needed on brick1
> 4) Turn brick1 back on
> 5) I'm pretty sure glusterfs would not allow brick1 to be re-attached
> to the cluster, because extended attributes are set on the brick.  The
> only way is to remove everything in it (see the sketch after this
> list).
> 6) Re-attach brick1 after emptying the directory on brick1 - gluster
> volume add-brick datastore1 replica 2 brick1:/mnt/datastore1
> 7) Initiate full heal
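>
> (A rough sketch of what step 5 refers to, assuming the usual marker
> xattrs; the exact names may differ by version:
>
>     # inspect the volume-membership xattrs on the old brick root
>     getfattr -d -m . -e hex /mnt/datastore1
>     # clearing them, plus the internal index, lets the path be reused
>     setfattr -x trusted.glusterfs.volume-id /mnt/datastore1
>     setfattr -x trusted.gfid /mnt/datastore1
>     rm -rf /mnt/datastore1/.glusterfs
>
> These live on the brick directory itself, so emptying its contents
> alone does not clear them.)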
The best method is just 2), 3), 4). The only bug preventing that from
working right now is 'full' heal filling in the sparse regions of the
file; it will be fixed shortly, and we have already identified the fix.
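
For concreteness, a rough sketch of that flow, using the names from this
thread (<brick-pid> is a placeholder; the actual PID comes from the
'gluster volume status' output):

    # 2) take brick1 down by stopping its brick process
    gluster volume status datastore1        # note the PID of brick1
    kill <brick-pid>
    # 3) do the maintenance on brick1
    # 4) bring the brick back and let self-heal catch up
    gluster volume start datastore1 force
    gluster volume heal datastore1
    gluster volume heal datastore1 info     # watch pending heals drain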

Pranith
>
>
> Thanks,
> Adrian
>
> -----Original Message-----
> From: gluster-users-bounces@xxxxxxxxxxx 
> [mailto:gluster-users-bounces@xxxxxxxxxxx] On Behalf Of Lindsay 
> Mathieson
> Sent: Saturday, November 22, 2014 3:35 PM
> To: gluster-users@xxxxxxxxxxx
> Subject: Re: Sparse Files and Heal
>
> On Sat, 22 Nov 2014 12:54:48 PM you wrote:
>> Lindsay,
>>        You said you restored it from some backup. How did you do that?
>> If you copy the VM image from backup directly to the location on the
>> brick where you deleted it, the VM hypervisor still won't write to the
>> newly copied file. Basically, we need to make the mount close the old
>> fd that was opened on the VM image (now deleted on one of the bricks).
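>> To check whether the hypervisor is still holding that stale fd,
>> something like this on the brick node shows it; qemu-kvm is just an
>> example process name, and /proc marks unlinked files as "(deleted)":
>>
>>     # fds of the hypervisor that point at deleted files
>>     ls -l /proc/$(pidof qemu-kvm)/fd | grep '(deleted)'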
>
>
> I stopped the VM, and the restore creates an image with a new name, so
> it should be fine.
>
> thanks,
> --
> Lindsay
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users



