Re: [Gluster-devel] failed heal

On Thu, Feb 05, 2015 at 11:21:58AM +0530, Pranith Kumar Karampuri wrote:
> 
> On 02/04/2015 11:52 PM, David F. Robinson wrote:
> >I don't recall if that was before or after my upgrade.
> >I'll forward you an email thread for the current heal issues which are
> >after the 3.6.2 upgrade...
> This was executed after the upgrade on just one machine. Entry locks in
> 3.6.2 are not compatible with versions <= 3.5.3 or with 3.6.1; that is the
> reason. From 3.5.4 onwards, and for releases >= 3.6.2, it should work fine.

Oh, I was not aware of this requirement. Does it mean we should no longer
mix deployments with these versions (what about 3.4?)? 3.5.4 has not been
released yet, so will anyone with a mixed 3.5/3.6.2 environment hit these
issues? Is this only for the self-heal daemon, or are the triggered/stat
self-heal procedures affected too?

It should be noted *very* clearly in the release notes, and I think an
announcement (email+blog) as a warning/reminder would be good. Could you
get some details and advice written down, please?
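
In the meantime, a minimal way to see what each node is actually running
(a sketch only, using standard commands and assuming nothing
release-specific) is to compare the version reported on every server in
the trusted pool:

    # run on each server (and ideally each client), then compare by hand
    gluster --version | head -n1
    glusterfs --version | head -n1

    # list the peers that still need to be checked
    gluster peer status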

Thanks,
Niels


> 
> Pranith
> >David
> >------ Original Message ------
> >From: "Pranith Kumar Karampuri" <pkarampu@xxxxxxxxxx>
> >To: "David F. Robinson" <david.robinson@xxxxxxxxxxxxx>;
> >"gluster-users@xxxxxxxxxxx" <gluster-users@xxxxxxxxxxx>;
> >"Gluster Devel" <gluster-devel@xxxxxxxxxxx>
> >Sent: 2/4/2015 2:33:20 AM
> >Subject: Re: [Gluster-devel] failed heal
> >>
> >>On 02/02/2015 03:34 AM, David F. Robinson wrote:
> >>>I have several files that gluster says it cannot heal. I deleted the
> >>>files from all of the bricks
> >>>(/data/brick0*/hpc_shared/motorsports/gmics/Raven/p3/*) and ran a full
> >>>heal using 'gluster volume heal homegfs full'.  Even after the full
> >>>heal, the entries below still show up.
> >>>How do I clear these?
> >>3.6.1 had an issue where files undergoing I/O would also be shown in the
> >>output of 'gluster volume heal <volname> info'; we addressed that in
> >>3.6.2. Is this output from 3.6.1 by any chance?
> >>
> >>Pranith
> >>>[root@gfs01a ~]# gluster volume heal homegfs info
> >>>Gathering list of entries to be healed on volume homegfs has been
> >>>successful
> >>>Brick gfsib01a.corvidtec.com:/data/brick01a/homegfs
> >>>Number of entries: 10
> >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies
> >>><gfid:a6fc9011-74ad-4128-a232-4ccd41215ac8>
> >>><gfid:bc17fa79-c1fd-483d-82b1-2c0d3564ddc5>
> >>><gfid:ec804b5c-8bfc-4e7b-91e3-aded7952e609>
> >>><gfid:ba62e340-4fad-477c-b450-704133577cbb>
> >>><gfid:4843aa40-8361-4a97-88d5-d37fc28e04c0>
> >>><gfid:c90a8f1c-c49e-4476-8a50-2bfb0a89323c>
> >>><gfid:090042df-855a-4f5d-8929-c58feec10e33>
> >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/.Convrg.swp
> >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke
> >>>Brick gfsib01b.corvidtec.com:/data/brick01b/homegfs
> >>>Number of entries: 2
> >>><gfid:f96b4ddf-8a75-4abb-a640-15dbe41fdafa>
> >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke
> >>>Brick gfsib01a.corvidtec.com:/data/brick02a/homegfs
> >>>Number of entries: 7
> >>><gfid:5d08fe1d-17b3-4a76-ab43-c708e346162f>
> >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES/.tmpcheck
> >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/PICTURES
> >>>/hpc_shared/motorsports/gmics/Raven/p3/70_rke/Movies
> >>><gfid:427d3738-3a41-4e51-ba2b-f0ba7254d013>
> >>><gfid:8ad88a4d-8d5e-408f-a1de-36116cf6d5c1>
> >>><gfid:0e034160-cd50-4108-956d-e45858f27feb>
> >>>Brick gfsib01b.corvidtec.com:/data/brick02b/homegfs
> >>>Number of entries: 0
> >>>Brick gfsib02a.corvidtec.com:/data/brick01a/homegfs
> >>>Number of entries: 0
> >>>Brick gfsib02b.corvidtec.com:/data/brick01b/homegfs
> >>>Number of entries: 0
> >>>Brick gfsib02a.corvidtec.com:/data/brick02a/homegfs
> >>>Number of entries: 0
> >>>Brick gfsib02b.corvidtec.com:/data/brick02b/homegfs
> >>>Number of entries: 0
> >>>===============================
> >>>David F. Robinson, Ph.D.
> >>>President - Corvid Technologies
> >>>704.799.6944 x101 [office]
> >>>704.252.1310 [cell]
> >>>704.799.7974 [fax]
> >>>David.Robinson@xxxxxxxxxxxxx
> >>>http://www.corvidtechnologies.com
> >>>
> >>>
> >>
> 
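
Coming back to the heal-info question quoted above: once every server and
client is on 3.6.2, it should be possible to re-check whether the remaining
entries are real. A rough sequence (a sketch only; 'homegfs' is the volume
name from the quoted output, and availability of the statistics and
split-brain sub-commands depends on the installed release):

    # list the entries still pending heal
    gluster volume heal homegfs info

    # per-brick count of pending entries
    gluster volume heal homegfs statistics heal-count

    # entries that are genuinely split-brain rather than in-flight I/O
    gluster volume heal homegfs info split-brain

    # re-trigger a full heal if entries persist after the counts settle
    gluster volume heal homegfs full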



_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
