VNA:

getfattr -d -m . -e hex /tank/vmdata/datastore4/.shard
getfattr: Removing leading '/' from absolute path names
# file: tank/vmdata/datastore4/.shard
trusted.afr.datastore4-client-0=0x000000000000000000000000
trusted.afr.dirty=0x000000000000000000000031
trusted.gfid=0xbe318638e8a04c6d977d7a937aa84806
trusted.glusterfs.dht=0x000000010000000000000000ffffffff

VNB:

getfattr -d -m . -e hex /tank/vmdata/datastore4/.shard
getfattr: Removing leading '/' from absolute path names
# file: tank/vmdata/datastore4/.shard
trusted.afr.datastore4-client-0=0x000000000000000000000000
trusted.afr.dirty=0x000000000000000000000000
trusted.gfid=0xbe318638e8a04c6d977d7a937aa84806
trusted.glusterfs.dht=0x000000010000000000000000ffffffff

VNG:

getfattr -d -m . -e hex /tank/vmdata/datastore4/.shard
getfattr: Removing leading '/' from absolute path names
# file: tank/vmdata/datastore4/.shard
trusted.afr.datastore4-client-0=0x000000000000000000000000
trusted.afr.dirty=0x000000000000000000000031
trusted.gfid=0xbe318638e8a04c6d977d7a937aa84806
trusted.glusterfs.dht=0x000000010000000000000000ffffffff

Also, an updated heal info. I'd restarted the VMs, so there is ongoing I/O,
which always results in transient shard listings, but the .shard entry was
still there:

gluster v heal datastore4 info
Brick vnb.proxmox.softlog:/tank/vmdata/datastore4
/.shard/6559b07f-51f3-487d-a710-6acee4ec452a.2
/.shard/cfdf3ba9-1ae7-492a-a0ad-d6c529e9fb30.2131
/.shard/6633b047-bb28-471e-890a-94dd0d3b8e85.1405
/.shard/cfdf3ba9-1ae7-492a-a0ad-d6c529e9fb30.784
/.shard/2bcfb707-74a4-4e33-895c-3721d137fe5a.47
/.shard/cfdf3ba9-1ae7-492a-a0ad-d6c529e9fb30.2060
/.shard/2bcfb707-74a4-4e33-895c-3721d137fe5a.63
/.shard/cfdf3ba9-1ae7-492a-a0ad-d6c529e9fb30.2059
/.shard/2bcfb707-74a4-4e33-895c-3721d137fe5a.48
/.shard/6633b047-bb28-471e-890a-94dd0d3b8e85.1096
/.shard/6cd24745-055a-49fb-8aab-b9ac0d6a0285.47
/.shard/2bcfb707-74a4-4e33-895c-3721d137fe5a.399
Status: Connected
Number of entries: 12

Brick vng.proxmox.softlog:/tank/vmdata/datastore4
/.shard/cfdf3ba9-1ae7-492a-a0ad-d6c529e9fb30.2247
/.shard/2bcfb707-74a4-4e33-895c-3721d137fe5a.49
/.shard/6cd24745-055a-49fb-8aab-b9ac0d6a0285.55
/.shard/cfdf3ba9-1ae7-492a-a0ad-d6c529e9fb30.2076
/.shard/007c8fcb-49ba-4e7e-b744-4e3768ac6bf6.569
/.shard/007c8fcb-49ba-4e7e-b744-4e3768ac6bf6.48
/.shard/6633b047-bb28-471e-890a-94dd0d3b8e85.1096
/.shard/007c8fcb-49ba-4e7e-b744-4e3768ac6bf6.568
/.shard/cfdf3ba9-1ae7-492a-a0ad-d6c529e9fb30.997
/.shard - Possibly undergoing heal
/.shard/b2996a69-f629-4425-9098-e62c25d9f033.47
/.shard/007c8fcb-49ba-4e7e-b744-4e3768ac6bf6.47
/.shard/2bcfb707-74a4-4e33-895c-3721d137fe5a.47
/.shard/007c8fcb-49ba-4e7e-b744-4e3768ac6bf6.1
/.shard/cfdf3ba9-1ae7-492a-a0ad-d6c529e9fb30.784
Status: Connected
Number of entries: 15

Brick vna.proxmox.softlog:/tank/vmdata/datastore4
/.shard/cfdf3ba9-1ae7-492a-a0ad-d6c529e9fb30.2133
/.shard/cfdf3ba9-1ae7-492a-a0ad-d6c529e9fb30.1681
/.shard/6633b047-bb28-471e-890a-94dd0d3b8e85.1444
/.shard/6633b047-bb28-471e-890a-94dd0d3b8e85.968
/.shard/2bcfb707-74a4-4e33-895c-3721d137fe5a.48
/.shard/6633b047-bb28-471e-890a-94dd0d3b8e85.1409
/.shard - Possibly undergoing heal
/.shard/007c8fcb-49ba-4e7e-b744-4e3768ac6bf6.261
/.shard/6cd24745-055a-49fb-8aab-b9ac0d6a0285.50
/.shard/007c8fcb-49ba-4e7e-b744-4e3768ac6bf6.2
Status: Connected
Number of entries: 10

thanks,
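For reference, one way to gather the same .shard xattrs from all three bricks
in a single pass -- a minimal sketch, assuming the brick path above is the same
on every node and that passwordless SSH to the brick hosts (names taken from
the heal output) is available:

#!/bin/bash
# Dump the .shard xattrs from each brick host in turn, so the three
# trusted.afr.* values can be compared side by side.
for host in vna.proxmox.softlog vnb.proxmox.softlog vng.proxmox.softlog; do
    echo "== ${host} =="
    ssh "${host}" getfattr -d -m . -e hex /tank/vmdata/datastore4/.shard
done

If the usual AFR changelog layout applies here (three big-endian 32-bit
counters for data, metadata and entry operations), the
trusted.afr.dirty=0x...0031 values on VNA and VNG would correspond to 49
pending entry operations on the .shard directory, while VNB's all-zero value
would be clean.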
On 24 June 2016 at 18:43, Krutika Dhananjay <kdhananj@xxxxxxxxxx> wrote:
> Could you share the output of
> getfattr -d -m . -e hex <path-to-.shard-from-the-brick>
>
> from all of the bricks associated with datastore4?
>
> -Krutika
>
> On Fri, Jun 24, 2016 at 2:04 PM, Lindsay Mathieson
> <lindsay.mathieson@xxxxxxxxx> wrote:
>>
>> What does this mean?
>>
>> gluster v heal datastore4 info
>> Brick vnb.proxmox.softlog:/tank/vmdata/datastore4
>> Status: Connected
>> Number of entries: 0
>>
>> Brick vng.proxmox.softlog:/tank/vmdata/datastore4
>> /.shard - Possibly undergoing heal
>>
>> Status: Connected
>> Number of entries: 1
>>
>> Brick vna.proxmox.softlog:/tank/vmdata/datastore4
>> /.shard - Possibly undergoing heal
>>
>> Status: Connected
>> Number of entries: 1
>>
>> All activity on the cluster has been shut down, no I/O, but its been
>> sitting like this for a few minutes.
>>
>> Gluster 3.7.11
>>
>> --
>> Lindsay
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@xxxxxxxxxxx
>> http://www.gluster.org/mailman/listinfo/gluster-users
>

--
Lindsay
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users