Re: 3.8.2 : Node not healing

In the past half hour it's started to heal. Down to 1639 shards now.

Quick question - would running "gluster v heal datastore4 statistics
heal-count" on a 5-second loop block healing?

To answer my own question - I don't think so, as it appears to be
healing quite quickly now.
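
For reference, this is the sort of loop I mean - heal-count is just a
status query, so my assumption is that polling it is read-only and
shouldn't interfere with the self-heal daemon:

    # Poll the pending heal count every 5 seconds (volume name from
    # this thread; the interval is arbitrary).
    while true; do
        gluster volume heal datastore4 statistics heal-count
        sleep 5
    done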


On 15 August 2016 at 17:17, Krutika Dhananjay <kdhananj@xxxxxxxxxx> wrote:
> Could you please attach the brick logs and glustershd logs?

Will get it together shortly
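
For anyone following along, a minimal way to gather those (assuming the
default log locations, which may differ per distro):

    # glustershd and brick logs live under /var/log/glusterfs by default.
    tar czf gluster-logs.tar.gz \
        /var/log/glusterfs/glustershd.log \
        /var/log/glusterfs/bricks/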

> Also share the volume configuration please (`gluster volume info`).


Volume Name: datastore4
Type: Replicate
Volume ID: 0ba131ef-311d-4bb1-be46-596e83b2f6ce
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vnb.proxmox.softlog:/tank/vmdata/datastore4
Brick2: vng.proxmox.softlog:/tank/vmdata/datastore4
Brick3: vna.proxmox.softlog:/tank/vmdata/datastore4
Options Reconfigured:
cluster.locking-scheme: granular
cluster.granular-entry-heal: on
cluster.background-self-heal-count: 16
features.shard-block-size: 64MB
network.remote-dio: enable
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.stat-prefetch: on
performance.strict-write-ordering: off
nfs.enable-ino32: off
nfs.addr-namelookup: off
nfs.disable: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
features.shard: on
cluster.data-self-heal: on
cluster.self-heal-window-size: 1024
performance.readdir-ahead: on
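
(For reference, the reconfigured options above are set per-volume with
"gluster volume set", e.g.:)

    gluster volume set datastore4 cluster.self-heal-window-size 1024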





-- 
Lindsay
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users


