Gluster errors and configuration status

Dear Gluster team,

 

Since January 2018 I have been running GlusterFS with 4 nodes.

The storage is attached to an oVirt system and has been running happily so far.

 

I have three volumes:

gv0_he – triple-replicated volume for the oVirt Self-Hosted Engine (it’s a requirement)

gv1_vmpool – distributed volume across all four nodes for guest VMs

gv2_vmpool – distributed-replicated volume across all four nodes (this one is in use and is intended to replace gv1_vmpool) – created 4 weeks ago
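
For completeness, gv2_vmpool was created roughly as follows (a sketch reconstructed from the volume info further below; the exact command I ran four weeks ago may have differed slightly):

[root@aws-gfs-01 ~]# gluster volume create gv2_vmpool replica 2 \
    aws-gfs-01.awesome.lan:/gluster/brick1/gv2 \
    aws-gfs-02.awesome.lan:/gluster/brick2/gv2 \
    aws-gfs-03.awesome.lan:/gluster/brick3/gv2 \
    aws-gfs-04.awesome.lan:/gluster/brick4/gv2
[root@aws-gfs-01 ~]# gluster volume start gv2_vmpool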

 

Questions:

 

  1. What is the best (recommended) way to monitor the volumes and their images, both for performance and for issues? For reference, the checks I currently run by hand are listed at the end of this mail.
  2. Among the log files I see the glusterd log and the per-brick/volume logs; so far it seems the main focus should be on the brick/volume logs. Is that correct?
  3. Below are some of the errors I saw; I do not yet know enough to judge how critical they are or how to solve them:

 

[2018-10-23 10:49:26.747985] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2_vmpool-posix: link /gluster/brick2/gv2/.shard/766dbebd-336e-4925-89c6-5a429fa9607c.15 -> /gluster/brick2/gv2/.glusterfs/22/e4/22e4eff4-2c0c-4f99-9672-052fbb1f431e failed  [File exists]
[2018-10-23 10:49:26.748020] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2_vmpool-posix: setting gfid on /gluster/brick2/gv2/.shard/766dbebd-336e-4925-89c6-5a429fa9607c.15 failed
[2018-10-23 10:49:26.747989] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2_vmpool-posix: link /gluster/brick2/gv2/.shard/766dbebd-336e-4925-89c6-5a429fa9607c.15 -> /gluster/brick2/gv2/.glusterfs/22/e4/22e4eff4-2c0c-4f99-9672-052fbb1f431e failed  [File exists]
[2018-10-23 10:50:48.075821] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2_vmpool-posix: link /gluster/brick2/gv2/.shard/52185a97-ae8d-4925-8be6-b1afa90b5116.5 -> /gluster/brick2/gv2/.glusterfs/d7/fb/d7fb430a-e6c6-4bbb-b3b9-5f0691ad68ba failed  [File exists]
[2018-10-23 10:50:48.075866] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2_vmpool-posix: setting gfid on /gluster/brick2/gv2/.shard/52185a97-ae8d-4925-8be6-b1afa90b5116.5 failed
[2018-10-23 10:51:00.885479] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2_vmpool-posix: link /gluster/brick2/gv2/.shard/52185a97-ae8d-4925-8be6-b1afa90b5116.12 -> /gluster/brick2/gv2/.glusterfs/91/ed/91ede536-e9e7-4371-8c4a-08b41f9a5e15 failed  [File exists]
[2018-10-23 10:51:00.885491] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2_vmpool-posix: setting gfid on /gluster/brick2/gv2/.shard/52185a97-ae8d-4925-8be6-b1afa90b5116.12 failed
[2018-10-23 10:51:00.885480] W [MSGID: 113096] [posix-handle.c:770:posix_handle_hard] 0-gv2_vmpool-posix: link /gluster/brick2/gv2/.shard/52185a97-ae8d-4925-8be6-b1afa90b5116.12 -> /gluster/brick2/gv2/.glusterfs/91/ed/91ede536-e9e7-4371-8c4a-08b41f9a5e15 failed  [File exists]
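
For one of the shards from the log above I checked the brick directly, roughly like this (host and paths taken from the log and from the volume info below; I am not sure this is the right way to diagnose it):

# compare inode number and link count of the shard and its .glusterfs hard link
[root@aws-gfs-02 ~]# stat /gluster/brick2/gv2/.shard/766dbebd-336e-4925-89c6-5a429fa9607c.15
[root@aws-gfs-02 ~]# stat /gluster/brick2/gv2/.glusterfs/22/e4/22e4eff4-2c0c-4f99-9672-052fbb1f431e
# dump the extended attributes (including trusted.gfid) of the shard file
[root@aws-gfs-02 ~]# getfattr -d -m . -e hex /gluster/brick2/gv2/.shard/766dbebd-336e-4925-89c6-5a429fa9607c.15

My understanding is that the two paths should share the same inode and that trusted.gfid on the shard should match 22e4eff4-2c0c-4f99-9672-052fbb1f431e, but please correct me if that is not the right check.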

 

Gluster volume status:

Status of volume: gv0_he
Gluster process                                     TCP Port  RDMA Port  Online  Pid
-------------------------------------------------------------------------------------
Brick aws-gfs-01.awesome.lan:/gluster/brick1/gv0    49152     0          Y       20938
Brick aws-gfs-02.awesome.lan:/gluster/brick2/gv0    49152     0          Y       30787
Brick aws-gfs-03.awesome.lan:/gluster/brick3/gv0    49152     0          Y       24685
Self-heal Daemon on localhost                       N/A       N/A        Y       25808
Self-heal Daemon on aws-gfs-04.awesome.lan          N/A       N/A        Y       27130
Self-heal Daemon on aws-gfs-02.awesome.lan          N/A       N/A        Y       2672
Self-heal Daemon on aws-gfs-03.awesome.lan          N/A       N/A        Y       29368

Task Status of Volume gv0_he
-------------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: gv1_vmpool
Gluster process                                     TCP Port  RDMA Port  Online  Pid
-------------------------------------------------------------------------------------
Brick aws-gfs-01.awesome.lan:/gluster/brick1/gv1    49153     0          Y       2066
Brick aws-gfs-02.awesome.lan:/gluster/brick2/gv1    49153     0          Y       1933
Brick aws-gfs-03.awesome.lan:/gluster/brick3/gv1    49153     0          Y       2027
Brick aws-gfs-04.awesome.lan:/gluster/brick4/gv1    49152     0          Y       1870

Task Status of Volume gv1_vmpool
-------------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: gv2_vmpool
Gluster process                                     TCP Port  RDMA Port  Online  Pid
-------------------------------------------------------------------------------------
Brick aws-gfs-01.awesome.lan:/gluster/brick1/gv2    49154     0          Y       25787
Brick aws-gfs-02.awesome.lan:/gluster/brick2/gv2    49154     0          Y       2651
Brick aws-gfs-03.awesome.lan:/gluster/brick3/gv2    49154     0          Y       29345
Brick aws-gfs-04.awesome.lan:/gluster/brick4/gv2    49153     0          Y       27109
Self-heal Daemon on localhost                       N/A       N/A        Y       25808
Self-heal Daemon on aws-gfs-04.awesome.lan          N/A       N/A        Y       27130
Self-heal Daemon on aws-gfs-02.awesome.lan          N/A       N/A        Y       2672
Self-heal Daemon on aws-gfs-03.awesome.lan          N/A       N/A        Y       29368

Task Status of Volume gv2_vmpool
-------------------------------------------------------------------------------------
There are no active volume tasks

 

Gluster volume info:

[root@aws-gfs-01 ~]# gluster volume info

Volume Name: gv0_he
Type: Replicate
Volume ID: 04caec77-3595-48b0-9211-957bb8e9c47f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: aws-gfs-01.awesome.lan:/gluster/brick1/gv0
Brick2: aws-gfs-02.awesome.lan:/gluster/brick2/gv0
Brick3: aws-gfs-03.awesome.lan:/gluster/brick3/gv0
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
cluster.quorum-type: auto
network.ping-timeout: 10
auth.allow: *
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
server.allow-insecure: on

Volume Name: gv1_vmpool
Type: Distribute
Volume ID: 823428e1-41ef-4c91-80ed-1ca8cd41d9e9
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: aws-gfs-01.awesome.lan:/gluster/brick1/gv1
Brick2: aws-gfs-02.awesome.lan:/gluster/brick2/gv1
Brick3: aws-gfs-03.awesome.lan:/gluster/brick3/gv1
Brick4: aws-gfs-04.awesome.lan:/gluster/brick4/gv1
Options Reconfigured:
performance.io-thread-count: 32
nfs.disable: on
transport.address-family: inet
network.ping-timeout: 10
auth.allow: *
storage.owner-uid: 36
storage.owner-gid: 36
server.allow-insecure: on

Volume Name: gv2_vmpool
Type: Distributed-Replicate
Volume ID: d62f9b07-a16f-4d7e-ac88-d4c0e29921e8
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: aws-gfs-01.awesome.lan:/gluster/brick1/gv2
Brick2: aws-gfs-02.awesome.lan:/gluster/brick2/gv2
Brick3: aws-gfs-03.awesome.lan:/gluster/brick3/gv2
Brick4: aws-gfs-04.awesome.lan:/gluster/brick4/gv2
Options Reconfigured:
performance.cache-size: 128
performance.io-thread-count: 32
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
network.ping-timeout: 10
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

 

 

Can you please guide me or advise me on what to look for, and how?
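
For reference, the only checks I currently run by hand are the ones below (a minimal sketch; I do not know whether this is the recommended way to monitor, hence question 1):

# overall brick/port/pid health and per-brick disk usage
[root@aws-gfs-01 ~]# gluster volume status gv2_vmpool detail
# pending or failed heals on the replicated volumes
[root@aws-gfs-01 ~]# gluster volume heal gv2_vmpool info
# per-brick latency and FOP statistics while profiling is enabled
[root@aws-gfs-01 ~]# gluster volume profile gv2_vmpool start
[root@aws-gfs-01 ~]# gluster volume profile gv2_vmpool info
[root@aws-gfs-01 ~]# gluster volume profile gv2_vmpool stop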

 

Thank you.

 

 

— — —
Met vriendelijke groet / Kind regards,

Marko Vrgotic

Sr. System Engineer
m.vrgotic@xxxxxxxxxxxxxxx
tel. +31 (0)35 677 4131

ActiveVideo BV

Mediacentrum 3741

Joop van den Endeplein 1

1217 WJ Hilversum

 

 

 

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
