GlusterFS healing questions

Hi,

We ran a test on GlusterFS 3.12.1 with erasure-coded 8+2 volumes on 10 bricks (default config, tested with 100 GB, 200 GB, and 400 GB brick sizes, 10 Gbit NICs).
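
For completeness, the volumes were created along these lines (reconstructed from the gluster volume info output at the end of this mail, not the exact commands we ran):

$ gluster volume create test-ec-100g disperse 10 redundancy 2 \
>     dn-304:/mnt/test-ec-100/brick dn-305:/mnt/test-ec-100/brick \
>     dn-306:/mnt/test-ec-100/brick dn-307:/mnt/test-ec-100/brick \
>     dn-308:/mnt/test-ec-100/brick dn-309:/mnt/test-ec-100/brick \
>     dn-310:/mnt/test-ec-100/brick dn-311:/mnt/test-ec-2/brick \
>     dn-312:/mnt/test-ec-100/brick dn-313:/mnt/test-ec-100/brick
$ gluster volume start test-ec-100g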

1.
Tests show that healing takes about double the time for 200 GB vs 100 GB bricks, and a bit under double for 400 GB vs 200 GB. Is this expected behaviour? At that rate, a 6.4 TB brick would take roughly 377 hours to heal (see the extrapolation below).

100 GB brick heal: 18 hours (8+2)
200 GB brick heal: 37 hours (8+2) (2.06x the 100 GB time)
400 GB brick heal: 59 hours (8+2) (1.59x the 200 GB time)

Each 100 GB brick is filled by writing 80,000 x 10 MB files to the volume (2x that for the 200 GB bricks and 4x for the 400 GB bricks).
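
The ~377 hour figure is an extrapolation, assuming the observed ~1.59x growth per doubling of brick size keeps holding (which is exactly what we are unsure about). 400 GB to 6.4 TB is four doublings:

    59 h x 1.59^4 ≈ 59 h x 6.39 ≈ 377 h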

2.
Is there any way to show the progress of a heal? As of now we run gluster volume heal <volname> info, but this exits when a brick is done healing, and when we run heal info again the command continues showing GFIDs until the brick is done again. This gives quite a poor picture of the status of a heal.
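
As a stopgap we poll and sum the per-brick pending counts with something like the sketch below. This assumes the 3.12 heal info output format, where each brick section ends with a "Number of entries: N" line; it only counts what the current crawl has listed, so it is a rough indicator at best:

$ while sleep 60; do
>     printf '%s ' "$(date +%T)"
>     gluster volume heal test-ec-400 info \
>         | awk '/^Number of entries:/ { s += $4 } END { print s+0, "entries pending" }'
> done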

3.
What config tweaks are recommended for this kind of EC volume?
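
For example, would raising the self-heal daemon's parallelism along these lines be a sensible starting point? (Treat the option names and values as our assumption of what applies to disperse volumes, not as verified advice.)

$ gluster volume set test-ec-400 disperse.shd-max-threads 4
$ gluster volume set test-ec-400 disperse.shd-wait-qlength 2048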


$ gluster volume info
Volume Name: test-ec-100g
Type: Disperse
Volume ID: 0254281d-2f6e-4ac4-a773-2b8e0eb8ab27
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (8 + 2) = 10
Transport-type: tcp
Bricks:
Brick1: dn-304:/mnt/test-ec-100/brick
Brick2: dn-305:/mnt/test-ec-100/brick
Brick3: dn-306:/mnt/test-ec-100/brick
Brick4: dn-307:/mnt/test-ec-100/brick
Brick5: dn-308:/mnt/test-ec-100/brick
Brick6: dn-309:/mnt/test-ec-100/brick
Brick7: dn-310:/mnt/test-ec-100/brick
Brick8: dn-311:/mnt/test-ec-2/brick
Brick9: dn-312:/mnt/test-ec-100/brick
Brick10: dn-313:/mnt/test-ec-100/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
 
Volume Name: test-ec-200
Type: Disperse
Volume ID: 2ce23e32-7086-49c5-bf0c-7612fd7b3d5d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (8 + 2) = 10
Transport-type: tcp
Bricks:
Brick1: dn-304:/mnt/test-ec-200/brick
Brick2: dn-305:/mnt/test-ec-200/brick
Brick3: dn-306:/mnt/test-ec-200/brick
Brick4: dn-307:/mnt/test-ec-200/brick
Brick5: dn-308:/mnt/test-ec-200/brick
Brick6: dn-309:/mnt/test-ec-200/brick
Brick7: dn-310:/mnt/test-ec-200/brick
Brick8: dn-311:/mnt/test-ec-200_2/brick
Brick9: dn-312:/mnt/test-ec-200/brick
Brick10: dn-313:/mnt/test-ec-200/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

Volume Name: test-ec-400
Type: Disperse
Volume ID: fe00713a-7099-404d-ba52-46c6b4b6ecc0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (8 + 2) = 10
Transport-type: tcp
Bricks:
Brick1: dn-304:/mnt/test-ec-400/brick
Brick2: dn-305:/mnt/test-ec-400/brick
Brick3: dn-306:/mnt/test-ec-400/brick
Brick4: dn-307:/mnt/test-ec-400/brick
Brick5: dn-308:/mnt/test-ec-400/brick
Brick6: dn-309:/mnt/test-ec-400/brick
Brick7: dn-310:/mnt/test-ec-400/brick
Brick8: dn-311:/mnt/test-ec-400_2/brick
Brick9: dn-312:/mnt/test-ec-400/brick
Brick10: dn-313:/mnt/test-ec-400/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

--

Regards
Rolf Arne Larsen
Ops Engineer
rolf@xxxxxxxxxxxxxx
