Healing of a volume of type disperse

Hi all, I'm pretty new to GlusterFS. I managed to set up a dispersed
volume (4+2) following the manual, using release 6.1 from the CentOS
repository. Is that a stable release?
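
For context, this is roughly the create command I used (reconstructed
from memory, so the exact invocation may have differed slightly):

# 6 bricks, 4 data + 2 redundancy; "disperse 6 redundancy 2" is
# equivalent to "disperse-data 4 redundancy 2"
gluster volume create elastic-volume disperse 6 redundancy 2 \
    dev-netflow0{1..6}.fineco.it:/data/gfs/lv_elastic/brick1/brick
gluster volume start elastic-volume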
Then I forced the volume to stop while the application was writing to
the mount point, deliberately producing an inconsistent state, and I'm
wondering what the best practice is for resolving this kind of
situation. I found a detailed explanation of how to resolve
split-brain states of replicated volumes at
https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/
but it does not seem applicable to the disperse type.
Have I missed some important piece of documentation? Please point me
to a reference. Below is a sketch of how I got here, followed by the
command output.
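
To reproduce the inconsistency, I did essentially the following
(a sketch: the real writer was our application, shown here as a dd
loop, and /mnt/elastic is a hypothetical stand-in for our FUSE mount):

# client side: keep writing through the FUSE mount
while true; do
    dd if=/dev/zero of=/mnt/elastic/data/logs/testfile bs=1M count=10
done

# server side, mid-write: force-stop the volume, then bring it back
gluster volume stop elastic-volume force
gluster volume start elastic-volume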
Here is the command detail:

# gluster volume info elastic-volume

Volume Name: elastic-volume
Type: Disperse
Volume ID: 96773fef-c443-465b-a518-6630bcf83397
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: dev-netflow01.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick2: dev-netflow02.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick3: dev-netflow03.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick4: dev-netflow04.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick5: dev-netflow05.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick6: dev-netflow06.fineco.it:/data/gfs/lv_elastic/brick1/brick
Options Reconfigured:
performance.io-cache: off
performance.io-thread-count: 64
performance.write-behind-window-size: 100MB
performance.cache-size: 1GB
nfs.disable: on
transport.address-family: inet


# gluster volume heal elastic-volume info
Brick dev01:/data/gfs/lv_elastic/brick1/brick
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
Status: Connected
Number of entries: 12

Brick dev02:/data/gfs/lv_elastic/brick1/brick
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
Status: Connected
Number of entries: 12

Brick dev03:/data/gfs/lv_elastic/brick1/brick
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
Status: Connected
Number of entries: 12

Brick dev04:/data/gfs/lv_elastic/brick1/brick
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
Status: Connected
Number of entries: 12

Brick dev05:/data/gfs/lv_elastic/brick1/brick
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
Status: Connected
Number of entries: 12

Brick dev06:/data/gfs/lv_elastic/brick1/brick
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
Status: Connected
Number of entries: 12

# gluster volume heal elastic-volume info split-brain
Volume elastic-volume is not of type replicate
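
My understanding so far (please correct me) is that for a disperse
volume there is no split-brain resolution CLI, and the self-heal
daemon is expected to repair these entries by itself; all I can do is
trigger it and watch, along these lines:

# trigger an index heal of the entries listed above
gluster volume heal elastic-volume

# or force a full heal of the whole volume
gluster volume heal elastic-volume full

# watch the pending-entry count go down
gluster volume heal elastic-volume info summary

Is that the intended procedure, or is manual intervention on the
bricks ever needed for disperse volumes?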

Any advice?

Best regards

Luca