Hi Mauro,
Yes, a rebalance consists of 2 operations for every directory:
- Fix the layout for the new volume config (newly added or removed bricks)
- Migrate files to their new hashed subvols based on the new layout
Are you running a rebalance because you added new bricks to the volume? As per an earlier email, you have already run a fix-layout.
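For reference, a minimal command sketch of those two operations with the standard gluster CLI (volume name "tier2" taken from this thread; run on any node of the trusted pool):

# fix-layout only: updates the directory layouts, does not move any data
gluster volume rebalance tier2 fix-layout start

# full rebalance: fixes the layouts and also migrates files to their new hashed subvols
gluster volume rebalance tier2 start

# per-node progress
gluster volume rebalance tier2 status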
On s04, please check the rebalance log file to see why the rebalance failed.
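As a quick starting point (a sketch only: the rebalance log normally lives under /var/log/glusterfs/ as <volname>-rebalance.log, so the exact path may differ on your installation):

# on s04, show the last error lines logged by the rebalance process
grep ' E ' /var/log/glusterfs/tier2-rebalance.log | tail -n 20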
Regards,
Nithya
On 8 October 2018 at 13:22, Mauro Tridici <mauro.tridici@xxxxxxx> wrote:
Hi All,
for your information, this is the current rebalance status:

[root@s01 ~]# gluster volume rebalance tier2 status
Node         Rebalanced-files      size      scanned    failures    skipped    status         run time in h:m:s
---------    -----------           --------  ---------  ----------  ---------  ------------   --------------
localhost    551922                20.3TB    2349397    0           61849      in progress    55:25:38
s02-stg      287631                13.2TB    959954     0           30262      in progress    55:25:39
s03-stg      288523                12.7TB    973111     0           30220      in progress    55:25:39
s04-stg      0                     0Bytes    0          0           0          failed         0:00:37
s05-stg      0                     0Bytes    0          0           0          completed      48:33:03
s06-stg      0                     0Bytes    0          0           0          completed      48:33:02
Estimated time left for rebalance to complete : 1023:49:56
volume rebalance: tier2: success

Rebalance is migrating files to the s05 and s06 servers, and to s04 too (although it is marked as failed). The s05 and s06 tasks are completed.

Questions:
1) It seems that the rebalance is moving files but is also fixing the layout. Is this normal?
2) When the rebalance is completed, what do we need to do before returning the gluster storage to the users? Do we have to launch the rebalance again in order to involve the s04 server too, or run a fix-layout to fix any errors on s04?

Thank you very much,
Mauro

On 7 October 2018 at 10:29, Mauro Tridici <mauro.tridici@xxxxxxx> wrote:

<tier2-rebalance.log.gz>

Hi All,
some important updates about the issue mentioned below. After the rebalance failed on all the servers, I decided to:
- stop the gluster volume
- reboot the servers
- start the gluster volume
- change some gluster volume options
- start the rebalance again

The options that I changed, after reading some threads on the gluster-users mailing list, are listed below.

BEFORE CHANGE:
gluster volume set tier2 network.ping-timeout 02
gluster volume set all cluster.brick-multiplex on
gluster volume set tier2 cluster.server-quorum-ratio 51%
gluster volume set tier2 cluster.server-quorum-type server
gluster volume set tier2 cluster.quorum-type auto

AFTER CHANGE:
gluster volume set tier2 network.ping-timeout 42
gluster volume set all cluster.brick-multiplex off
gluster volume set tier2 cluster.server-quorum-ratio none
gluster volume set tier2 cluster.server-quorum-type none
gluster volume set tier2 cluster.quorum-type none
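(One way to confirm which values are actually in effect after changes like these, assuming a GlusterFS release that provides "gluster volume get":)

gluster volume get tier2 network.ping-timeout
gluster volume get tier2 cluster.quorum-type
gluster volume get tier2 cluster.server-quorum-type
# cluster.brick-multiplex is a global option; depending on the release it may
# have to be queried with "gluster volume get all cluster.brick-multiplex"
gluster volume info tier2    # also lists all reconfigured options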
The result was that the rebalance started moving data from the s01, s02 and s03 servers to the s05 and s06 servers (the newly added ones), but it failed on the s04 server after 37 seconds.
The rebalance is still running and moving data, as you can see from the output:

[root@s01 ~]# gluster volume rebalance tier2 status
Node         Rebalanced-files      size      scanned    failures    skipped    status         run time in h:m:s
---------    -----------           --------  ---------  ----------  ---------  ------------   --------------
localhost    286680                12.6TB    1217960    0           43343      in progress    32:10:24
s02-stg      126291                12.4TB    413077     0           21932      in progress    32:10:25
s03-stg      126516                11.9TB    433014     0           21870      in progress    32:10:25
s04-stg      0                     0Bytes    0          0           0          failed         0:00:37
s05-stg      0                     0Bytes    0          0           0          in progress    32:10:25
s06-stg      0                     0Bytes    0          0           0          in progress    32:10:25
Estimated time left for rebalance to complete : 624:47:48
volume rebalance: tier2: success

When the rebalance is completed, we are planning to re-launch it to try to involve the s04 server as well.
Do you have any idea about what happened in my previous message, and why the rebalance is now running even though it does not involve the s04 server?
Attached is the complete tier2-rebalance.log file from the s04 server.
Thank you very much for your help,
Mauro

On 6 October 2018 at 02:01, Mauro Tridici <mauro.tridici@xxxxxxx> wrote:

<rebalance_log.txt>

Hi All,
since we need to restore the gluster storage as soon as possible, we decided to ignore the few files that could be lost and to go ahead. So we cleaned all the brick content on servers s04, s05 and s06.
As planned some days ago, we executed the following commands:

gluster peer detach s04
gluster peer detach s05
gluster peer detach s06
gluster peer probe s04
gluster peer probe s05
gluster peer probe s06
gluster volume add-brick tier2 s04-stg:/gluster/mnt1/brick s05-stg:/gluster/mnt1/brick s06-stg:/gluster/mnt1/brick s04-stg:/gluster/mnt2/brick s05-stg:/gluster/mnt2/brick s06-stg:/gluster/mnt2/brick s04-stg:/gluster/mnt3/brick s05-stg:/gluster/mnt3/brick s06-stg:/gluster/mnt3/brick s04-stg:/gluster/mnt4/brick s05-stg:/gluster/mnt4/brick s06-stg:/gluster/mnt4/brick s04-stg:/gluster/mnt5/brick s05-stg:/gluster/mnt5/brick s06-stg:/gluster/mnt5/brick s04-stg:/gluster/mnt6/brick s05-stg:/gluster/mnt6/brick s06-stg:/gluster/mnt6/brick s04-stg:/gluster/mnt7/brick s05-stg:/gluster/mnt7/brick s06-stg:/gluster/mnt7/brick s04-stg:/gluster/mnt8/brick s05-stg:/gluster/mnt8/brick s06-stg:/gluster/mnt8/brick s04-stg:/gluster/mnt9/brick s05-stg:/gluster/mnt9/brick s06-stg:/gluster/mnt9/brick s04-stg:/gluster/mnt10/brick s05-stg:/gluster/mnt10/brick s06-stg:/gluster/mnt10/brick s04-stg:/gluster/mnt11/brick s05-stg:/gluster/mnt11/brick s06-stg:/gluster/mnt11/brick s04-stg:/gluster/mnt12/brick s05-stg:/gluster/mnt12/brick s06-stg:/gluster/mnt12/brick force
gluster volume rebalance tier2 fix-layout start
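(The long add-brick argument list above does not have to be typed by hand; a small shell loop can generate it. This is only a sketch, assuming the same host names and /gluster/mntN/brick layout used in this thread:)

BRICKS=""
for i in $(seq 1 12); do
  for h in s04-stg s05-stg s06-stg; do
    BRICKS="$BRICKS $h:/gluster/mnt$i/brick"
  done
done
# print the command for review before actually running it
echo gluster volume add-brick tier2 $BRICKS force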
Everything seemed to be fine and the fix-layout ended.

[root@s01 ~]# gluster volume rebalance tier2 status
Node         status                  run time in h:m:s
---------    -----------             ------------
localhost    fix-layout completed    12:11:6
s02-stg      fix-layout completed    12:11:18
s03-stg      fix-layout completed    12:11:12
s04-stg      fix-layout completed    12:11:20
s05-stg      fix-layout completed    12:11:14
s06-stg      fix-layout completed    12:10:47
volume rebalance: tier2: success

[root@s01 ~]# gluster volume info
Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 12 x (4 + 2) = 72
Transport-type: tcp
Bricks:
Brick1: s01-stg:/gluster/mnt1/brick
Brick2: s02-stg:/gluster/mnt1/brick
Brick3: s03-stg:/gluster/mnt1/brick
Brick4: s01-stg:/gluster/mnt2/brick
Brick5: s02-stg:/gluster/mnt2/brick
Brick6: s03-stg:/gluster/mnt2/brick
Brick7: s01-stg:/gluster/mnt3/brick
Brick8: s02-stg:/gluster/mnt3/brick
Brick9: s03-stg:/gluster/mnt3/brick
Brick10: s01-stg:/gluster/mnt4/brick
Brick11: s02-stg:/gluster/mnt4/brick
Brick12: s03-stg:/gluster/mnt4/brick
Brick13: s01-stg:/gluster/mnt5/brick
Brick14: s02-stg:/gluster/mnt5/brick
Brick15: s03-stg:/gluster/mnt5/brick
Brick16: s01-stg:/gluster/mnt6/brick
Brick17: s02-stg:/gluster/mnt6/brick
Brick18: s03-stg:/gluster/mnt6/brick
Brick19: s01-stg:/gluster/mnt7/brick
Brick20: s02-stg:/gluster/mnt7/brick
Brick21: s03-stg:/gluster/mnt7/brick
Brick22: s01-stg:/gluster/mnt8/brick
Brick23: s02-stg:/gluster/mnt8/brick
Brick24: s03-stg:/gluster/mnt8/brick
Brick25: s01-stg:/gluster/mnt9/brick
Brick26: s02-stg:/gluster/mnt9/brick
Brick27: s03-stg:/gluster/mnt9/brick
Brick28: s01-stg:/gluster/mnt10/brick
Brick29: s02-stg:/gluster/mnt10/brick
Brick30: s03-stg:/gluster/mnt10/brick
Brick31: s01-stg:/gluster/mnt11/brick
Brick32: s02-stg:/gluster/mnt11/brick
Brick33: s03-stg:/gluster/mnt11/brick
Brick34: s01-stg:/gluster/mnt12/brick
Brick35: s02-stg:/gluster/mnt12/brick
Brick36: s03-stg:/gluster/mnt12/brick
Brick37: s04-stg:/gluster/mnt1/brick
Brick38: s05-stg:/gluster/mnt1/brick
Brick39: s06-stg:/gluster/mnt1/brick
Brick40: s04-stg:/gluster/mnt2/brick
Brick41: s05-stg:/gluster/mnt2/brick
Brick42: s06-stg:/gluster/mnt2/brick
Brick43: s04-stg:/gluster/mnt3/brick
Brick44: s05-stg:/gluster/mnt3/brick
Brick45: s06-stg:/gluster/mnt3/brick
Brick46: s04-stg:/gluster/mnt4/brick
Brick47: s05-stg:/gluster/mnt4/brick
Brick48: s06-stg:/gluster/mnt4/brick
Brick49: s04-stg:/gluster/mnt5/brick
Brick50: s05-stg:/gluster/mnt5/brick
Brick51: s06-stg:/gluster/mnt5/brick
Brick52: s04-stg:/gluster/mnt6/brick
Brick53: s05-stg:/gluster/mnt6/brick
Brick54: s06-stg:/gluster/mnt6/brick
Brick55: s04-stg:/gluster/mnt7/brick
Brick56: s05-stg:/gluster/mnt7/brick
Brick57: s06-stg:/gluster/mnt7/brick
Brick58: s04-stg:/gluster/mnt8/brick
Brick59: s05-stg:/gluster/mnt8/brick
Brick60: s06-stg:/gluster/mnt8/brick
Brick61: s04-stg:/gluster/mnt9/brick
Brick62: s05-stg:/gluster/mnt9/brick
Brick63: s06-stg:/gluster/mnt9/brick
Brick64: s04-stg:/gluster/mnt10/brick
Brick65: s05-stg:/gluster/mnt10/brick
Brick66: s06-stg:/gluster/mnt10/brick
Brick67: s04-stg:/gluster/mnt11/brick
Brick68: s05-stg:/gluster/mnt11/brick
Brick69: s06-stg:/gluster/mnt11/brick
Brick70: s04-stg:/gluster/mnt12/brick
Brick71: s05-stg:/gluster/mnt12/brick
Brick72: s06-stg:/gluster/mnt12/brick
Options Reconfigured:
network.ping-timeout: 42
features.scrub: Active
features.bitrot: on
features.inode-quota: on
features.quota: on
performance.client-io-threads: on
cluster.min-free-disk: 10
cluster.quorum-type: none
transport.address-family: inet
nfs.disable: on
server.event-threads: 4
client.event-threads: 4
cluster.lookup-optimize: on
performance.readdir-ahead: on
performance.parallel-readdir: off
cluster.readdir-optimize: on
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 50000
performance.io-cache: off
disperse.cpu-extensions: auto
performance.io-thread-count: 16
features.quota-deem-statfs: on
features.default-soft-limit: 90
cluster.server-quorum-type: none
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
cluster.brick-multiplex: off
cluster.server-quorum-ratio: 51%

The last step should be the data rebalance between the servers, but the rebalance failed soon after with a lot of errors like the following:

[2018-10-05 23:48:38.644978] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-tier2-client-70: Server lk version = 1
[2018-10-05 23:48:44.735323] I [dht-rebalance.c:4512:gf_defrag_start_crawl] 0-tier2-dht: gf_defrag_start_crawl using commit hash 3720331860
[2018-10-05 23:48:44.736205] W [MSGID: 122040] [ec-common.c:1097:ec_prepare_update_cbk] 0-tier2-disperse-7: Failed to get size and version [Input/output error]
[2018-10-05 23:48:44.736266] E [MSGID: 122034] [ec-common.c:613:ec_child_select] 0-tier2-disperse-7: Insufficient available children for this request (have 0, need 4)
[2018-10-05 23:48:44.736282] E [MSGID: 122037] [ec-common.c:2040:ec_update_size_version_done] 0-tier2-disperse-7: Failed to update version and size [Input/output error]
[2018-10-05 23:48:44.736377] W [MSGID: 122040] [ec-common.c:1097:ec_prepare_update_cbk] 0-tier2-disperse-8: Failed to get size and version [Input/output error]
[2018-10-05 23:48:44.736436] E [MSGID: 122034] [ec-common.c:613:ec_child_select] 0-tier2-disperse-8: Insufficient available children for this request (have 0, need 4)
[2018-10-05 23:48:44.736459] E [MSGID: 122037] [ec-common.c:2040:ec_update_size_version_done] 0-tier2-disperse-8: Failed to update version and size [Input/output error]
[2018-10-05 23:48:44.736460] W [MSGID: 122040] [ec-common.c:1097:ec_prepare_update_cbk] 0-tier2-disperse-10: Failed to get size and version [Input/output error]
[2018-10-05 23:48:44.736537] W [MSGID: 122040] [ec-common.c:1097:ec_prepare_update_cbk] 0-tier2-disperse-9: Failed to get size and version [Input/output error]
[2018-10-05 23:48:44.736571] E [MSGID: 122034] [ec-common.c:613:ec_child_select] 0-tier2-disperse-10: Insufficient available children for this request (have 0, need 4)
[2018-10-05 23:48:44.736574] E [MSGID: 122034] [ec-common.c:613:ec_child_select] 0-tier2-disperse-9: Insufficient available children for this request (have 0, need 4)
[2018-10-05 23:48:44.736604] E [MSGID: 122037] [ec-common.c:2040:ec_update_size_version_done] 0-tier2-disperse-9: Failed to update version and size [Input/output error]
[2018-10-05 23:48:44.736604] E [MSGID: 122037] [ec-common.c:2040:ec_update_size_version_done] 0-tier2-disperse-10: Failed to update version and size [Input/output error]
[2018-10-05 23:48:44.736827] W [MSGID: 122040] [ec-common.c:1097:ec_prepare_update_cbk] 0-tier2-disperse-11: Failed to get size and version [Input/output error]
[2018-10-05 23:48:44.736887] E [MSGID: 122034] [ec-common.c:613:ec_child_select] 0-tier2-disperse-11: Insufficient available children for this request (have 0, need 4)
[2018-10-05 23:48:44.736904] E [MSGID: 122037] [ec-common.c:2040:ec_update_size_version_done] 0-tier2-disperse-11: Failed to update version and size [Input/output error]
[2018-10-05 23:48:44.740337] W [MSGID: 122040] [ec-common.c:1097:ec_prepare_update_cbk] 0-tier2-disperse-6: Failed to get size and version [Input/output error]
[2018-10-05 23:48:44.740381] E [MSGID: 122034] [ec-common.c:613:ec_child_select] 0-tier2-disperse-6: Insufficient available children for this request (have 0, need 4)
[2018-10-05 23:48:44.740394] E [MSGID: 122037] [ec-common.c:2040:ec_update_size_version_done] 0-tier2-disperse-6: Failed to update version and size [Input/output error]
[2018-10-05 23:48:50.066103] I [MSGID: 109081] [dht-common.c:4379:dht_setxattr] 0-tier2-dht: fixing the layout of /

In attachment you can find the first logs captured during the rebalance execution.
In your opinion, is there a way to restore the gluster storage, or have all the data been lost?
Thank you in advance,
Mauro
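(The "Insufficient available children for this request (have 0, need 4)" messages on disperse subvolumes 6-11 suggest that the rebalance process could not reach any brick of the newly added disperse sets at that moment. A quick, non-definitive way to check that side of things:)

gluster volume status tier2   # every brick should report Online = Y
gluster peer status           # every peer should be "Peer in Cluster (Connected)"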
On 4 October 2018 at 15:31, Mauro Tridici <mauro.tridici@xxxxxxx> wrote:

Hi Nithya,
thank you very much.
This is the current "gluster volume info" output after removing the bricks (and after the peer detach commands).

[root@s01 ~]# gluster volume info
Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (4 + 2) = 36
Transport-type: tcp
Bricks:
Brick1: s01-stg:/gluster/mnt1/brick
Brick2: s02-stg:/gluster/mnt1/brick
Brick3: s03-stg:/gluster/mnt1/brick
Brick4: s01-stg:/gluster/mnt2/brick
Brick5: s02-stg:/gluster/mnt2/brick
Brick6: s03-stg:/gluster/mnt2/brick
Brick7: s01-stg:/gluster/mnt3/brick
Brick8: s02-stg:/gluster/mnt3/brick
Brick9: s03-stg:/gluster/mnt3/brick
Brick10: s01-stg:/gluster/mnt4/brick
Brick11: s02-stg:/gluster/mnt4/brick
Brick12: s03-stg:/gluster/mnt4/brick
Brick13: s01-stg:/gluster/mnt5/brick
Brick14: s02-stg:/gluster/mnt5/brick
Brick15: s03-stg:/gluster/mnt5/brick
Brick16: s01-stg:/gluster/mnt6/brick
Brick17: s02-stg:/gluster/mnt6/brick
Brick18: s03-stg:/gluster/mnt6/brick
Brick19: s01-stg:/gluster/mnt7/brick
Brick20: s02-stg:/gluster/mnt7/brick
Brick21: s03-stg:/gluster/mnt7/brick
Brick22: s01-stg:/gluster/mnt8/brick
Brick23: s02-stg:/gluster/mnt8/brick
Brick24: s03-stg:/gluster/mnt8/brick
Brick25: s01-stg:/gluster/mnt9/brick
Brick26: s02-stg:/gluster/mnt9/brick
Brick27: s03-stg:/gluster/mnt9/brick
Brick28: s01-stg:/gluster/mnt10/brick
Brick29: s02-stg:/gluster/mnt10/brick
Brick30: s03-stg:/gluster/mnt10/brick
Brick31: s01-stg:/gluster/mnt11/brick
Brick32: s02-stg:/gluster/mnt11/brick
Brick33: s03-stg:/gluster/mnt11/brick
Brick34: s01-stg:/gluster/mnt12/brick
Brick35: s02-stg:/gluster/mnt12/brick
Brick36: s03-stg:/gluster/mnt12/brick
Options Reconfigured:
network.ping-timeout: 0
features.scrub: Active
features.bitrot: on
features.inode-quota: on
features.quota: on
performance.client-io-threads: on
cluster.min-free-disk: 10
cluster.quorum-type: auto
transport.address-family: inet
nfs.disable: on
server.event-threads: 4
client.event-threads: 4
cluster.lookup-optimize: on
performance.readdir-ahead: on
performance.parallel-readdir: off
cluster.readdir-optimize: on
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 50000
performance.io-cache: off
disperse.cpu-extensions: auto
performance.io-thread-count: 16
features.quota-deem-statfs: on
features.default-soft-limit: 90
cluster.server-quorum-type: server
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
cluster.brick-multiplex: on
cluster.server-quorum-ratio: 51%

Regards,
Mauro
On 4 October 2018 at 15:22, Nithya Balachandran <nbalacha@xxxxxxxxxx> wrote:

Hi Mauro,
The files on s04 and s05 can be deleted safely as long as those bricks have been removed from the volume and their brick processes are not running.
.glusterfs/indices/xattrop/xattrop-* are links to files that need to be healed.
.glusterfs/quarantine/stub-00000000-0000-0000-0000-000000000008 links to files that bitrot (if enabled) says are corrupted (none in this case).
I will get back to you on s06. Can you please provide the output of gluster volume info again?
Regards,
Nithya
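(If it helps, the entries behind those .glusterfs/indices/xattrop links can also be listed from the CLI; a sketch, assuming the same volume name and that the heal/bitrot features shown earlier in this thread are still enabled:)

gluster volume heal tier2 info            # entries still pending heal, per brick
gluster volume bitrot tier2 scrub status  # bitrot scrub summary (features.bitrot is on)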
On 4 October 2018 at 13:47, Mauro Tridici <mauro.tridici@xxxxxxx> wrote:

Dear Ashish, Dear Nithya,
I'm writing this message only to summarize and simplify the information about the "not migrated" files left on the removed bricks on servers s04, s05 and s06.
In attachment, you can find 3 files (one file for each server) containing the "not migrated" file lists and the related brick number.
In particular:
- the s04 and s05 bricks contain only not-migrated files in the hidden directories "/gluster/mnt#/brick/.glusterfs" (I could delete them, couldn't I?);
- the s06 bricks contain:
  - not-migrated files in the hidden directories "/gluster/mnt#/brick/.glusterfs";
  - not-migrated files with size equal to 0;
  - not-migrated files with size greater than 0.
I thought it was necessary to collect and summarize this information to simplify your analysis.
Thank you very much,
Mauro
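(For anyone who needs to repeat this kind of check, the per-brick lists of leftover files outside the internal .glusterfs directory can be produced with something like the following; a sketch only, assuming the brick mount paths used in this thread:)

for i in $(seq 1 12); do
  echo "== /gluster/mnt$i/brick =="
  # skip the internal .glusterfs tree, list everything else that is still a regular file
  find /gluster/mnt$i/brick -path "/gluster/mnt$i/brick/.glusterfs" -prune -o -type f -print
done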
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users