Hi Susant,
and thanks for your fast reply and for pointing me to that log. With it I was able to find the problem: "dht-rebalance.c:1052:__dht_check_free_space] 0-vol4-dht: Could not find any subvol with space accomodating the file"
But the volume detail output and df -h show xTB of free disk space and also free inodes.
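For reference, this is roughly how I checked (vol4 is the volume from the log line above; /data/brick3 is just a stand-in for my real brick mount point):

grep "Could not find any subvol" /var/log/glusterfs/vol4-rebalance.log
df -h /data/brick3    # free space on the brick filesystem
df -i /data/brick3    # free inodes on the brick filesystem
gluster volume status vol4 detail    # per-brick free space and inode counts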
Options Reconfigured:
performance.client-io-threads: on
storage.reserve: 0
performance.parallel-readdir: off
performance.readdir-ahead: off
auth.allow: 192.168.0.*
nfs.disable: off
transport.address-family: inet
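Since storage.reserve is already 0 above, I also double-checked the live values (assuming a gluster version that has "volume get"); cluster.min-free-disk also influences where DHT will place files during rebalance, so I looked at both:

gluster volume get vol4 storage.reserve
gluster volume get vol4 cluster.min-free-disk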
Ok, since there is enough disk space on the other bricks and I never actually ran the final "commit", can I rerun remove-brick to rebalance the last files and folders?
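In other words something like the following, where node3:/data/brick3 stands in for my real brick:

gluster volume remove-brick vol4 node3:/data/brick3 start
gluster volume remove-brick vol4 node3:/data/brick3 status
# and only after status shows everything has migrated:
gluster volume remove-brick vol4 node3:/data/brick3 commit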
Thanks
Taste
Would it be possible for you to pass the rebalance log file on the node from which you want to remove the brick? (location: /var/log/glusterfs/<volume-name>-rebalance.log) + the following information:

1 - gluster volume info
2 - gluster volume status
3 - df -h output on all 3 nodes

Susant

On Tue, Mar 12, 2019 at 3:08 PM Taste-Of-IT <kontakt@xxxxxxxxxxxxxx> wrote:

Hi,
I have a 3-node distributed Gluster setup with one volume across all 3 nodes/bricks. I want to remove one brick, so I ran gluster volume remove-brick <vol> <brickname> start. The job completes but shows 11960 failures and transfers only 5TB out of 15TB of data. There are still files and folders of this volume on the brick to remove. I didn't actually run the final command with "commit". Both of the other nodes have over 6TB of free space each, so they could theoretically hold the remaining data from brick3.
Need help.
thx
Taste
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users