On 16 October 2018 at 20:04, <jring@xxxxxxx> wrote:
Hi,
> > So we did a quick grep shared-brick-count /var/lib/glusterd/vols/data_vol1/* on all boxes and found that on 5 out of 6 boxes we had shared-brick-count=0 for all bricks on remote boxes and 1 for local bricks.
> >
> > Is this the expected result, or should it be 1 everywhere (as the quick-fix script from the case sets it)?
>
> No, this is fine. The shared-brick-count only needs to be 1 for the local bricks; the value for the remote bricks can be 0.
>
> > Also, on one box (the one we created the volume from, btw) we have shared-brick-count=0 for all remote bricks and 10 for the local bricks.
>
> This is a problem. The shared-brick-count should be 1 for the local bricks here as well.
>
> > Is it possible that the bug from 3.4 still exists in 4.1.5, and should we try the filter script that sets shared-brick-count=1 for all bricks?
> >
>
> Can you try
> 1. restarting glusterd on all the nodes one after another (not at the same time)
> 2. Setting a volume option (say gluster volume set <volname> cluster.min-free-disk 11%)
>
> and see if it fixes the issue?
OK, that was a quick fix - the volume size is reported correctly again and shared-brick-count is correct everywhere.
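For the record, the grep from above now shows sane values on every node, roughly like this (hostnames and brick paths below are made-up examples; the pattern is what matters):

    # on server1:
    $ cd /var/lib/glusterd/vols/data_vol1
    $ grep shared-brick-count *.vol
    data_vol1.server1.bricks-brick1.vol:    option shared-brick-count 1
    data_vol1.server2.bricks-brick1.vol:    option shared-brick-count 0
    data_vol1.server3.bricks-brick1.vol:    option shared-brick-count 0

i.e. on each box, 1 for its own brick(s) and 0 for everything else.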
We'll duly note this in our wiki.
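For the wiki entry, the sequence boils down to this (assuming a systemd-based distro for the restart; data_vol1 is our volume name):

    # On each node, one at a time - wait for glusterd to come back
    # up before moving on to the next node:
    systemctl restart glusterd

    # Then, from any single node, set a volume option so glusterd
    # regenerates the volfiles:
    gluster volume set data_vol1 cluster.min-free-disk 11%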
Thanks a lot!

Joachim
If there were any directories created on the volume while the sizes were wrong, the layouts set on them are probably incorrect. You might want to run a fix-layout on the volume.
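For reference, that would be along these lines (using the volume name from your mails):

    gluster volume rebalance data_vol1 fix-layout start
    # and to watch its progress:
    gluster volume rebalance data_vol1 status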
Regards,
Nithya