Re: Gluster-users Digest, Vol 168, Issue 13

Yes, you can directly update the volfile and restart glusterd.
Please file a GitHub issue if you run into any further problems.
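
In outline, that procedure might look like the following (a minimal
sketch only, not a tested recipe; the volume name "data" comes from
the paths in the report below, while the brick path and brick file
name are placeholders):

    # Sketch: back up /var/lib/glusterd before editing anything.
    systemctl stop glusterd

    # Find the device actually backing each brick; with GNU stat,
    # %d prints the decimal device ID and %m the mount point.
    stat -c '%d %m' /srv/bricks/brick1        # placeholder path

    # Fix the brick-fsid= line in each affected brick file so that
    # bricks on separate file systems carry distinct values.
    vi /var/lib/glusterd/vols/data/bricks/<brick-file>   # placeholder name

    systemctl start glusterd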



On Tue, Apr 19, 2022 at 5:32 PM <gluster-users-request@xxxxxxxxxxx> wrote:


Today's Topics:

   1. Mounted size reported incorrectly after replacing bricks
      (Patrick Dijkgraaf)


----------------------------------------------------------------------

Message: 1
Date: Tue, 19 Apr 2022 11:30:05 +0200
From: Patrick Dijkgraaf <bolderbasta@xxxxxxxxx>
To: gluster-users@xxxxxxxxxxx
Subject: Mounted size reported incorrectly after replacing bricks
Message-ID: <da1e295b001824362e4dab0180ce41859f7c9bf3.camel@xxxxxxxxx>
Content-Type: text/plain; charset="utf-8"

Hi all, I hope this message finds you well.

I sent some messages earlier, but I found that they bounced a lot due
to DMARC/SPF, so I am sending this question again from another mail
account. Please accept my apologies for spamming.

I've been running a Gluster volume (32 bricks in distributed-replicated
mode) on my 2 home servers for about 1.5 years now, and I'm generally
very happy with it! It's running on Arch Linux, and the current
glusterfs version is 10.1.

Because some disks were about to fail, I started replacing multiple
bricks, taking advantage of the opportunity to move to larger disks
(4 TB -> 8 TB). Healing copied all data to the new bricks and finished
successfully. However, the mounted Gluster volume now reports an
incorrect size.

Some things I have found (the checks sketched below show how to verify
them):

 * shared-brick-count in /var/lib/glusterd/vols/data/* is higher than 1
   for some local bricks, even though they are actually on separate
   file systems
 * there are duplicate brick-fsid values in
   /var/lib/glusterd/vols/data/bricks/*, even though the bricks are
   actually on separate file systems
 * restarting glusterd does not clear the duplicate brick-fsids
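
For concreteness, these findings can be reproduced roughly like this
("data" is the actual volume name; the brick paths are placeholders):

    # Per-brick shared-brick-count as computed by glusterd.
    grep -r shared-brick-count /var/lib/glusterd/vols/data/

    # Stored FSIDs; duplicate values across bricks that sit on
    # separate file systems are the problem.
    grep brick-fsid /var/lib/glusterd/vols/data/bricks/*

    # Real device IDs and mount points, for comparison.
    stat -c '%d %m' /srv/bricks/brick1 /srv/bricks/brick2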

So I am wondering where the duplicate FSIDs come from, and how to
(forcefully?) resolve them. Can I safely alter them in
/var/lib/glusterd/vols/data/bricks/* and then restart glusterd?

I *may* at some point have accidentally replaced a brick to the wrong
location, either the parent file system or another brick's path, but I
corrected this by replacing it again to the correct location. Each time
I used the "gluster volume replace-brick" command, as shown below.
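
For reference, the general form of that command is (server names and
brick paths here are placeholders):

    gluster volume replace-brick data \
        server1:/srv/bricks/old-brick server1:/srv/bricks/new-brick \
        commit force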

I have attached what I believe is all the relevant information needed
to diagnose the issue.
Please let me know if I can provide more information to get this issue
resolved.

--
groet / cheers,
Patrick Dijkgraaf
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Gluster issue.zip
Type: application/zip
Size: 211469 bytes
Desc: not available
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20220419/40df9490/attachment-0001.zip>
