I think there's something else going on here. Our Gluster setup is a 4x1 replicated volume, so there should be no such discrepancies or missing unsynced data. In my experience the replication part of Gluster works very well, and all files replicate correctly to all 4 servers.
Yet a single ls can run for 20 minutes and trigger thousands of these log messages. I think this is worth investigating and fixing. Perhaps it's some tiny metadata change, or the files are all there and this metadata sync is what's eating all the extra time and CPU? I'd love to help diagnose and fix this.
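In case it helps narrow this down, here are the standard Gluster CLI commands I'd use to check whether a pending heal backlog explains the slowdown (VOLNAME is a placeholder for the actual volume name, and the client log path below is an assumption; the real name depends on the mount point):

```shell
# Show pending heal entries per brick; if replication is healthy,
# these should all be zero or empty.
gluster volume heal VOLNAME info

# Compact per-brick summary (available in recent Gluster releases).
gluster volume heal VOLNAME info summary

# Count dht self-heal messages in the client log around a slow ls.
# The log file name is derived from the mount point, so adjust as needed.
grep -c 'dht' /var/log/glusterfs/mnt-VOLNAME.log
```

If heal info comes back clean but the dht messages still pile up during directory listings, that would point at per-entry layout/metadata healing rather than missing file data.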
On Tue, May 19, 2020 at 4:09 AM Gionatan Danti <g.danti@xxxxxxxxxx> wrote:
Il 2020-05-19 13:07 Susant Palai ha scritto:
> This can happen when a server goes down (reboot, crash, network
> partition) during a fop execution. Once the brick is back up, dht will
> heal the entry so that operation goes smoothly.
> If there is a resultant error, it should have been logged in the
> client log.
Understood.
Thank you for the prompt reply.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it [1]
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8
________
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users