> Are you sure that there are no heals pending at the time of the power up?

I was watching heals when the problem was persisting and it was all clear. This was a great suggestion, though.

> I checked my oVirt-based gluster and the only difference is:
> cluster.granular-entry-heal: enable
> The options seem fine.
>
> > libglusterfs0-7.2-4723.1520.210122T1700.a.sles15sp2hpe.x86_64
> > glusterfs-7.2-4723.1520.210122T1700.a.sles15sp2hpe.x86_64
> > python3-gluster-7.2-4723.1520.210122T1700.a.sles15sp2hpe.noarch
>
> This one is quite old, although it never caused any troubles with my
> oVirt VMs. Either try with the latest v7 or even v8.3.

I can try a newer version. The issue is that we have to do massive testing with thousands of nodes to validate function, and that isn't always available. So we tend to latch on to a good release and stage an upgrade when we have a big enough system in the factory.

In this case, though, the use case is a single VM. If I could find a way to reproduce the problem, I would be able to tell whether upgrading helped. These hard-to-reproduce problems are painful! We keep hitting it in places, but triggering it has been elusive.

THANK YOU for replying. I will continue to try to reproduce the problem. If I can get it back to a consistent failure, I'll try updating gluster, take another closer look at the logs, and post them.

Erik

________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users