After deleting the file, the output of heal info is clear.
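For reference, the kind of check this refers to, as a minimal sketch (VOLNAME stands in for the actual volume name; both are standard gluster CLI subcommands):

# gluster volume heal VOLNAME info
# gluster volume heal VOLNAME info split-brain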
Neither did I; this was a completely fresh setup with 1-2 VMs and 1-2 Proxmox LXC templates. I let it run for a few days, and at some point it ended up in the state I mentioned. I will continue to monitor it and start filling the bricks with data.
Thanks for your help!
On Mon, Nov 1, 2021 at 02:54, Ravishankar N <ravishankar.n@xxxxxxxxxxx> wrote:
On Mon, Nov 1, 2021 at 12:02 AM Thorsten Walk <darkiop@xxxxxxxxx> wrote:

Hi Ravi, the file only exists on pve01, and only once:

┬[19:22:10] [ssh:root@pve01(192.168.1.50): ~ (700)]
╰─># stat /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
File: /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
Size: 6 Blocks: 8 IO Block: 4096 regular file
Device: fd12h/64786d Inode: 528 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2021-10-30 14:34:50.385893588 +0200
Modify: 2021-10-27 00:26:43.988756557 +0200
Change: 2021-10-27 00:26:43.988756557 +0200
Birth: -

┬[19:24:41] [ssh:root@pve01(192.168.1.50): ~ (700)]
╰─># ls -l /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
.rw-r--r-- root root 6B 4 days ago /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
┬[19:24:54] [ssh:root@pve01(192.168.1.50): ~ (700)]
╰─># cat /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
28084

Hi Thorsten, you can delete the file. From the file size and contents, it looks like it belongs to ovirt sanlock. Not sure why you ended up in this situation (maybe unlink partially failed on this brick?). You can check the mount, brick, and self-heal daemon logs for this gfid to see if you find related error/warning messages.

-Ravi
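For reference, a way to search those logs for the gfid, assuming the default log locations under /var/log/glusterfs/ (mount/client logs in the directory itself, brick logs under bricks/, and the self-heal daemon in glustershd.log):

# grep -rn "26c5396c-86ff-408d-9cda-106acd2b0768" /var/log/glusterfs/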