I think the following messages are harmless:
[2021-01-26 19:28:40.652898] W [MSGID: 101159] [inode.c:1212:__inode_unlink] 0-inode: be318638-e8a0-4c6d-977d-7a937aa84806/48bb5288-e27e-46c9-9f7c-944a804df361.1: dentry not found in 48bb5288-e27e-46c9-9f7c-944a804df361
[2021-01-26 19:28:40.652975] W [MSGID: 101159] [inode.c:1212:__inode_unlink] 0-inode: be318638-e8a0-4c6d-977d-7a937aa84806/931508ed-9368-4982-a53e-7187a9f0c1f9.3: dentry not found in 931508ed-9368-4982-a53e-7187a9f0c1f9
[2021-01-26 19:28:40.653047] W [MSGID: 101159] [inode.c:1212:__inode_unlink] 0-inode: be318638-e8a0-4c6d-977d-7a937aa84806/e808ecab-2e70-4ef3-954e-ce1b78ed8b52.4: dentry not found in e808ecab-2e70-4ef3-954e-ce1b78ed8b52
[2021-01-26 19:28:40.653102] W [MSGID: 101159] [inode.c:1212:__inode_unlink] 0-inode: be318638-e8a0-4c6d-977d-7a937aa84806/2c62c383-d869-4655-9c03-f08a86a874ba.6: dentry not found in 2c62c383-d869-4655-9c03-f08a86a874ba
[2021-01-26 19:28:40.653169] W [MSGID: 101159] [inode.c:1212:__inode_unlink] 0-inode: be318638-e8a0-4c6d-977d-7a937aa84806/556ffbc9-bcbe-445a-93f5-13784c5a6df1.2: dentry not found in 556ffbc9-bcbe-445a-93f5-13784c5a6df1
[2021-01-26 19:28:40.653218] W [MSGID: 101159] [inode.c:1212:__inode_unlink] 0-inode: be318638-e8a0-4c6d-977d-7a937aa84806/5d414e7c-335d-40da-bb96-6c427181338b.5: dentry not found in 5d414e7c-335d-40da-bb96-6c427181338b
[2021-01-26 19:28:40.653314] W [MSGID: 101159] [inode.c:1212:__inode_unlink] 0-inode: be318638-e8a0-4c6d-977d-7a937aa84806/43364dc9-2d8e-4fca-89d2-e11dee6fcfd4.8: dentry not found in 43364dc9-2d8e-4fca-89d2-e11dee6fcfd4
Also, I would like to point out that I have VMs with large disks (1TB and 2TB) and have no issues. I would definitely upgrade the Gluster version, to at least 7.9 say.
Amar also asked a question about enabling sharding on the volume after the VM disks were created, which would certainly mess up the volume if that is what happened.
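If it helps, a quick way to check the current sharding state is something like the following (just a sketch; "myvol" is a placeholder volume name):

    # Show whether sharding is enabled and the configured shard size
    gluster volume get myvol features.shard
    gluster volume get myvol features.shard-block-size

As far as I know, sharding should be enabled before the disk images are created; existing files are not re-sharded when the option is turned on later.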
On Wed, Jan 27, 2021 at 5:28 PM Erik Jacobson <erik.jacobson@xxxxxxx> wrote:
> > Shortly after the sharded volume is made, there are some fuse mount
> > messages. I'm not 100% sure if this was just before or during the
> > big qemu-img command to make the 5T image
> > (qemu-img create -f raw -o preallocation=falloc
> > /adminvm/images/adminvm.img 5T)
> Any reason to have a single disk with this size?
> Usually in any virtualization I have used, it is always recommended to
> keep it lower. Have you thought about multiple disks with a smaller size?
Yes, because the actual virtual machine is an admin node/head node cluster
manager for a supercomputer that hosts big OS images and drives
multi-thousand-node clusters (boot, monitoring, image creation,
distribution, sometimes NFS roots, etc.). So this VM is a biggie.
We could make multiple smaller images, but it would be very painful since
it differs from the normal non-VM setup.
So unlike many solutions where you have lots of small VMs with their own
small images, this solution is one giant VM with one giant image.
We're essentially using Gluster in this use case (as opposed to others I
have posted about in the past) for head node failover (combined with
Pacemaker).
> Also worth noting is that RHHI is supported only when the shard size is
> 512MB, so it's worth trying a bigger shard size.
I have put a larger shard size and a newer Gluster version on the list to
try. Thank you! Hoping to get it failing again so I can try these things!
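For reference, a sketch of how the bigger shard size could be set ("myvol" is a placeholder volume name; the new size only applies to files created after the change):

    # Raise the shard block size to 512MB (affects newly created files only)
    gluster volume set myvol features.shard-block-size 512MB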
Respectfully
Mahdi