By the way, here is part of the brick log:
[2020-08-12 07:08:32.646082] I [MSGID: 115029] [server-handshake.c:561:server_setvolume] 0-pool-server: accepted client from CTX_ID:9eea4bec-a522-4a29-be83-5d66c04ce6ee-GRAPH_ID:0-PID:7652-HOST:nabu-PC_NAME:pool-client-2-RECON_NO:-0 (version: 8.0) with subvol /wall/pool/brick
[2020-08-12 07:08:32.669522] E [MSGID: 113040] [posix-inode-fd-ops.c:1727:posix_readv] 0-pool-posix: read failed on gfid=231fbad6-8d8d-4555-8137-2362a06fc140, fd=0x7f342800ca38, offset=0 size=512, buf=0x7f345450f000 [Invalid argument]
[2020-08-12 07:08:32.669565] E [MSGID: 115068] [server-rpc-fops_v2.c:1374:server4_readv_cbk] 0-pool-server: READ info [{frame=34505}, {READV_fd_no=0}, {uuid_utoa=231fbad6-8d8d-4555-8137-2362a06fc140}, {client=CTX_ID:9eea4bec-a522-4a29-be83-5d66c04ce6ee-GRAPH_ID:0-PID:7652-HOST:nabu-PC_NAME:pool-client-2-RECON_NO:-0}, {error-xlator=pool-posix}, {errno=22}, {error=Invalid argument}]
[2020-08-12 07:08:33.241625] E [MSGID: 113040] [posix-inode-fd-ops.c:1727:posix_readv] 0-pool-posix: read failed on gfid=231fbad6-8d8d-4555-8137-2362a06fc140, fd=0x7f342800ca38, offset=0 size=512, buf=0x7f345450f000 [Invalid argument]
[2020-08-12 07:08:33.241669] E [MSGID: 115068] [server-rpc-fops_v2.c:1374:server4_readv_cbk] 0-pool-server: READ info [{frame=34507}, {READV_fd_no=0}, {uuid_utoa=231fbad6-8d8d-4555-8137-2362a06fc140}, {client=CTX_ID:9eea4bec-a522-4a29-be83-5d66c04ce6ee-GRAPH_ID:0-PID:7652-HOST:nabu-PC_NAME:pool-client-2-RECON_NO:-0}, {error-xlator=pool-posix}, {errno=22}, {error=Invalid argument}]
[2020-08-12 07:09:45.897326] W [socket.c:767:__socket_rwv] 0-tcp.pool-server: readv on 192.168.222.25:49081 failed (No data available)
[2020-08-12 07:09:45.897357] I [MSGID: 115036] [server.c:498:server_rpc_notify] 0-pool-server: disconnecting connection [{client-uid=CTX_ID:9eea4bec-a522-4a29-be83-5d66c04ce6ee-GRAPH_ID:0-PID:7652-HOST:nabu-PC_NAME:pool-client-2-RECON_NO:-0}]
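The failing reads are 512 bytes at offset 0 and return [Invalid argument], which is what O_DIRECT reads typically return when the size or alignment does not match the device's logical block size; VDO volumes use a 4 KB block size. A rough way to check this directly on the brick (the .glusterfs path below is derived from the gfid in the log above; the VDO device name is only a placeholder) could be:

# logical block size reported by the VDO device (device name is a placeholder)
blockdev --getss /dev/mapper/vdo_pool

# a 512-byte O_DIRECT read of the failing gfid on the brick; on a device with a
# 4096-byte logical block size this is expected to fail with "Invalid argument"
dd if=/wall/pool/brick/.glusterfs/23/1f/231fbad6-8d8d-4555-8137-2362a06fc140 \
   of=/dev/null bs=512 count=1 iflag=direct

# the same read with a 4096-byte block size should succeed
dd if=/wall/pool/brick/.glusterfs/23/1f/231fbad6-8d8d-4555-8137-2362a06fc140 \
   of=/dev/null bs=4096 count=1 iflag=direct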
Thank you!
On 12.08.2020 11:00, Dmitry Melekhov wrote:
Hello!
We are testing Gluster 8 on CentOS 8.2, and we are trying to use a volume
created over VDO.
This is a 2-node setup.
There is LVM created over VDO, with an XFS filesystem on top.
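For reference, a minimal sketch of how such a stack is usually assembled (device names, volume group names and the /wall/pool mount point are assumptions based on the brick path in the log, not the exact commands used here):

# VDO device on the raw disk (names are placeholders)
vdo create --name=vdo_pool --device=/dev/sdb
# LVM layered on top of the VDO device
pvcreate /dev/mapper/vdo_pool
vgcreate vg_pool /dev/mapper/vdo_pool
lvcreate -n lv_brick -l 100%FREE vg_pool
# XFS filesystem for the brick directory
mkfs.xfs /dev/vg_pool/lv_brick
mount /dev/vg_pool/lv_brick /wall/pool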
The test VM runs just fine if we run it over FUSE:
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='directsync'/>
<source file='/root/pool/stewjon.img'/>
<target dev='vda' bus='virtio'/>
</disk>
/root/pool/ is a FUSE mount.
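(Such a FUSE mount would typically be created with something like the following, with the volume name 'pool' taken from the disk definitions in this mail:)

mount -t glusterfs 127.0.0.1:/pool /root/pool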
But if we try to run:
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='directsync'/>
<source protocol='gluster' name='pool/stewjon.img'>
<host name='127.0.0.1'/>
</source>
<target dev='vda' bus='virtio'/>
</disk>
then the VM fails to boot; QEMU reports no bootable device.
It works without cache='directsync' though.
But live migration does not work.
By the way, everything works OK if we run the VM on a Gluster volume without VDO...
Any ideas what might be causing this and how it can be fixed?
Thank you!