Gluster CLI problem during oVirt Installation



Hi everybody!

I have a problem connecting a Gluster storage domain during a new installation of oVirt 4.4. I think I have tracked it down to a problem with the Gluster CLI (the error message is "Error outputting to xml"), which is why I am reaching out for help here. [1]

During the setup process, the following steps seem to take place (at least that is what I derived from my logs):

- The Ansible setup script requests the GlusterFS address and boot options from the user

- VDSM asks for connected storage pools and gets back an empty list:

2023-07-18 16:32:35,348+0200 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList() from=internal, task_id=836164a8-882e-47a4-8f22-689f22425a6f (api:48)
2023-07-18 16:32:35,348+0200 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=836164a8-882e-47a4-8f22-689f22425a6f (api:54)

- The newly deployed engine wants to connect to the storage with the information provided by the user:

2023-07-18 16:32:35,512+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-1) [85910c72-2f78-4f1c-a2f4-cea3e57d2b49] START, ConnectStorageServerVDSCommand(HostName = ovirt.martinwi.local, StorageServerConnectionManagementVDSParameters:{hostId='4163f25c-60a5-45df-a954-6f8956103c23', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='GLUSTERFS', connectionList='[StorageServerConnections:{id='null', connection='gluster1.martinwi.local:/gv3', iqn='null', vfsType='glusterfs', mountOptions='', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 7bfc54aa

- VDSM wants to connect to the storage:

2023-07-18 16:32:35,531+0200 INFO (jsonrpc/6) [vdsm.api] START connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'password': '********', 'vfs_type': 'glusterfs', 'port': '', 'iqn': '', 'connection': 'gluster1.martinwi.local:/gv3', 'ipv6_enabled': 'false', 'id': '00000000-0000-0000-0000-000000000000', 'user': '', 'tpgt': '1'}]) from=::ffff:,47396, flow_id=85910c72-2f78-4f1c-a2f4-cea3e57d2b49, task_id=3c24b1e6-fe6a-4987-882a-4392fc920b7e (api:48)

- And finally SuperVDSM calls the Gluster CLI to make the request:

MainProcess|jsonrpc/6::DEBUG::2023-07-18 16:32:35,533::commands::153::common.commands::(start) /usr/bin/taskset --cpu-list 0-3 /usr/sbin/gluster --mode=script volume info --remote-host=gluster1.martinwi.local gv3 --xml (cwd None)

- And the request fails:

[2023-07-18 14:32:35.546068] I [cli.c:722:cli_rpc_init] 0-cli: Connecting to remote glusterd at gluster1.martinwi.local
[2023-07-18 14:32:35.630925] I [cli-rpc-ops.c:756:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0
[2023-07-18 14:32:35.631026] E [cli-rpc-ops.c:825:gf_cli_get_volume_cbk] 0-cli: Error outputting to xml
[2023-07-18 14:32:35.631083] I [input.c:31:cli_batch] 0-: Exiting with: -2

I suspected a syntax problem and tried the command "gluster volume info gv3 --mode=script --remote-host=gluster1.martinwi.local --xml" manually, along with some other variants, on the oVirt node, but got the same error message. On the Gluster servers I can output the XML just fine.
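
In case it helps with reproducing this, these are the kinds of invocations I tried on the oVirt node (just a sketch; host and volume names come from my setup above, and my assumption is that the --xml switch is what triggers the failing code path, since the error comes from the XML output routine):

gluster --mode=script --remote-host=gluster1.martinwi.local volume info gv3
gluster --mode=script --remote-host=gluster1.martinwi.local volume info gv3 --xml
gluster --mode=script --remote-host=gluster1.martinwi.local volume info --xml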

There does not seem to be a network issue, since I can see packets containing the request and a reply with the proper volume information (the volume options are, by the way, compliant with the docs: "storage.owner-gid: 36", "storage.owner-uid: 36", etc.).
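
For completeness, this is roughly how I captured that traffic (assuming the default glusterd management port 24007; the interface and output file name are just examples):

tcpdump -i any -nn -w gluster-cli.pcap host gluster1.martinwi.local and tcp port 24007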

Versions used:

oVirt Node: GlusterFS 8.6
Gluster servers: GlusterFS 11.0
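
(Read with something like the following; the package name on the node is an assumption based on the RPM-based oVirt Node image:)

rpm -q glusterfs-cli      # on the oVirt node
gluster --version         # on the Gluster servers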

The Gluster servers worked fine with oVirt 4.3, so I don't think there is a version incompatibility.

Does anyone have an idea what is going wrong here or where to dig further?

Many thanks in advance :)


[1] Posted a few days ago on the oVirt Users mailing list, but no response yet:

