managed block storage stopped working

...sorta. I have an oVirt 4.4.2 system that was installed a couple of years ago and set up with managed block storage backed by Ceph Octopus[1]. This had been working well since it was originally set up.

In late November we had some network issues on one of our oVirt hosts, as well as a separate network issue that took many Ceph OSDs offline. Everything was eventually recovered, and 2 of the 3 VMs that use managed block storage started working again. The third did not.

We eventually discovered that oVirt is no longer able to access the Ceph RBD images, which is odd because the other two VMs are still actively reading and writing to Ceph block devices. We are also no longer able to create new oVirt disks using the managed block driver.
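To rule out a basic connectivity problem, one thing to try is listing the images directly from the engine host with the Ceph Python bindings. A minimal sketch follows; the conffile path, cephx user, and pool name are assumptions, so substitute whatever your managed block storage driver options (rbd_ceph_conf, rbd_user, rbd_pool) actually point at:

# RBD connectivity sanity check (python3-rados / python3-rbd).
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',  # assumed path
                      rados_id='admin')                # assumed cephx user
cluster.connect(timeout=10)  # fail fast rather than hanging like the engine
try:
    ioctx = cluster.open_ioctx('volumes')  # assumed pool name
    try:
        print(rbd.RBD().list(ioctx))  # prints the image names on success
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

If this hangs or raises a timeout, the problem is between the engine host and the cluster rather than inside oVirt itself.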

/var/log/cinderlib/cinderlib.log on the ovirt-engine host is empty.
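Since that log is empty, another check is to drive cinderlib by hand from the engine host and see whether backend initialization itself is what hangs. A rough sketch; the driver option values below are assumptions, so copy the exact ones from the managed block storage domain's driver options:

import cinderlib as cl

cl.setup()  # default in-memory persistence is enough for a connectivity test
backend = cl.Backend(
    volume_driver='cinder.volume.drivers.rbd.RBDDriver',
    volume_backend_name='ceph',
    rbd_ceph_conf='/etc/ceph/ceph.conf',                     # assumed path
    rbd_keyring_conf='/etc/ceph/ceph.client.admin.keyring',  # assumed path
    rbd_user='admin',                                        # assumed user
    rbd_pool='volumes',                                      # assumed pool
)
print(backend.stats(refresh=True))  # blocks or raises if the driver can't reach the cluster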

/var/log/ovirt-engine/engine.log shows the attempt to connect to the storage: ConnectManagedBlockStorageDeviceCommand starts, and then nothing happens until the transaction reaper rolls the attach command back exactly five minutes later, with no useful error in between:

2022-01-07 11:36:47,398-06 INFO [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (default task-1) [6613fac6-dd2f-4d22-993b-d805b2b572cd] Running command: AttachDiskToVmCommand internal: false. Entities affected : ID: 804b259a-c580-436b-a5ba-decdd0a2ccbd Type: VMAction group CONFIGURE_VM_STORAGE with role type USER, ID: 32c537e9-42cf-4648-b33b-2723374416e1 Type: DiskAction group ATTACH_DISK with role type USER
2022-01-07 11:36:47,415-06 INFO [org.ovirt.engine.core.bll.storage.disk.managedblock.ConnectManagedBlockStorageDeviceCommand] (default task-1) [46265b18] Running command: ConnectManagedBlockStorageDeviceCommand internal: true.
2022-01-07 11:39:00,248-06 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'default' is using 0 threads out of 1, 5 threads waiting for tasks.
2022-01-07 11:39:00,248-06 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 32 threads waiting for tasks and 0 tasks in queue.
2022-01-07 11:39:00,248-06 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'engineScheduledThreadPool' is using 0 threads out of 1, 100 threads waiting for tasks.
2022-01-07 11:39:00,248-06 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedScheduledExecutorService-engineThreadMonitoringThreadPool-Thread-1) [] Thread pool 'engineThreadMonitoringThreadPool' is using 1 threads out of 1, 0 threads waiting for tasks.
2022-01-07 11:41:19,774-06 INFO [org.ovirt.engine.core.bll.aaa.LoginOnBehalfCommand] (default task-6) [103222ef] Running command: LoginOnBehalfCommand internal: true.
2022-01-07 11:41:19,832-06 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-6) [103222ef] EVENT_ID: USER_LOGIN_ON_BEHALF(1,401), Executed login on behalf - for user admin.
2022-01-07 11:41:19,848-06 INFO [org.ovirt.engine.core.bll.aaa.LogoutSessionCommand] (default task-6) [32106489] Running command: LogoutSessionCommand internal: true.
2022-01-07 11:41:19,853-06 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-6) [32106489] EVENT_ID: USER_VDC_LOGOUT(31), User SYSTEM connected from 'UNKNOWN' using session 'pSzmWpAZSakSozpj4HQF2bic6EKUClj5wni+i9GPIlmdLIqfnAG9LYqb2MbO34fOuskBvjmTPbe4WRGFWUfmbQ==' logged out.
2022-01-07 11:41:47,405-06 ERROR [org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand] (Transaction Reaper Worker 0) [] Transaction rolled-back for command 'org.ovirt.engine.core.bll.storage.disk.AttachDiskToVmCommand'.

Where else can I look to find out why the managed block storage isn't accessible anymore?

--Mike

[1] https://lists.ovirt.org/archives/list/users@xxxxxxxxx/thread/KHCLXVOCELHOR3G7SH3GDPGRKITCW7UY/


