On 19/1/25 16:26, Stuart Longland VK4MSL wrote:
I've loaded a new machine (an MSI Cubi 5 mini PC) with Alpine Linux
3.21. The boot disk is a 240GB SATA SSD, and there's a 1TB NVMe drive
for local VM storage. My intent is to allow VMs to mount RBDs for
back-up purposes. The machine has two Ethernet interfaces (a 2.5Gbps
and a 1Gbps link): one will be the "front-end" used by the VMs, the
other a "back-end" link to talk to Ceph and administer the host.
- Open vSwitch 2.17.11 is deployed with two bridges
- libvirtd 10.9.0 installed
- an LVM pool called 'data' has been created on the NVMe drive
- Ceph 19.2.0 is installed (libvirtd is linked to this version of librbd)
- /etc/ceph has been cloned from my existing working compute node
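For reference, the RBD pool is defined in the usual libvirt form, roughly like this (the pool name, monitor hostname, Ceph user and secret UUID below are placeholders, not my actual values):

```xml
<pool type='rbd'>
  <name>ceph-backup</name>              <!-- placeholder libvirt pool name -->
  <source>
    <name>rbd</name>                    <!-- placeholder Ceph pool name -->
    <host name='ceph-mon.example' port='6789'/>
    <auth type='ceph' username='libvirt'>
      <secret uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
  </source>
</pool>
```

The secret referenced by UUID is the usual libvirt secret carrying the Ceph client key.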
I reproduced exactly the same result after re-installing a new node from
scratch and then restoring the `/etc/ceph`, `/etc/libvirt` and
`/var/lib/libvirt` directories from a back-up.
I've heard nothing about how to fix this or how to get more information.
How do I find the cause of RBD pools not starting?
Where are the logs kept? (They aren't in syslog, they aren't in the
usual libvirtd logs, even when everything is turned up to maximum.)
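For reference, by "turned up to maximum" I mean libvirtd.conf set along these lines (a sketch; the filter tags are my guess at the relevant subsystems):

```ini
# /etc/libvirt/libvirtd.conf -- debug logging sketch; filter tags are
# assumptions about which subsystems matter here, adjust as needed
log_filters="1:libvirt 1:storage 1:rbd 1:qemu"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```

Even with that in place, nothing storage-related shows up in the file when the pool fails to start.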
If you need to know something I missed, please tell me. I don't expect
people to read minds, but please don't ask me to do the same.
--
Stuart Longland (aka Redhatter, VK4MSL)
I haven't lost my mind...
...it's backed up on a tape somewhere.