Re: Libvirt and Ceph: libvirtd tries to open random RBD images

Hello Eugen,
Thanks for the response. No, we don't have a pool named "rbd" or any
namespaces defined. I'll increase the libvirtd debug level and check; a
rough approach is sketched below.
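
A minimal sketch of how I plan to raise the verbosity (the filter
strings and log path are typical defaults from the libvirt docs, not
yet tested on our setup):

# /etc/libvirt/libvirtd.conf
log_filters="1:libvirt 1:qemu 1:storage 3:remote 4:event"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"

# restart to apply
systemctl restart libvirtd

# or adjust the running daemon without a restart:
virt-admin daemon-log-filters "1:libvirt 1:qemu 1:storage"
virt-admin daemon-log-outputs "1:file:/var/log/libvirt/libvirtd.log"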

Regards,
Jayanth

On Mon, Dec 4, 2023 at 3:16 PM Eugen Block <eblock@xxxxxx> wrote:

> Hi,
>
> I'm not familiar with CloudStack, but I was wondering whether it
> tries to query the pool "rbd". Some tools fall back to a default pool
> named "rbd" if no pool is specified. Do you have an "rbd" pool in
> that cluster? Another thought is namespaces: do you have any defined?
> Can you increase the debug level to see what exactly it tries to do?
> A quick way to check both is sketched below.
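>
> A minimal sketch, assuming a client keyring is available on the host
> (the pool name is just a placeholder):
>
> ceph osd pool ls | grep -w rbd   # is there a pool literally named "rbd"?
> rbd namespace ls <pool>          # any namespaces defined in that pool?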
>
> Regards,
> Eugen
>
> Quoting Jayanth Reddy <jayanthreddy5666@xxxxxxxxx>:
>
> > Hello Users,
> > We're using libvirt with KVM, and the orchestrator is CloudStack. I
> > already raised the issue with CloudStack at
> > https://github.com/apache/cloudstack/issues/8211 but it appears to
> > be on the libvirtd side. I asked the same on the libvirt users ML at
> > https://lists.libvirt.org/archives/list/users@xxxxxxxxxxxxxxxxx/thread/SA2I4QZGVVEIKPJU7E2KAFYYFZLJZDMV/
> > but I'm now here looking for answers.
> >
> > Below is our environment & issue description:
> >
> > Ceph: v17.2.0
> > Pool: replicated
> > Number of block images in this pool: more than 1250
> >
> > # virsh pool-info c15508c7-5c2c-317f-aa2e-29f307771415
> > Name:           c15508c7-5c2c-317f-aa2e-29f307771415
> > UUID:           c15508c7-5c2c-317f-aa2e-29f307771415
> > State:          running
> > Persistent:     no
> > Autostart:      no
> > Capacity:       1.25 PiB
> > Allocation:     489.52 TiB
> > Available:      787.36 TiB
> >
> > # kvm --version
> > QEMU emulator version 4.2.1 (Debian 1:4.2-3ubuntu6.27)
> > Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers
> >
> > # libvirtd --version
> > libvirtd (libvirt) 6.0.0
> >
> > It appears that one of our CloudStack KVM clusters, consisting of 8
> > hosts, is having the issue. We run HCI on these 8 hosts with around
> > 700+ VMs. Strangely enough, log entries like the following appear on
> > the hosts.
> >
> >
> > Oct 25 13:38:11 hv-01 libvirtd[9464]: failed to open the RBD image
> > '087bb114-448a-41d2-9f5d-6865b62eed15': No such file or directory
> > Oct 25 20:35:22 hv-01 libvirtd[9464]: failed to open the RBD image
> > 'ccc1168a-5ffa-4b6d-a953-8e0ac788ebc5': No such file or directory
> > Oct 26 09:48:33 hv-01 libvirtd[9464]: failed to open the RBD image
> > 'a3fe82f8-afc9-4604-b55e-91b676514a18': No such file or directory
> >
> > We've got DNS servers with an `A` record resolving to the IPv4
> > addresses of all 5 monitors, and there have been no issues with DNS
> > resolution. But the "failed to open the RBD image
> > 'ccc1168a-5ffa-4b6d-a953-8e0ac788ebc5': No such file or directory"
> > warning gets weirder: the VM using such an image, say
> > '087bb114-448a-41d2-9f5d-6865b62eed15', is running on an altogether
> > different host, "hv-06". On further inspection, that specific
> > Virtual Machine has been running on "hv-06" for more than 4 months,
> > and fortunately it has had no issues since then. None of the Virtual
> > Machines have any problems because of these warnings.
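> >
> > For reference, a quick way one could confirm a given image exists
> > and see its active watchers (the pool name below is a placeholder,
> > not our real one):
> >
> > rbd info <pool>/087bb114-448a-41d2-9f5d-6865b62eed15
> > rbd status <pool>/087bb114-448a-41d2-9f5d-6865b62eed15   # lists watchers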
> >
> > On the libvirt mailing list, one of the community members helped me
> > understand that libvirt only queries the image metadata and doesn't
> > open the image for reading or writing. Every host running libvirtd
> > does the same. We manually ran "virsh pool-refresh", which CloudStack
> > itself performs at regular intervals, and the warning messages still
> > appear. Please help me find the cause, and let me know if further
> > information is needed.
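> >
> > For reference, the manual refresh, using the pool UUID from the
> > pool-info output above (the vol-list count is just a sanity check
> > one might add):
> >
> > virsh pool-refresh c15508c7-5c2c-317f-aa2e-29f307771415
> > virsh vol-list c15508c7-5c2c-317f-aa2e-29f307771415 | wc -l   # should roughly match the ~1250 images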
> >
> > Thanks,
> > Jayanth Reddy
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



