Hi,
We are experiencing a weird issue after upgrading our clusters from Ceph
Luminous to Nautilus 14.2.9 - I am not even sure this is Ceph-related,
but it started happening exactly after the upgrade, so I am trying my
luck here.
We have one Ceph RBD pool (size 3, min_size 2) backed entirely by BlueStore OSDs (KRBD).
I will try to be as clear as possible, though I cannot understand exactly
what is happening or what is causing the issue.
So, we have one virtual machine which uses a 2TB RBD image attached as a
virtio-scsi device.
Inside the VM we are trying to create ploop devices to be used by
containers (inside the VM, on the 2TB RBD-backed QEMU disk).
There is no way we can create ploop devices; it always crashes. Please
see the crash trace here:
https://pastebin.com/9khp9XS3 - sdb in the trace is the 2TB RBD image
the VM uses.
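For reference, what we run inside the VM looks roughly like the sketch below. The paths and the size are illustrative, not our exact values; the `ploop init` step is where the kernel crash in the pastebin is triggered.

```shell
# Inside the VM, on a filesystem that lives on the 2TB RBD-backed disk (/dev/sdb).
# /vz/private/101 and the 10g size are illustrative placeholders.
mkdir -p /vz/private/101

# Create a new ploop image and format it; this is the step that crashes.
ploop init -s 10g /vz/private/101/root.hdd
```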
There are no other read/write errors: the cluster is HEALTH_OK, all OSDs
are fine, and there are no errors on any of the physical disks. This
happens only when we try to create ploop devices inside a VM, and only
since we upgraded the cluster to Nautilus 14.2.9.
I also tried new images and other hosts - same result. I tried many
different versions of the ploop packages - same result.
I would appreciate hearing whether anyone else has encountered something
similar, and whether there is a workaround.
--
Best Regards,
------------------------------------------------------------------------
Daniel Stan
Senior System Administrator | NAV Communications (RO)
Office: +40 (21) 655-55-55 | E-Mail: daniel@xxxxxx
Site: www.nav.ro <https://www.nav.ro> | Client: https://client.ro
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx