Re: Upgrade from Luminous to Nautilus 14.2.9 RBD issue?

Hi,

We use KRBD.

Our Ceph Nautilus storage cluster is used for our Proxmox 6.x cloud VM disks.

This is a bit different: we are not experiencing any errors during boots/reboots, and we had no unplanned outage. We simply did a clean upgrade from Luminous to Nautilus, and we only hit this when we try to create OpenVZ ploops.
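
In case it helps, these are roughly the sanity checks we run on the Proxmox node after the upgrade (just a sketch; any pool or image names in the commands further down are examples from our setup):

# confirm all daemons and clients report Nautilus after the upgrade
ceph versions

# confirm the VM disk really is a krbd mapping on the Proxmox node
rbd showmapped

# overall cluster state
ceph -s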

I have checked the list archives, but I can't seem to find a solution there.

So, we are running Nautilus 14.2.9.
To reproduce, you can deploy the following ISO in a VM:
https://download.openvz.org/virtuozzo/releases/openvz-7.0.14-136/x86_64/iso/openvz-iso-7.0.14.iso

Then just try to create an OpenVZ template cache, which will attempt to create a ploop:
vzpkg install template centos-7-x86_64
vzpkg create cache centos-7-x86_64

Or see the complete usage documentation: https://docs.virtuozzo.com/virtuozzo_hybrid_server_7_command_line_reference/managing-containers/ez-template-management-utilities.html
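
For completeness, the whole reproduction on our side looks roughly like this; the pool and image names are only examples, and the 2 TB image is attached to the test VM as a virtio-scsi disk through Proxmox in the usual way:

# on the Ceph side: an example 2 TB image for the test VM
rbd create --size 2048G rbd/openvz-test
rbd info rbd/openvz-test

# inside the OpenVZ 7 VM installed from the ISO above
vzpkg install template centos-7-x86_64
vzpkg create cache centos-7-x86_64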

I am writing this here in case someone would like to try reproducing it.

Best Regards,

------------------------------------------------------------------------

Daniel Stan

Senior System Administrator | NAV Communications (RO)

Office: +40 (21) 655-55-55 | E-Mail: daniel@xxxxxx

Site: www.nav.ro <https://www.nav.ro> | Client: https://client.ro

On 01/07/2020 15:23, Jason Dillaman wrote:
On Wed, Jul 1, 2020 at 3:23 AM Daniel Stan - nav.ro <daniel@xxxxxx> wrote:
Hi,

We are experiencing a weird issue after upgrading our clusters from Ceph
Luminous to Nautilus 14.2.9. I am not even sure this is Ceph related,
but it started to happen exactly after we upgraded, so I am trying my
luck here.

We have one Ceph RBD pool with size 3 / min_size 2, backed entirely by BlueStore OSDs (KRBD).
Are you using krbd or librbd via QEMU?

I will try to be as clear as I can, though I cannot understand exactly what
is happening or what is causing the issue.

So, we have one virtual machine which uses a 2 TB RBD image attached as a
virtio-scsi device.
Inside the VM we are trying to create ploop devices to be used by
containers (inside the VM, on the 2 TB RBD-backed QEMU disk).

There is no way we can create ploop devices; it always crashes. Please
check the crash below:

https://pastebin.com/9khp9XS3 - sdb in the crash is the 2 TB RBD image
which the VM uses.
This sounds like a permissions issue with blacklisting dead clients [1].
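
Something along these lines should show whether the client key your VMs map images with is allowed to blacklist dead clients (the client name and pool below are only examples):

# inspect the caps of the client key used for the RBD images
ceph auth get client.proxmox

# the recommended Nautilus caps use the rbd profiles, which include the
# permission needed to blacklist dead clients
ceph auth caps client.proxmox mon 'profile rbd' osd 'profile rbd pool=rbd'

# list current blacklist entries
ceph osd blacklist ls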

There are no other read/write errors; we have HEALTH_OK, all OSDs are
fine, and there are no errors on any of the physical disks. This happens
only when we try to create ploop devices inside a VM, and only since we
upgraded our cluster to Nautilus 14.2.9.
I also tried new images and other hosts, with the same result. I also tried
a lot of different versions of the ploop packages, same result.

I would appreciate hearing whether someone else has encountered something
similar and if there is a workaround.

--

Best Regards,

------------------------------------------------------------------------

Daniel Stan

Senior System Administrator | NAV Communications (RO)

Office: +40 (21) 655-55-55 | E-Mail: daniel@xxxxxx

Site: www.nav.ro <https://www.nav.ro> | Client: https://client.ro


[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-July/027862.html

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


