The rbd client is not on one of the OSD nodes. I have now added a new image "backup-proxmox/cluster5a" and it works perfectly. Only that one rbd image is giving me trouble. The last thing I remember doing was resizing the image from 6 TB to 8 TB and then running xfs_growfs on it. Does that ring a bell?

On Wed, Jun 23, 2021 at 11:25 AM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
>
> On Wed, Jun 23, 2021 at 9:59 AM Matthias Ferdinand <mf+ml.ceph@xxxxxxxxx> wrote:
> >
> > On Tue, Jun 22, 2021 at 02:36:00PM +0200, Ml Ml wrote:
> > > Hello List,
> > >
> > > all of a sudden I can not mount a specific rbd device anymore:
> > >
> > > root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k /etc/ceph/ceph.client.admin.keyring
> > > /dev/rbd0
> > >
> > > root@proxmox-backup:~# mount /dev/rbd0 /mnt/backup-cluster5/
> > > (just hangs, never times out)
> >
> > Hi,
> >
> > there used to be some kernel locking issues when the kernel rbd client
> > tried to access an OSD on the same machine. Not sure if these issues
> > still exist (but I would guess so), and if you use your Proxmox cluster
> > in a hyperconverged manner (nodes providing VMs and storage service at
> > the same time) you may just have been lucky that it worked before.
> >
> > Instead of the kernel client mount you can try exporting the volume as
> > an NBD device (https://docs.ceph.com/en/latest/man/8/rbd-nbd/) and
> > mounting that. rbd-nbd runs in userspace and should not have that
> > locking problem.
>
> rbd-nbd is also susceptible to locking up in such setups, likely more
> so than krbd. Don't forget that it also has a kernel component, and
> there are actually more opportunities for things to go sideways/lock up
> because there is an extra daemon involved, allocating some additional
> memory for each I/O request.
>
> Thanks,
>
>                 Ilya
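
For reference, a minimal sketch of the resize sequence described at the top of this thread, using the image and mountpoint names from the original post (the 8T size and the /dev/rbd0 device are only illustrative; recent rbd releases accept a size suffix like 8T, older ones expect the size in megabytes):

    # grow the image to 8 TiB
    rbd resize backup-proxmox/cluster5 --size 8T

    # map and mount it, then grow the XFS filesystem to fill the new size
    rbd map backup-proxmox/cluster5 -k /etc/ceph/ceph.client.admin.keyring
    mount /dev/rbd0 /mnt/backup-cluster5/
    xfs_growfs /mnt/backup-cluster5/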
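
And a sketch of the rbd-nbd route Matthias suggests (keeping Ilya's caveat in mind), plus a quick check for requests stuck in the kernel client; the /dev/nbd0 name is just an example of what rbd-nbd prints, and the osdc file is only there if debugfs is mounted:

    # map through the userspace rbd-nbd daemon instead of krbd
    rbd-nbd map backup-proxmox/cluster5      # prints the assigned device, e.g. /dev/nbd0
    mount /dev/nbd0 /mnt/backup-cluster5/

    # while a krbd mount hangs: look for hung-task messages and in-flight OSD requests
    dmesg | grep -i -e rbd -e 'blocked for more than'
    cat /sys/kernel/debug/ceph/*/osdc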