Btw: dd bs=1M count=2048 if=/dev/rbd6 of=/dev/null => gives me 50MB/sec.
So reading the block device seems to work?!

On Fri, Jun 25, 2021 at 12:39 PM Ml Ml <mliebherr99@xxxxxxxxxxxxxx> wrote:
>
> I started the mount 15 mins ago:
> mount -nv /dev/rbd6 /mnt/backup-cluster5
>
> ps:
> root      1143  0.2  0.0   8904  3088 pts/0    D+   12:17   0:03  |  \_ mount -nv /dev/rbd6 /mnt/backup-cluster5
>
> There is no timeout or ANY msg in dmesg until now.
>
> strace -p 1143: seems to do nothing.
> iotop --pid=1143: uses about 50KB/sec
>
> It might mount after a few hours, I guess... :-(
>
> On Fri, Jun 25, 2021 at 11:39 AM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
> >
> > On Fri, Jun 25, 2021 at 11:25 AM Ml Ml <mliebherr99@xxxxxxxxxxxxxx> wrote:
> > >
> > > The rbd client is not on one of the OSD nodes.
> > >
> > > I now added a "backup-proxmox/cluster5a" to it and it works perfectly.
> > > Just that one rbd image sucks. The last thing I remember was resizing
> > > the image from 6TB to 8TB, after which I ran xfs_growfs on it.
> > >
> > > Does that ring a bell?
> >
> > It does seem like a filesystem problem so far, but you haven't posted
> > dmesg or other details. "mount" will not time out; if it's not
> > returning because it is hanging somewhere, you would likely get
> > "task ... blocked for ..." splats in dmesg.
> >
> > Thanks,
> >
> >                 Ilya
> >
> > >
> > > On Wed, Jun 23, 2021 at 11:25 AM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
> > > >
> > > > On Wed, Jun 23, 2021 at 9:59 AM Matthias Ferdinand <mf+ml.ceph@xxxxxxxxx> wrote:
> > > > >
> > > > > On Tue, Jun 22, 2021 at 02:36:00PM +0200, Ml Ml wrote:
> > > > > > Hello List,
> > > > > >
> > > > > > all of a sudden I can not mount a specific rbd device anymore:
> > > > > >
> > > > > > root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k /etc/ceph/ceph.client.admin.keyring
> > > > > > /dev/rbd0
> > > > > >
> > > > > > root@proxmox-backup:~# mount /dev/rbd0 /mnt/backup-cluster5/
> > > > > > (just hangs, never times out)
> > > > >
> > > > > Hi,
> > > > >
> > > > > there used to be some kernel lock issues when the kernel rbd client
> > > > > tried to access an OSD on the same machine. Not sure if these issues
> > > > > still exist (but I would guess so), and if you use your Proxmox
> > > > > cluster in a hyperconverged manner (nodes providing VMs and storage
> > > > > service at the same time), you may just have been lucky that it
> > > > > worked before.
> > > > >
> > > > > Instead of the kernel client mount, you can try to export the volume
> > > > > as an NBD device (https://docs.ceph.com/en/latest/man/8/rbd-nbd/) and
> > > > > mount that. rbd-nbd runs in userspace and should not have that
> > > > > locking problem.
> > > >
> > > > rbd-nbd is also susceptible to locking up in such setups, likely more
> > > > so than krbd. Don't forget that it also has a kernel component, and
> > > > there are actually more opportunities for things to go sideways/lock
> > > > up because there is an extra daemon involved, allocating some
> > > > additional memory for each I/O request.
> > > >
> > > > Thanks,
> > > >
> > > >                 Ilya
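
For reference, Ilya's dmesg check spelled out as a sketch (the "blocked
for more than" string is the stock kernel hung-task message, and 1143 is
the mount PID from the ps output above):

root@proxmox-backup:~# dmesg | grep -i "blocked for more than"
root@proxmox-backup:~# cat /proc/1143/stack

The second command dumps the kernel stack of the hung mount process;
since it sits in uninterruptible sleep (state D+ in ps), strace shows
nothing from userspace, but the kernel stack shows where it is stuck.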
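
If the 6TB->8TB resize plus xfs_growfs is the culprit, a no-modify check
of the unmounted device is one way to confirm or rule out filesystem
damage (a sketch; xfs_repair -n only reports problems, it does not write
to the device):

root@proxmox-backup:~# xfs_repair -n /dev/rbd6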
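
And a minimal sketch of the rbd-nbd route Matthias suggests, assuming
the same backup-proxmox/cluster5 image and mount point from the thread
(the /dev/nbd0 name is simply whatever rbd-nbd prints on map):

root@proxmox-backup:~# rbd-nbd map backup-proxmox/cluster5
/dev/nbd0
root@proxmox-backup:~# mount /dev/nbd0 /mnt/backup-cluster5/

root@proxmox-backup:~# umount /mnt/backup-cluster5/
root@proxmox-backup:~# rbd-nbd unmap /dev/nbd0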