On Tue, Jun 22, 2021 at 10:12 AM Ml Ml <mliebherr99@xxxxxxxxxxxxxx> wrote:

> ceph -s is healthy. I started an xfs_repair on that block device now,
> which seems to be doing something...:
>
> - agno = 1038
> - agno = 1039
> - agno = 1040
> - agno = 1041
> - agno = 1042
> - agno = 1043
> - agno = 1044
> - agno = 1045
> - agno = 1046
> - agno = 1047
> - agno = 1048
> - agno = 1049
> - agno = 1050
>
> (I am new to XFS) but that proves the block device is alive and
> accessible?
> Maybe I have a filesystem problem?
>

I would look in syslog and dmesg, and use the -v flag with mount to see if
anything else comes up. Maybe run parted /dev/rbd0 to see if you can access
the partition table. xfs_repair -L can address any issues with XFS, but XFS
should not break on its own.

>
> On Tue, Jun 22, 2021 at 3:33 PM Alex Gorbachev <ag@xxxxxxxxxxxxxxxxxxx>
> wrote:
> >
> > On Tue, Jun 22, 2021 at 8:36 AM Ml Ml <mliebherr99@xxxxxxxxxxxxxx>
> > wrote:
> >>
> >> Hello List,
> >>
> >> all of a sudden I cannot mount a specific rbd device anymore:
> >>
> >> root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k
> >> /etc/ceph/ceph.client.admin.keyring
> >> /dev/rbd0
> >>
> >> root@proxmox-backup:~# mount /dev/rbd0 /mnt/backup-cluster5/
> >> (just hangs; it never times out)
> >>
> >> Any idea how to debug that mount? Tcpdump does show some active traffic.
> >>
> >> Cheers,
> >> Michael
> >> _______________________________________________
> >> ceph-users mailing list -- ceph-users@xxxxxxx
> >> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >
> > Have you checked the status of the cluster (ceph -s)? Are there any OSD
> > issues, network problems (can you ping your MONs, OSD hosts)? Check your
> > syslog on the client for any timeout entries. That should be a good start,
> > and give you some diagnostic info.
> >
> > --
> > Alex Gorbachev
> > ISS/Storcium
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
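
[Editor's sketch] For anyone following the same debugging path, a rough sequence
of the checks suggested in this thread might look like the commands below. This
is only a sketch: the image name backup-proxmox/cluster5, the /dev/rbd0 device,
and the /mnt/backup-cluster5/ mount point are taken from the messages above, so
adjust them to your environment, and treat the repair steps as last resorts.

    # cluster health and client-side view of the mapped image
    ceph -s
    rbd showmapped
    rbd status backup-proxmox/cluster5   # lists watchers on the image

    # client-side kernel messages around the time of the hang
    dmesg | tail -n 50

    # verbose mount attempt and a quick partition-table check
    mount -v /dev/rbd0 /mnt/backup-cluster5/
    parted /dev/rbd0 print

    # check the filesystem read-only first; use -L (zero the XFS log) only as a
    # last resort, since it discards any unreplayed metadata in the log
    xfs_repair -n /dev/rbd0
    xfs_repair -L /dev/rbd0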