Hi,
1. How long will ceph continue to run before it starts complaining
about this?
Looks like it is fine for a few hours; ceph osd tree and ceph -s
seem not to notice anything.
If the OSDs don't have to log anything to disk (which can take quite
some time depending on the log settings) they won't notice, and since
clients communicate directly with the OSDs they won't notice either.
For example, if the OSDs on that host start deep-scrubbing, they will
try to log to disk, which would then fail.
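A quick manual check on the affected host (the path is just an
example) is to attempt a small write where the OSDs would log:

   touch /var/log/ceph/.rw-test && rm -f /var/log/ceph/.rw-test

If the touch fails with a read-only or I/O error, the OSDs will hit
the same problem as soon as they need to write, e.g. during
deep-scrub logging.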
2. This is still Nautilus with a majority of ceph-disk and maybe some
ceph-volume disks
What would be a good procedure to try and recover data from this
drive to use on a new os disk?
The OSDs should be usable after OS reinstallation as long as you have
the ceph.conf and the directory structure in /var/lib/ceph/ (incl.
keyrings and permissions). Once the OS is back you can run 'ceph-volume
lvm activate --all' for the LVM-based OSDs; the ceph-disk OSDs should
probably recover after running 'ceph-volume simple scan /dev/sdX1' and
then 'ceph-volume simple activate <OSD_ID> <UUID>'.
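A rough sequence after the reinstall could look like this (the device
name, OSD ID and UUID below are placeholders, replace them with your
own values):

   # LVM-based OSDs: activate everything ceph-volume can discover
   ceph-volume lvm activate --all

   # ceph-disk based OSDs: scan the data partition, then activate
   ceph-volume simple scan /dev/sdX1
   ceph-volume simple activate <OSD_ID> <UUID>

The 'simple scan' step stores the OSD metadata as a JSON file under
/etc/ceph/osd/, which 'simple activate' then uses to mount and start
the OSD.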
Quoting Marc <Marc@xxxxxxxxxxxxxxxxx>:
I have a ceph node where the OS filesystem is going into read-only
for whatever reason[1].
1. How long will ceph continue to run before it starts complaining
about this?
Looks like it is fine for a few hours; ceph osd tree and ceph -s
seem not to notice anything.
2. This is still Nautilus with a majority of ceph-disk and maybe some
ceph-volume disks
What would be a good procedure to try and recover data from this
drive to use on a new os disk?
[1]
Feb 21 14:41:30 kernel: XFS (dm-0): writeback error on sector 11610872
Feb 21 14:41:30 systemd: ceph-mon@c.service failed.
Feb 21 14:41:31 kernel: XFS (dm-0): metadata I/O error: block
0x2ee001 ("xfs_buf_iodone_callback_error") error 121 numblks 1
Feb 21 14:41:31 kernel: XFS (dm-0): metadata I/O error: block
0x5dd5cd ("xlog_iodone") error 121 numblks 64
Feb 21 14:41:31 kernel: XFS (dm-0): Log I/O Error Detected. Shutting
down filesystem
Feb 21 14:41:31 kernel: XFS (dm-0): Please umount the filesystem and
rectify the problem(s)
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx