Is it possible that something else was mounted there, or that nothing was mounted there at all? That would explain such behaviour...
Jan
No, it really was in the cluster. Before the reboot the cluster reported HEALTH_OK. Now, though, I've checked the `current` directory and it doesn't contain any data:
root@staging-coreos-1:/var/lib/ceph/osd/ceph-0# ls current
commit_op_seq  meta  nosnap  omap
while the other OSDs do. It really looks like something broke on reboot, probably during container start, so it's not really a Ceph issue. I'll go with recreating the OSD.
Thank you.
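For anyone hitting the same symptom: before recreating the OSD, the mount question raised above can be checked quickly. This is a hedged sketch, not from the thread; the OSD path is an assumption based on the prompt shown above, and it only uses `findmnt` and `ls`:

```shell
#!/bin/sh
# Sketch: check whether an OSD data path is actually its own mountpoint,
# and whether its current/ directory holds any PG data.
# OSD_PATH is an assumed example path; override it for your cluster.
OSD_PATH=${OSD_PATH:-/var/lib/ceph/osd/ceph-0}

# findmnt exits non-zero when the given path is not itself a mountpoint
if findmnt -n "$OSD_PATH" >/dev/null 2>&1; then
    echo "$OSD_PATH is a mountpoint:"
    findmnt -n "$OSD_PATH"
else
    echo "$OSD_PATH is NOT a mountpoint (data may sit on the root fs)"
fi

# A healthy filestore OSD's current/ contains PG directories (e.g. *_head)
# alongside commit_op_seq, meta, nosnap and omap; an empty-looking listing
# like the one above suggests the real data was never mounted or is gone.
ls "$OSD_PATH/current" 2>/dev/null || echo "no current/ directory at $OSD_PATH"
```

If `findmnt` reports nothing, the container likely started without the data volume attached, which would match the behaviour described here.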
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com