On Thu, Oct 27, 2016 at 9:10 AM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
> On Thu, 27 Oct 2016, Zhangzengran wrote:
>> >On Thu, 27 Oct 2016, Zhangzengran wrote:
>> >> Hi Sage & all:
>> >> There is a data disk on a Ceph host with XFS; its FS was corrupted in an
>> >> unexpected power outage. When the host powers on again, the system (Ubuntu)
>> >> hangs at upstart's console. We found that upstart tries to run
>> >> `ceph-disk activate-all` and mount that disk in the ceph-osd-all series of
>> >> jobs, but the mount hangs indefinitely. There is also an S20ceph script
>> >> called by the rc series jobs, and tty1.conf starts on `stopped rc`, so we
>> >> can't log in from the console.
>> >>
>> >> I saw there is a patch for ceph-disk that added a timeout to mkfs. Should
>> >> we also provide a timeout for mount? If ceph-disk hangs in upstart, no OSD
>> >> daemon can start on that host.
>> >What version of Ubuntu and Ceph is this? Since jewel we are all systemd,
>> >and the activate etc. jobs are all async systemd tasks, so I don't think
>> >this should happen.
>>
>> Ubuntu is 14.04, Ceph is Hammer 0.94.5
>
> Ah. This will all go away once you upgrade to jewel. In the meantime,
> you can probably remove the S20 file.. though I don't think it should be
> doing anything anyway since you're using upstart.

jewel + Ubuntu 16.04, right?
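
The mount-timeout idea the poster raises could be sketched roughly as follows. This is a hypothetical illustration, not ceph-disk's actual code: the function names, the 300-second default, and the mount options are all assumptions, chosen only to mirror the mkfs-timeout patch mentioned above.

```python
# Hypothetical sketch (NOT ceph-disk's real implementation): wrap the
# external mount command in a timeout so a mount that hangs on a
# corrupted filesystem fails loudly instead of blocking activation
# (and, via upstart, the rest of boot) forever.
import subprocess


def run_with_timeout(cmd, timeout):
    """Run `cmd`, raising subprocess.TimeoutExpired if it exceeds
    `timeout` seconds; otherwise return the exit code.

    On timeout, subprocess.run() kills the child before raising, so a
    wedged command does not keep the caller pinned.
    """
    return subprocess.run(cmd, timeout=timeout).returncode


def mount_osd_disk(dev, path, fstype='xfs', timeout=300):
    # The 300 s default is only illustrative, echoing the idea of the
    # mkfs timeout patch the poster refers to.
    return run_with_timeout(['mount', '-t', fstype, dev, path], timeout)
```

A caller such as an activate-all loop could catch `subprocess.TimeoutExpired`, log the bad disk, and continue with the remaining OSDs. One caveat: if the mount process is stuck in uninterruptible disk sleep (D state), the kill issued on timeout may not take effect, so a timeout mitigates but does not fully solve a truly hung kernel-level mount.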