On 09-11-15 16:25, Gregory Farnum wrote:
> The daemons print this in their debug logs on every boot. (There might
> be a minimum debug level required, but I think it's at 0!)
> -Greg

True, but in this case all the logs were lost. I had no boot/OS disks
available. I got a fresh install of an OS with just a bunch of disks
attached, and I had to figure out what it was.

Long story short: an angry sysadmin wiped almost all the systems; we were
lucky that the company powered off all the systems in time.

I still found the monitor data. They extracted it from a disk and I got
three tarballs of /var/lib/ceph/mon and 46 disks with OSD data.

Wido

> On Mon, Nov 9, 2015 at 7:23 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>> Hi,
>>
>> Recently I got my hands on a Ceph cluster which was pretty badly damaged
>> due to human error.
>>
>> I had no ceph.conf, nor did I have any of the original Operating System
>> data.
>>
>> With just the MON/OSD data I had to rebuild the cluster by manually
>> re-writing the ceph.conf and installing Ceph.
>>
>> The problem was, I didn't know which Ceph version the cluster was
>> running, nor did anybody inside the company I was fixing the cluster for.
>>
>> Would it be an idea if both the MON and OSD daemons wrote a file
>> called "ceph_version" in their data directory after every successful
>> startup?
>>
>> In this case I figured it was probably Firefly or Hammer, so I started
>> with Firefly. That failed to start, so I tried Hammer. That worked. But
>> it could also have been Giant; I didn't know.
>>
>> If this is a useful idea I can create an issue in the tracker for it.
>>
>> Wido
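
For illustration, a minimal sketch of what the proposed "ceph_version"
marker could look like. This is hypothetical code, not the actual Ceph
implementation; the function name, call site, and version string are
assumptions, and the real daemons would use their own logging and error
handling.

    // Hypothetical sketch: on every successful startup the daemon writes
    // its version string into a "ceph_version" file in its data directory.
    #include <fstream>
    #include <string>

    void write_version_marker(const std::string& data_dir,
                              const std::string& version)
    {
      // e.g. data_dir = "/var/lib/ceph/osd/ceph-0" or
      //      "/var/lib/ceph/mon/ceph-a", version = "0.94.5" (Hammer)
      std::ofstream f(data_dir + "/ceph_version",
                      std::ofstream::out | std::ofstream::trunc);
      f << version << "\n";
      // Anyone who later recovers only the data directory can then see
      // which release last ran against it, without guessing Firefly vs
      // Hammer vs Giant.
    }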