This is similar to an issue we ran into; the root cause was that
ceph-deploy doesn't set the partition type GUID (which is used to
auto-activate the volume) on an existing partition. Setting it
beforehand, while pre-creating the partition, is a must -- otherwise
you have to put entries in fstab.
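For GPT disks this can be done with sgdisk -- a rough sketch, where the
device, partition number and mount path are only examples, and the OSD
data type GUID should be double-checked against the udev/ceph-disk
rules shipped with your version:

# tag partition 4 as a Ceph OSD data partition so the udev rules
# activate it automatically at boot
sgdisk --typecode=4:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sda
partprobe /dev/sda

The fstab alternative would be an entry along these lines (OSD id and
filesystem are illustrative):

/dev/sda4  /var/lib/ceph/osd/ceph-0  xfs  noatime  0 2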
On Mon, Dec 9, 2013 at 8:11 AM, Alfredo Deza <alfredo.deza@xxxxxxxxxxx> wrote:

That is something I would not expect having deployed with ceph-deploy.

On Mon, Dec 9, 2013 at 6:49 AM, Matthew Walster <matthew@xxxxxxxxxxx> wrote:
> I'm having a play with ceph-deploy after some time away from it (mainly
> relying on the puppet modules).
>
> With a test setup of only two debian testing servers, I do the following:
>
> ceph-deploy new host1 host2
> ceph-deploy install host1 host2 (installs emperor)
> ceph-deploy mon create host1 host2
> ceph-deploy osd prepare host1:/dev/sda4 host2:/dev/sda4
> ceph-deploy osd activate host1:/dev/sda4 host2:/dev/sda4
> ceph-deploy mds create host1 host2
>
> Everything is running fine -- copy some files into CephFS, everything is
> looking great.
>
> host1: /etc/init.d/ceph stop osd
>
> Still fine.
>
> host1: /etc/init.d/ceph stop mds
>
> Fails over to the standby mds after a few seconds. Little outage, but to be
> expected. Everything fine.
>
> host1: /etc/init.d/ceph start osd
> host1: /etc/init.d/ceph start mds
>
> Everything recovers, everything is fine.
>
> Now, let's do something drastic:
>
> host1: reboot
> host2: reboot
>
> Both hosts come back up, but the mds never recovers -- it always says it is
> replaying.
>
> On closer inspection, host2's osd never came back into action. Doing:
>
> ceph-deploy osd activate host2:/dev/sda4 fixed the issue, and the mds
> recovered, with the osd now reporting both "up" and "in".
>
> Is there something obvious I'm missing? The ceph.conf seemed remarkably
> empty, do I have to re-deploy the configuration file to the monitors or
> similar?
ceph-deploy doesn't create specific entries for mon/mds/osds, I think;
it barely adds something in the global section for the mon initial
members. So that is actually normal ceph-deploy behavior.
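For reference, the ceph.conf that ceph-deploy new writes is typically
only a short [global] section, roughly like the sketch below (the fsid
and addresses are placeholders, so treat it as illustrative):

[global]
fsid = <cluster-uuid>
mon_initial_members = host1, host2
mon_host = 192.0.2.10,192.0.2.11
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

There are no per-osd or per-mds sections; as far as I know the daemons
are discovered at boot from their directories under /var/lib/ceph (and,
for OSDs, the partition type GUID) rather than from ceph.conf entries.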
> I've never noticed a problem with puppet-deployed hosts, but that
> manually writes out the ceph.conf as part of the puppet run.

Are you able to reproduce this on a different host from scratch? I
just tried on a CentOS 6.4 box and everything came back after a
reboot.
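If it helps, the standard status commands should confirm whether
everything really came back after a reboot (output will of course vary
per cluster):

ceph -s          # overall health, plus mds state (up:replay vs up:active)
ceph osd tree    # per-osd up/in status
ceph mds stat    # short mds summary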
It would also be very helpful to have all the output from ceph-deploy
as you try to reproduce this behavior.
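Something along these lines captures everything in one go (the log file
name is just an example):

ceph-deploy osd activate host2:/dev/sda4 2>&1 | tee ceph-deploy-activate.log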
>
> Many thanks in advance,
>
> Matthew Walster
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com