Re: sysvinit script vs ceph-deploy, chef

Sage,

Is this referring to something like this in ceph.conf?

[osd.0]
        host = ceph1

[osd.1]
        host = ceph1
...


As long as legacy users use something like that, you are saying they
will be good?  That seems reasonable to me.

To be clear, with such a change to the init script, moving the OSD in
the CRUSH map would only happen if you built the cluster with
ceph-deploy, right?  I thought the information required for that is
added during the disk format/labeling done by ceph-deploy, such as the
Ceph-specific disk label/filesystem ID (I'm probably using the wrong
terms; I haven't looked at it recently).
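
For what it's worth, I think that label is the GPT partition type GUID
that ceph-deploy/ceph-disk sets when it prepares the disk.  A sketch of
how to inspect it (the device and partition number here are
hypothetical):

    sgdisk --info=1 /dev/sdb

That prints a "Partition GUID code" line, which I believe is what the
udev-based hotplugging keys off of.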

Regardless, I would be comfortable with a prominent warning on
upgrade, along with the regular notices in release notes, docs, etc.

 - Travis


On Thu, May 2, 2013 at 9:01 PM, Sage Weil <sage@xxxxxxxxxxx> wrote:
> The upstart scripts have also updated the OSD's position in the CRUSH map
> by default on startup.  This is part of the hotplugging capability that
> lets you move drives around between hosts, and also moves the burden away
> from chef and ceph-deploy when adding OSDs.
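>
> Concretely, on osd start the hook runs something along these lines (a
> sketch, not the literal script; the id, weight, and hostname are
> placeholders):
>
>     ceph osd crush create-or-move osd.0 1.0 root=default host=ceph1
>
> which creates the osd in the CRUSH map at the given weight if it isn't
> there yet, or moves it under the named host bucket if it is.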
>
> We forgot to add the parallel capability to the sysvinit script, which is
> what legacy clusters use, and what is used on Debian and RHEL/CentOS.
> That prevents ceph-deploy from fully bringing up OSDs (they are added to
> the cluster but not the CRUSH map).
>
> wip-sysvinit makes it all match the upstart behavior, but old clusters
> will now have their osd positions updated to sit under a node
> host=`hostname` on startup, by default.  You can turn this off by
> setting 'osd crush update on start = false'.  It has always defaulted
> to true, but has never done anything for sysvinit (and legacy
> mkcephfs)-based clusters.
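>
> In ceph.conf that would look something like this (a sketch; it could
> also go in [global]):
>
>     [osd]
>             osd crush update on start = false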
>
> Alternatively, we could make ceph-deploy and chef users enable that
> explicitly, and make the default false.  That means *those* users have
> to change their configs.
>
> Or, we could make it try to magically detect what kind of user you are and
> behave accordingly.  This strikes me as dangerous.
>
> Right now I'm leaning toward a prominent warning that on upgrade, any
> legacy clusters that don't put the osds under a node host=`hostname`
> need to add that option to avoid having their CRUSH map modified.
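>
> To check whether your osds already sit under host buckets, 'ceph osd
> tree' shows the CRUSH hierarchy (illustrative output; the ids, weights,
> and names below are made up):
>
>     # id    weight  type name       up/down reweight
>     -1      2       root default
>     -2      2               host ceph1
>     0       1                       osd.0   up      1
>     1       1                       osd.1   up      1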
>
> Thoughts?
> sage