sysvinit script vs ceph-deploy, chef

The upstart scripts also update the OSD's position in the CRUSH map 
by default on startup.  This is part of the hotplugging capability that 
lets you move drives around between hosts, and it also moves the burden 
of adding OSDs away from chef and ceph-deploy.

We forgot to add the equivalent capability to the sysvinit script, which is 
what legacy clusters use, and what is used on Debian and RHEL/CentOS.  
That prevents ceph-deploy from fully bringing up OSDs (they are added to 
the cluster but not to the CRUSH map).

wip-sysvinit makes it all match the upstart behavior, but old clusters 
will now have the OSD position updated to sit under a host=`hostname` 
node on startup, by default.  You can turn this off by setting 'osd crush 
update on start = false'.  It has always defaulted to true, but never did 
anything for sysvinit (and legacy mkcephfs)-based clusters.
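For example, a legacy cluster that manages its own CRUSH map by hand could opt out in ceph.conf like so (a sketch; placing it under [osd] is the conventional spot, though [global] would also work):

```ini
# ceph.conf -- sketch for a legacy (mkcephfs/sysvinit) cluster that
# maintains a hand-built CRUSH map and does not want the init script
# moving OSDs under a host=`hostname` node on startup.
[osd]
    osd crush update on start = false
```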

Alternatively, we could make ceph-deploy and chef users enable it 
explicitly, and make the default false.  That means *those* users have to 
change their configs.

Or, we could make it try to magically detect what kind of user you are and 
behave accordingly.  This strikes me as dangerous.

Right now I'm leaning toward a prominent warning that, on upgrade, any 
legacy clusters that don't put their OSDs under a host=`hostname` node need 
to add that option to avoid having their CRUSH map modified.
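For reference, the on-start update amounts to roughly the following per OSD (a sketch; the `id` and `weight` values here are placeholders -- in the real script the id comes from the OSD data dir and the weight from its size or ceph.conf):

```shell
#!/bin/sh
# Sketch of the CRUSH-update hook run for each OSD on startup when
# 'osd crush update on start' is true.  Hypothetical example values:
id=0          # OSD id (normally discovered from the data directory)
weight=1.0    # CRUSH weight (normally derived from disk size or config)

# Create the OSD's CRUSH entry, or move it under this host's node if it
# already exists elsewhere in the map.
ceph osd crush create-or-move -- "$id" "$weight" \
    root=default host="$(hostname -s)"
```

This is what silently relocates OSDs on a legacy cluster whose CRUSH map doesn't already place them under host=`hostname` nodes.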

Thoughts?
sage

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



