On 2012. July 14. 07:42:34 Sage Weil wrote:
> On Sat, 14 Jul 2012, Xiaopong Tran wrote:
> > I'm getting this funny issue. I had set up two test clusters, and
> > mkcephfs and the ceph startup script worked just fine. We are
> > now ready to go to production: we have 6 nodes with 10 disks
> > each, one osd per disk, with 3 mds and 3 mons.
> >
> > The mkcephfs script ran without problems; everything was created
> > properly (see the attached log file). However, when I run
> >
> >     /etc/init.d/ceph start
> >
> > nothing happens, not even a line of output, neither on the console
> > nor in the system log.
> >
> > I can, however, start each individual osd, mds, and mon manually.
>
> This is usually related to the 'host = ...' lines in ceph.conf. They need
> to match the output of the `hostname` command in order for that daemon to
> be automatically started or stopped.

Just a humble remark here: the host= setting actually has to match the
hostname *up to the first dot*. If your hostname contains a dot, this will
not work. Dotted hostnames can be useful in a number of cases, for example
encoding a level of the hierarchy in the name, so that the administrator
knows exactly which machine he is working on after logging in (I mean
hostnames like node<X>.rack<Y> in a datacenter, or node<X>.<site> when
running a geographically distributed cluster).

From this comes my request: do you think it would be possible to change
ceph_common.sh (in the repository, I mean) so that it does not cut the
output of `hostname` at the first dot? I am running a cluster with
hostnames like that, and currently I have to edit that file after each
upgrade.

--
cc
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
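[For context, a rough sketch of the truncation being discussed. This is a hedged illustration, not the actual ceph_common.sh source; the variable names are mine. The effect is that the init script compares the host= entries in ceph.conf against the machine's short hostname, i.e. everything before the first dot:]

```shell
#!/bin/sh
# Hypothetical sketch (assumed behavior, not the real ceph_common.sh code).
full="node3.rack7"        # stand-in for the output of `hostname`
short="${full%%.*}"       # cut at the first dot, like `hostname -s`
echo "$short"             # prints: node3
# With this hostname, "host = node3" in ceph.conf would match, but
# "host = node3.rack7" never would, so that daemon is silently skipped.
```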