Ceph newbie questions


 



Hello, 

Problem 1:

"ceph health" reports: HEALTH_WARN clock skew detected on mon.b, mon.c

But:

root@hcmonko1:~# date && ssh hcmonko3 date && ssh hcmonko2 date
Thu May 23 18:19:38 CEST 2013
Thu May 23 18:19:38 CEST 2013
Thu May 23 18:19:38 CEST 2013

The clocks seem to be OK and NTP is configured. I restarted all the servers, but the warning is still there.
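For what it's worth, "date" only prints whole seconds, while the monitors warn at a much smaller skew (mon_clock_drift_allowed defaults to 0.05 s), so a skew of a few hundred milliseconds would be invisible in the output above. A minimal sketch for checking skew at millisecond resolution; the ssh line in the comment reuses the host names from this post, and GNU date with %N is assumed:

```shell
# skew A B -> absolute difference of two epoch timestamps, in seconds.
# "date" only shows whole seconds, but the monitors warn above
# mon_clock_drift_allowed (0.05 s by default), so sub-second skew matters.
skew() {
    awk -v a="$1" -v b="$2" \
        'BEGIN { d = a - b; if (d < 0) d = -d; printf "%.3f\n", d }'
}

# Real use (GNU date; %N = nanoseconds; host name taken from this post):
#   skew "$(date +%s.%N)" "$(ssh hcmonko2 date +%s.%N)"

# Demonstration with fixed timestamps 120 ms apart, i.e. already above
# the 0.05 s warning threshold:
skew 1369325978.100 1369325978.220
```

If this prints more than about 0.050, the monitors will keep warning even though "date" looks identical on all three hosts.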


Problem 2:

"service ceph -a" works serially, not in parallel.
Situation: every OSD and mon is offline after "service ceph -a stop". When I want to bring them back online with "service ceph -a start", it begins with mon.a, then mon.b, then osd.0, osd.1, etc.
But the procedure hangs at the ceph-create-keys step for the first mon in the row.

root@hcosdko1:~# service ceph -a start
=== mon.a ===
Starting Ceph mon.a on hcmonko1...
Starting ceph-create-keys on hcmonko1...

I learned via your IRC channel that ceph-create-keys needs a mon quorum to proceed. I think the "service ceph -a start" script is therefore not designed to run serially; it would have to start all mons (and OSDs) in parallel.
But in my scenario it hangs at exactly this point: Ceph is started on mon1 and waits at key creation because there is no quorum yet, so Ceph never gets started on mon2, mon3, osd.0 and so on. (When I type "service ceph -a start" on mon1, it begins on the host itself, creates the keys, proceeds with starting Ceph on mon2 and can create the keys there, because there is a quorum once Ceph is already started on mon1.)
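As a workaround sketch (not the init script's own mechanism, just an illustration): the mons can be started concurrently over ssh, so a quorum forms and ceph-create-keys can finish. RSH defaults to echo here so the snippet is a harmless dry run; set RSH=ssh to actually do it. Passwordless ssh and the sysvinit "service ceph" script from this post are assumed.

```shell
# Start all three mons at the same time so they can form a quorum and let
# ceph-create-keys complete. Dry run by default: with RSH=echo this only
# prints the commands; run with RSH=ssh for real (passwordless ssh assumed).
RSH=${RSH:-echo}
for h in hcmonko1 hcmonko2 hcmonko3; do
    "$RSH" "$h" "service ceph start mon" &   # background, do not serialize
done
wait   # block until all three start commands have returned

# Once the mons have a quorum, the OSDs can follow; serially is fine:
#   service ceph -a start osd
```

Starting the mons first and the OSDs only afterwards avoids the deadlock described above.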


Question 1
For testing we have only one OSD node at the moment; two more nodes will follow, and perhaps we will add further OSD HDDs to one server/node.
When I want to add a node or just an OSD: is it right that I change ceph.conf, copy it via ssh to each node (mon, osd, ...) and then run "service ceph -a start" to reload the new config, so the new servers, OSDs, mons etc. are activated?
When I want to add an OSD (HDD), I have to run mkcephfs --mkfs. Will that destroy the data on the already existing OSD HDDs?
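On the last point: mkcephfs initializes everything listed in ceph.conf, so it is not the tool for growing a live cluster. The usual approach is to initialize only the new disk. A hedged sketch, assuming the new OSD entry is already in ceph.conf, default data paths, and the host name from this post; the exact crush syntax differs between releases, so check the "Adding/Removing OSDs" docs for yours:

```shell
# Add one new OSD without touching the existing ones (sketch; paths,
# weight, and crush location are assumptions - adapt to your setup).
id=$(ceph osd create)                      # allocates the next free osd id
ceph-osd -i "$id" --mkfs --mkkey           # formats only the NEW osd's data dir
ceph auth add "osd.$id" osd 'allow *' mon 'allow rwx' \
    -i "/var/lib/ceph/osd/ceph-$id/keyring"
# Place it in the crush map (syntax varies by release, see the docs):
ceph osd crush set "$id" "osd.$id" 1.0 root=default host=hcosdko1
service ceph start "osd.$id"
```

The existing OSD HDDs are never reformatted this way, because --mkfs is only run against the new osd id.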

Question 2
When I have to restart one server, I think Ceph will immediately start replicating the data to the other hosts. Is it possible to put a server/OSD into a maintenance mode, so that RADOS does not begin re-replicating the data just because one OSD is restarting?
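For what it's worth, RADOS does not re-replicate the instant an OSD goes down: the OSD is marked "down" immediately, but data migration only starts once it is marked "out" after mon osd down out interval (300 s by default at the time of writing). For longer maintenance there is a cluster flag for exactly this; a sketch using the real ceph CLI "noout" flag:

```shell
# Tell the monitors not to mark any OSD "out" while we work, so no
# re-replication starts during a planned restart:
ceph osd set noout

# ... stop the daemon, reboot the host, do the maintenance ...
# e.g.: service ceph stop osd.0

# When the OSD is back up and in, remove the flag again:
ceph osd unset noout
```

Note that noout is cluster-wide, so it should be unset as soon as the maintenance window is over.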


Thank you very much!

Regards
Philipp






