Hi Loïc,

It seems there is another error in the documentation at
http://ceph.com/docs/argonaut/init/stop-cluster/

I believe

    sudo service -a ceph stop

should probably read

    sudo service ceph -a stop

Cheers

On 19 Sep 2014, at 6:33 pm, Loic Dachary <loic at dachary.org> wrote:

> Hi,
>
> The documentation indeed contains an example that does not work. This should fix it: https://github.com/dachary/ceph/commit/be97b7d5b89d7021f71695b4c1b78830bad4dab6
>
> Cheers
>
> On 19/09/2014 08:06, Piers Dawson-Damer wrote:
>> Has the command for manually starting and stopping OSDs changed?
>>
>> The documentation for troubleshooting OSDs (http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/) mentions restarting OSDs with the command:
>>
>> ceph osd start osd.{num}
>>
>> Yet I find, using Firefly 0.80.5:
>>
>> piers at sol:/etc/ceph$ ceph osd start osd.1
>> no valid command found; 10 closest matches:
>> osd tier remove <poolname> <poolname>
>> osd tier cache-mode <poolname> none|writeback|forward|readonly
>> osd thrash <int[0-]>
>> osd tier add <poolname> <poolname> {--force-nonempty}
>> osd pool stats {<name>}
>> osd reweight-by-utilization {<int[100-]>}
>> osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hashpspool|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid <val> {--yes-i-really-mean-it}
>> osd pool set-quota <poolname> max_objects|max_bytes <val>
>> osd pool rename <poolname> <poolname>
>> osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|auid
>> Error EINVAL: invalid command
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
> --
> Loïc Dachary, Artisan Logiciel Libre
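
A minimal sketch of the daemon-control commands that should work on a Firefly-era install (assuming a sysvinit-style setup; osd.1 / id=1 below are example ids, substitute your own):

    # stop/start the whole cluster from the admin node (note "-a" comes after "ceph")
    sudo service ceph -a stop
    sudo service ceph -a start

    # stop/start a single OSD on the node that hosts it (sysvinit)
    sudo /etc/init.d/ceph stop osd.1
    sudo /etc/init.d/ceph start osd.1

    # on Ubuntu releases that use upstart, the equivalent is
    sudo stop ceph-osd id=1
    sudo start ceph-osd id=1

The "ceph osd start osd.{num}" form quoted from the troubleshooting page is not a subcommand of the ceph CLI, which is why it is rejected with EINVAL above; daemons are started and stopped through the init system instead.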