Re: Down an OSD and bring it up

Thanks for the reply. The service is still showing as failed; how do I bring the OSD service up? "ceph osd tree" shows all OSDs as UP.

[root@Admin ceph]# systemctl restart ceph-osd@osd.2.service
[root@Admin ceph]# systemctl status ceph-osd@osd.2.service
● ceph-osd@osd.2.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Fri 2016-06-17 14:34:25 IST; 3s ago
  Process: 8112 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
  Process: 8071 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 8112 (code=exited, status=1/FAILURE)

Jun 17 14:34:25 Admin ceph-osd[8112]: --debug_ms N      set message debug level (e.g. 1)
Jun 17 14:34:25 Admin systemd[1]: ceph-osd@osd.2.service: main process exited, code=exited, status=...ILURE
Jun 17 14:34:25 Admin systemd[1]: Unit ceph-osd@osd.2.service entered failed state.
Jun 17 14:34:25 Admin systemd[1]: ceph-osd@osd.2.service failed.
Jun 17 14:34:25 Admin ceph-osd[8112]: 2016-06-17 14:34:25.003696 7f3f58664800 -1 must specify '-i #'...mber
Jun 17 14:34:25 Admin systemd[1]: ceph-osd@osd.2.service holdoff time over, scheduling restart.
Jun 17 14:34:25 Admin systemd[1]: start request repeated too quickly for ceph-osd@osd.2.service
Jun 17 14:34:25 Admin systemd[1]: Failed to start Ceph object storage daemon.
Jun 17 14:34:25 Admin systemd[1]: Unit ceph-osd@osd.2.service entered failed state.
Jun 17 14:34:25 Admin systemd[1]: ceph-osd@osd.2.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
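[Editorial note, not part of the original thread: the journal line "must specify '-i #'" points at the unit instance name rather than a daemon fault. The template's ExecStart passes %i straight to --id, so starting "ceph-osd@osd.2" hands the daemon "--id osd.2", which it rejects; the instance should be the bare number. A minimal sketch of the substitution, unit names assumed from the log above:]

```shell
# systemd substitutes the instance name (%i) into the template's
#   ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i ...
# so "ceph-osd@osd.2" yields "--id osd.2" (rejected), while
# "ceph-osd@2" yields "--id 2" (accepted).
for unit in ceph-osd@osd.2.service ceph-osd@2.service; do
    instance="${unit#ceph-osd@}"     # strip the template prefix
    instance="${instance%.service}"  # strip the unit suffix
    echo "${unit} -> --id ${instance}"
done
```

[If that is the cause, "systemctl start ceph-osd@2" should work directly; the start-limit lockout shown above applies only to the ceph-osd@osd.2 unit name, which can be cleared with "systemctl reset-failed ceph-osd@osd.2.service".]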


[root@Admin ceph]# systemctl -a | grep ceph
  ceph-osd@osd.0.service                                                                                         loaded    inactive dead      Ceph object storage daemon
● ceph-osd@osd.1.service                                                                                         loaded    failed   failed    Ceph object storage daemon
  ceph-osd@osd.10.service                                                                                        loaded    inactive dead      Ceph object storage daemon
  ceph-osd@osd.11.service                                                                                        loaded    inactive dead      Ceph object storage daemon
  ceph-osd@osd.12.service                                                                                        loaded    inactive dead      Ceph object storage daemon
  ceph-osd@osd.13.service                                                                                        loaded    inactive dead      Ceph object storage daemon
  ceph-osd@osd.14.service                                                                                        loaded    inactive dead      Ceph object storage daemon
  ceph-osd@osd.15.service                                                                                        loaded    inactive dead      Ceph object storage daemon
  ceph-osd@osd.16.service                                                                                        loaded    inactive dead      Ceph object storage daemon
  ceph-osd@osd.17.service                                                                                        loaded    inactive dead      Ceph object storage daemon
  ceph-osd@osd.18.service                                                                                        loaded    inactive dead      Ceph object storage daemon
  ceph-osd@osd.19.service                                                                                        loaded    inactive dead      Ceph object storage daemon
● ceph-osd@osd.2.service                                                                                         loaded    failed   failed    Ceph object storage daemon
● ceph-osd@osd.3.service                                                                                         loaded    failed   failed    Ceph object storage daemon
  ceph-osd@osd.4.service                                                                                         loaded    inactive dead      Ceph object storage daemon
  ceph-osd@osd.5.service                                                                                         loaded    inactive dead      Ceph object storage daemon
  ceph-osd@osd.6.service                                                                                         loaded    inactive dead      Ceph object storage daemon
  ceph-osd@osd.7.service                                                                                         loaded    inactive dead      Ceph object storage daemon
  ceph-osd@osd.8.service                                                                                         loaded    inactive dead      Ceph object storage daemon
  ceph-osd@osd.9.service                                                                                         loaded    inactive dead      Ceph object storage daemon
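[Editorial note, not from the thread: every instance in this listing is enabled under the "osd.N" spelling, so the same "--id osd.N" failure would recur at boot. A hedged cleanup sketch, assuming Jewel's packaging where ceph-osd@.service instances are grouped under ceph-osd.target and the IDs 0-19 shown above:]

```shell
# Re-enable each OSD unit under its bare numeric id, then start
# them all via the target. Ids 0..19 are assumed from the listing.
for n in $(seq 0 19); do
    systemctl disable "ceph-osd@osd.${n}.service"
    systemctl enable  "ceph-osd@${n}.service"
done
systemctl start ceph-osd.target   # pulls in all enabled ceph-osd@ instances
```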


On Thu, Jun 16, 2016 at 8:03 PM, Joshua M. Boniface <joshua@xxxxxxxxxxx> wrote:
RHEL 7.2 and Jewel should be using the systemd unit files by default, so you'd do something like:

> sudo systemctl stop ceph-osd@<OSDID>

and then

> sudo systemctl start ceph-osd@<OSDID>

when you're done.
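[Editorial sketch of the full test cycle the original question describes, building on the commands above; OSD id 1 and the use of the noout flag are my assumptions, not from the thread:]

```shell
ceph osd set noout          # optional: keep the cluster from rebalancing
systemctl stop ceph-osd@1   # osd.1 goes down
ceph osd tree               # osd.1 should now report "down"
# ... write test objects and verify replicas land on the other OSDs ...
systemctl start ceph-osd@1
ceph osd unset noout
```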

--
Joshua M. Boniface
Linux System Ærchitect
Sigmentation fault. Core dumped.

On 16/06/16 09:44 AM, Kanchana. P wrote:
>
> Hi,
>
> How can I down an OSD and bring it back in RHEL 7.2 with Ceph version 10.2.2?
>
> sudo start ceph-osd id=1 fails with “sudo: start: command not found”.
>
> I have 5 OSDs in each node and I want to down one particular OSD (sudo stop ceph-osd id=1 also fails) and see whether replicas are written to other OSDs without any issues.
>
> Thanks in advance.
>
> –kanchana.
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

