Re: How do I start ceph jewel in CentOS?

I think this is actually fixed in master, though probably not yet backported
to jewel. "systemctl status" should list the ceph services unless there is
some other issue with your node.

Example output:

           └─system.slice
             ├─system-ceph\x2dosd.slice
             │ └─ceph-osd@0.service
             │   └─22652 /usr/bin/ceph-osd -f --cluster ceph --id 0
--setuser ceph --setgroup ceph
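
A note on the \x2d in the slice name above: systemd escapes the "-" inside
a unit-name component as \x2d, which is why the "ceph-osd" slice shows up
as system-ceph\x2dosd.slice. The same escaping can be reproduced with plain
shell string handling (the sed call below is purely illustrative, not part
of ceph or systemd):

```shell
# systemd escapes "-" in unit-name components as \x2d, so "ceph-osd"
# becomes "ceph\x2dosd" in the slice name seen in systemctl status.
# Illustrative reproduction with sed:
printf '%s\n' "ceph-osd" | sed 's/-/\\x2d/g'
# prints: ceph\x2dosd
```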



This is on the latest branch, though:


[ubuntu@mira078 cd]$ sudo systemctl status ceph-osd@0.service
● ceph-osd@0.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled;
vendor preset: disabled)
   Active: active (running) since Wed 2016-05-04 16:57:37 UTC; 4min 22s ago
  Process: 23074 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh
--cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 23125 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
           └─23125 /usr/bin/ceph-osd -f --cluster ceph --id 0
--setuser ceph --setgroup ceph

May 04 16:57:37 mira078 systemd[1]: Starting Ceph object storage daemon...
May 04 16:57:37 mira078 ceph-osd-prestart.sh[23074]: create-or-move
updated item name 'osd.0' weight 0.9044 at location
{host=mira078,root=default} to crush map
May 04 16:57:37 mira078 systemd[1]: Started Ceph object storage daemon.
May 04 16:57:37 mira078 ceph-osd[23125]: starting osd.0 at :/0
osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
May 04 16:57:37 mira078 ceph-osd[23125]: 2016-05-04 16:57:37.892615
7f110eeff800 -1 osd.0 9 log_to_monitors {default=true}
[ubuntu@mira078 cd]$
[ubuntu@mira078 cd]$
[ubuntu@mira078 cd]$ sudo systemctl stop ceph-osd@0.service
[ubuntu@mira078 cd]$
[ubuntu@mira078 cd]$
[ubuntu@mira078 cd]$ sudo systemctl status ceph-osd@0.service
● ceph-osd@0.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled;
vendor preset: disabled)
   Active: inactive (dead) since Wed 2016-05-04 17:02:09 UTC; 2s ago
  Process: 23125 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER}
--id %i --setuser ceph --setgroup ceph (code=exited, status=0/SUCCESS)
  Process: 23074 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh
--cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 23125 (code=exited, status=0/SUCCESS)

May 04 16:57:37 mira078 systemd[1]: Starting Ceph object storage daemon...
May 04 16:57:37 mira078 ceph-osd-prestart.sh[23074]: create-or-move
updated item name 'osd.0' weight 0.9044 at location
{host=mira078,root=default} to crush map
May 04 16:57:37 mira078 systemd[1]: Started Ceph object storage daemon.
May 04 16:57:37 mira078 ceph-osd[23125]: starting osd.0 at :/0
osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
May 04 16:57:37 mira078 ceph-osd[23125]: 2016-05-04 16:57:37.892615
7f110eeff800 -1 osd.0 9 log_to_monitors {default=true}
May 04 17:02:06 mira078 systemd[1]: Stopping Ceph object storage daemon...
May 04 17:02:06 mira078 ceph-osd[23125]: 2016-05-04 17:02:06.972780
7f10e819b700 -1 osd.0 12 *** Got signal Terminated ***
May 04 17:02:07 mira078 ceph-osd[23125]: 2016-05-04 17:02:07.027192
7f10e819b700 -1 osd.0 12 shutdown
May 04 17:02:09 mira078 systemd[1]: Stopped Ceph object storage daemon.
[ubuntu@mira078 cd]$
[ubuntu@mira078 cd]$
[ubuntu@mira078 cd]$ sudo systemctl start ceph-osd@0.service
[ubuntu@mira078 cd]$
[ubuntu@mira078 cd]$
[ubuntu@mira078 cd]$ sudo systemctl status ceph-osd@0.service
● ceph-osd@0.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled;
vendor preset: disabled)
   Active: active (running) since Wed 2016-05-04 17:02:19 UTC; 4s ago
  Process: 23283 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh
--cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 23335 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
           └─23335 /usr/bin/ceph-osd -f --cluster ceph --id 0
--setuser ceph --setgroup ceph

May 04 17:02:18 mira078 systemd[1]: Starting Ceph object storage daemon...
May 04 17:02:19 mira078 ceph-osd-prestart.sh[23283]: create-or-move
updated item name 'osd.0' weight 0.9044 at location
{host=mira078,root=default} to crush map
May 04 17:02:19 mira078 systemd[1]: Started Ceph object storage daemon.
May 04 17:02:19 mira078 ceph-osd[23335]: starting osd.0 at :/0
osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
May 04 17:02:19 mira078 ceph-osd[23335]: 2016-05-04 17:02:19.503305
7fd5731d4800 -1 osd.0 13 log_to_monitors {default=true}
[ubuntu@mira078 cd]$
[ubuntu@mira078 cd]$
[ubuntu@mira078 cd]$ ps -eaf | grep ceph
ceph     22420     1  0 16:55 ?        00:00:00 /usr/bin/ceph-mon -f
--cluster ceph --id mira078 --setuser ceph --setgroup ceph
ceph     23335     1  1 17:02 ?        00:00:00 /usr/bin/ceph-osd -f
--cluster ceph --id 0 --setuser ceph --setgroup ceph
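
The "Unit name ceph-osd@.service is not valid" error lower in the thread is
expected: ceph-osd@.service is a systemd template unit, and it only becomes
a startable unit once an instance name is appended after the "@" (for
example ceph-osd@0.service). systemd substitutes that instance for %i in
the ExecStart/ExecStartPre lines shown above, which is where "--id 0" comes
from. A small plain-shell sketch of the instance extraction (ordinary
string handling, not a systemd API):

```shell
# A templated unit name like "ceph-osd@0.service" carries its instance
# between "@" and ".service"; systemd substitutes it for %i, which is
# how ExecStartPre's "--id %i" becomes "--id 0" in the output above.
unit="ceph-osd@0.service"
instance="${unit#*@}"            # drop everything through "@"  -> "0.service"
instance="${instance%.service}"  # drop the ".service" suffix   -> "0"
echo "$instance"
# prints: 0
```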

On Wed, May 4, 2016 at 9:58 AM, Michael Kuriger <mk7193@xxxxxx> wrote:
> How are others starting ceph services?  Am I the only person trying to install jewel on CentOS 7?
> Unfortunately, systemctl status does not list any “ceph” services at all.
>
> On 5/4/16, 9:37 AM, "Vasu Kulkarni" <vakulkar@xxxxxxxxxx> wrote:
>
>>Sadly, there are still some issues with the jewel/master branch for the
>>CentOS systemctl services. As a workaround, run "systemctl status", look
>>at the topmost service name in the ceph-osd service tree, and use that
>>name to stop/start; that should work.
>>
>>
>>On Wed, May 4, 2016 at 9:00 AM, Michael Kuriger <mk7193@xxxxxx> wrote:
>>> I’m running CentOS 7.2. I upgraded one server from hammer to jewel. I
>>> cannot get ceph to start using these new systemd scripts. Can anyone help?
>>>
>>> I tried to enable ceph-osd@.service by creating symlinks manually.
>>>
>>> # systemctl list-unit-files|grep ceph
>>>
>>> ceph-create-keys@.service              static
>>>
>>> ceph-disk@.service                     static
>>>
>>> ceph-mds@.service                      disabled
>>>
>>> ceph-mon@.service                      disabled
>>>
>>> ceph-osd@.service                      enabled
>>>
>>> ceph-mds.target                        disabled
>>>
>>> ceph-mon.target                        disabled
>>>
>>> ceph-osd.target                        enabled
>>>
>>> ceph.target                            enabled
>>>
>>>
>>>
>>> # systemctl start ceph.target
>>>
>>>
>>> # systemctl status ceph.target
>>>
>>> ● ceph.target - ceph target allowing to start/stop all ceph*@.service
>>> instances at once
>>>
>>>    Loaded: loaded (/usr/lib/systemd/system/ceph.target; enabled; vendor
>>> preset: disabled)
>>>
>>>    Active: active since Wed 2016-05-04 08:53:30 PDT; 4min 6s ago
>>>
>>>
>>> May 04 08:53:30  systemd[1]: Reached target ceph target allowing to
>>> start/stop all ceph*@.service instances at once.
>>>
>>> May 04 08:53:30  systemd[1]: Starting ceph target allowing to start/stop all
>>> ceph*@.service instances at once.
>>>
>>> May 04 08:57:32  systemd[1]: Reached target ceph target allowing to
>>> start/stop all ceph*@.service instances at once.
>>>
>>>
>>> # systemctl status ceph-osd.target
>>>
>>> ● ceph-osd.target - ceph target allowing to start/stop all ceph-osd@.service
>>> instances at once
>>>
>>>    Loaded: loaded (/usr/lib/systemd/system/ceph-osd.target; enabled; vendor
>>> preset: disabled)
>>>
>>>    Active: active since Wed 2016-05-04 08:53:30 PDT; 4min 20s ago
>>>
>>>
>>> May 04 08:53:30  systemd[1]: Reached target ceph target allowing to
>>> start/stop all ceph-osd@.service instances at once.
>>>
>>> May 04 08:53:30  systemd[1]: Starting ceph target allowing to start/stop all
>>> ceph-osd@.service instances at once.
>>>
>>>
>>> # systemctl status ceph-osd@.service
>>>
>>> Failed to get properties: Unit name ceph-osd@.service is not valid.
>>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



