Hi Alfredo,
Everything works fine now. Thank you!
Hi Roman,
This was a recent change in ceph-deploy that enables Ceph services on
CentOS/RHEL/Fedora distros after deploying a daemon (an OSD in your
case).
There was an issue where the remote connection was closed before the
service could be enabled when creating an OSD; this was fixed just
yesterday (ticket: http://tracker.ceph.com/issues/9698).
This should not affect your OSD deployment, and a new ceph-deploy
release that fixes this should be coming up soon.
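For context, the failing step is the init-system check that ceph-deploy runs on the remote host (reading /proc/1/comm to see whether PID 1 is systemd); if the SSH transport has already been closed, that remote read fails before the service can be enabled. A minimal local sketch of that detection logic (illustrative names, not the exact ceph_deploy/util/system.py code):

```python
import io


def is_systemd(comm_path='/proc/1/comm', _open=open):
    """Detect a systemd host by checking the name of PID 1.

    ceph-deploy performs a check like this over its remote connection;
    in the bug above, the connection was already closed, so the read
    raised IOError ("cannot send (already closed?)").
    The _open hook is only here so the sketch can be exercised locally.
    """
    try:
        with _open(comm_path) as f:
            return f.read().strip() == 'systemd'
    except (IOError, OSError):
        # Treat an unreadable /proc/1/comm as "not systemd".
        return False


# Exercise the sketch with fake /proc/1/comm contents:
print(is_systemd(_open=lambda path: io.StringIO('systemd\n')))  # True
print(is_systemd(_open=lambda path: io.StringIO('init\n')))     # False
```

On a CentOS 6.5 box like yours, PID 1 is SysV init, so this check returns False and ceph-deploy falls back to sysvinit handling.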
On Thu, Oct 9, 2014 at 4:55 AM, Roman <intrasky@xxxxxxxxx> wrote:
Hi All,
Does anybody know how to fix a ceph-deploy problem like this?
[root@ceph01 ceph-new-2]# ceph-deploy osd activate ceph03:/var/local/osd0 ceph04:/var/local/osd1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.17): /usr/bin/ceph-deploy osd activate ceph03:/var/local/osd0 ceph04:/var/local/osd1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph03:/var/local/osd0: ceph04:/var/local/osd1:
[ceph03][DEBUG ] connected to host: ceph03
[ceph03][DEBUG ] detect platform information from remote host
[ceph03][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host ceph03 disk /var/local/osd0
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph03][INFO ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /var/local/osd0
[ceph03][DEBUG ] === osd.0 ===
[ceph03][DEBUG ] Starting Ceph osd.0 on ceph03...already running
[ceph03][WARNIN] DEBUG:ceph-disk:Cluster uuid is f948a85c-cc63-498e-908b-d461085538dd
[ceph03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph03][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[ceph03][WARNIN] DEBUG:ceph-disk:OSD uuid is e880f969-c5a6-4fdc-ba68-abdd4db6a97d
[ceph03][WARNIN] DEBUG:ceph-disk:OSD id is 0
[ceph03][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
[ceph03][WARNIN] DEBUG:ceph-disk:ceph osd.0 data dir is ready at /var/local/osd0
[ceph03][WARNIN] DEBUG:ceph-disk:Starting ceph osd.0...
[ceph03][WARNIN] INFO:ceph-disk:Running command: /sbin/service ceph --cluster ceph start osd.0
[ceph03][INFO ] checking OSD status...
[ceph03][INFO ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy][ERROR ] Traceback (most recent call last):
[ceph_deploy][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/util/decorators.py", line 69, in newfunc
[ceph_deploy][ERROR ]     return f(*a, **kw)
[ceph_deploy][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/cli.py", line 160, in _main
[ceph_deploy][ERROR ]     return args.func(args)
[ceph_deploy][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/osd.py", line 603, in osd
[ceph_deploy][ERROR ]     activate(args, cfg)
[ceph_deploy][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/osd.py", line 387, in activate
[ceph_deploy][ERROR ]     system.enable_service(distro.conn)
[ceph_deploy][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/util/system.py", line 41, in enable_service
[ceph_deploy][ERROR ]     if is_systemd(conn):
[ceph_deploy][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/util/system.py", line 30, in is_systemd
[ceph_deploy][ERROR ]     '/proc/1/comm'
[ceph_deploy][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/lib/vendor/remoto/connection.py", line 98, in wrapper
[ceph_deploy][ERROR ]     self.channel.send("%s(%s)" % (name, arguments))
[ceph_deploy][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/lib/vendor/remoto/lib/vendor/execnet/gateway_base.py", line 684, in send
[ceph_deploy][ERROR ]     self.gateway._send(Message.CHANNEL_DATA, self.id, dumps_internal(item))
[ceph_deploy][ERROR ]   File "/usr/lib/python2.6/site-packages/ceph_deploy/lib/vendor/remoto/lib/vendor/execnet/gateway_base.py", line 953, in _send
[ceph_deploy][ERROR ]     raise IOError("cannot send (already closed?)")
[ceph_deploy][ERROR ] IOError: cannot send (already closed?)
[ceph_deploy][ERROR ]
Thanks,
Roman
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com