Bingo! A lot of people are getting this dreadful "GenericError: Failed to create 1 OSDs". Does anyone know why, despite /etc/ceph being present on each node?
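My only guess so far (untested, so take it with a grain of salt): the "Could not create partition 1 from 34 to 2047" line further down usually means sgdisk found an existing partition table on the data disk, so zapping the disk before the osd create may be worth a try. Roughly, with the host and device names from the log below used only as placeholders:

    # WARNING: this wipes everything on /dev/sda of server1 -- adjust names first
    ceph-deploy disk zap server1:sda
    # or, run directly on the OSD node:
    sudo sgdisk --zap-all /dev/sda
    # then retry:
    ceph-deploy osd create server1:sda:/dev/sdj1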
Also, FYI: purgedata against multiple nodes doesn't always work, i.e. it reports that it has uninstalled Ceph and removed /etc/ceph from all nodes, but the files are still there on every node except the first one (i.e. the first argument to the purgedata command). Hence I sometimes have to issue purgedata to individual nodes, as sketched below.
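Something like this (node names are just placeholders for however many hosts are in the cluster):

    # run purgedata one node at a time instead of passing all hosts at once
    for node in node1 node2 node3; do
        ceph-deploy purgedata "$node"
    done

rather than a single "ceph-deploy purgedata node1 node2 node3".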
From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx]
On Behalf Of charles L

Please, can somebody help? I'm getting this error:

ceph@CephAdmin:~$ ceph-deploy osd create server1:sda:/dev/sdj1
[ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-deploy osd create server1:sda:/dev/sdj1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks server1:/dev/sda:/dev/sdj1
[server1][DEBUG ] connected to host: server1
[server1][DEBUG ] detect platform information from remote host
[server1][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] Deploying osd to server1
[server1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[server1][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host server1 disk /dev/sda journal /dev/sdj1 activate True
[server1][INFO ] Running command: sudo ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sda /dev/sdj1
[server1][ERROR ] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
[server1][ERROR ] Could not create partition 1 from 34 to 2047
[server1][ERROR ] Error encountered; not saving changes.
[server1][ERROR ] ceph-disk: Error: Command '['sgdisk', '--largest-new=1', '--change-name=1:ceph data', '--partition-guid=1:d3ca8a92-7ba5-412e-abf5-06af958b788d', '--typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be', '--', '/dev/sda']' returned non-zero exit status 4
[server1][ERROR ] Traceback (most recent call last):
[server1][ERROR ]   File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/process.py", line 68, in run
[server1][ERROR ]     reporting(conn, result, timeout)
[server1][ERROR ]   File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/log.py", line 13, in reporting
[server1][ERROR ]     received = result.receive(timeout)
[server1][ERROR ]   File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 455, in receive
[server1][ERROR ]     raise self._getremoteerror() or EOFError()
[server1][ERROR ] RemoteError: Traceback (most recent call last):
[server1][ERROR ]   File "<string>", line 806, in executetask
[server1][ERROR ]   File "", line 35, in _remote_run
[server1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[server1][ERROR ]
[server1][ERROR ]
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sda /dev/sdj1
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

> Date: Thu, 31 Oct 2013 10:55:56 +0000