After Calamari installation, OSD start failed


Hi,


After I installed Calamari, Ceph shows me the following error when I change/reinstall/add osd.0:


Traceback (most recent call last):
  File "/usr/bin/calamari-crush-location", line 86, in <module>
    sys.exit(main())
  File "/usr/bin/calamari-crush-location", line 83, in main
    print get_osd_location(args.id)
  File "/usr/bin/calamari-crush-location", line 47, in get_osd_location
    last_location = get_last_crush_location(osd_id)
  File "/usr/bin/calamari-crush-location", line 27, in get_last_crush_location
    proc = Popen(c, stdout=PIPE, stderr=PIPE)
  File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1259, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
Invalid command:  saw 0 of args(<string(goodchars [A-Za-z0-9-_.=])>) [<string(goodchars [A-Za-z0-9-_.=])>...], expected at least 1
osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...] :  create entry or move existing entry for <name> <weight> at/to location <args>
Error EINVAL: invalid command
failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.0 --keyring=/var/lib/ceph/osd/ceph-0/keyring osd crush create-or-move -- 0 0.46 '
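
If I read the traceback correctly, the OSError: [Errno 2] comes from the Popen call inside /usr/bin/calamari-crush-location, so the executable the hook tries to run is apparently not found on this node. Because the hook then prints nothing, the following 'osd crush create-or-move' gets no location arguments at all, which matches the "saw 0 of args ... expected at least 1" message. Running the hook by hand should reproduce it; the flags below are only my guess at the standard crush location hook invocation:

/usr/bin/calamari-crush-location --cluster ceph --id 0 --type osd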


[global]
osd_crush_location_hook = /usr/bin/calamari-crush-location
fsid = 78227661-3a1b-4e56-addc-c2a272933ac2
mon_initial_members = ceph01
mon_host = 10.0.0.20,10.0.0.21,10.0.0.22
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
filestore_op_threads = 32
public_network = 10.0.0.0/24
cluster_network = 10.0.1.0/24
osd_pool_default_size = 3
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 4096
osd_pool_default_pgp_num = 4096
osd_max_write_size = 200
osd_map_cache_size = 1024
osd_map_cache_bl_size = 128
osd_recovery_op_priority = 1
osd_max_recovery_max_active = 1
osd_recovery_max_backfills = 1
osd_op_threads = 32
osd_disk_threads = 8
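
As a possible workaround (untested), commenting out the hook in ceph.conf should let the OSD fall back to the default crush-update-on-start placement:

#osd_crush_location_hook = /usr/bin/calamari-crush-location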


After I recreated osd.0, 'ceph osd tree' shows:


3    0.27            osd.3    up    1   
6    0.55            osd.6    up    1   
9    0.55            osd.9    up    1   
12    0.27            osd.12    up    1   
15    0.27            osd.15    up    1   
18    0.27            osd.18    up    1   
21    0.06999            osd.21    up    1   
24    0.27            osd.24    up    1   
27    0.27            osd.27    up    1   
-3    3.18        host ceph02
4    0.55            osd.4    up    1   
7    0.55            osd.7    up    1   
10    0.55            osd.10    up    1   
13    0.27            osd.13    up    1   
1    0.11            osd.1    up    1   
16    0.27            osd.16    up    1   
19    0.27            osd.19    up    1   
22    0.06999            osd.22    up    1   
25    0.27            osd.25    up    1   
28    0.27            osd.28    up    1   
-4    2.76        host ceph03
2    0.11            osd.2    up    1   
5    0.55            osd.5    up    1   
8    0.55            osd.8    up    1   
11    0.13            osd.11    up    1   
14    0.27            osd.14    up    1   
17    0.27            osd.17    up    1   
20    0.27            osd.20    up    1   
23    0.06999            osd.23    up    1   
26    0.27            osd.26    up    1   
29    0.27            osd.29    up    1   
0    0    osd.0    down    0   
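
Following the usage text from the error above, the failing call seems to be missing only the location arguments, so placing osd.0 by hand should look roughly like this (the weight 0.46 is taken from the failed command, while root=default and host=ceph01 are only my assumptions about where osd.0 belongs):

ceph osd crush create-or-move osd.0 0.46 root=default host=ceph01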


Does anybody have an idea how I can solve this?


thanks

cheers


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
