osd down

Hi, all:
        I am installing Ceph on 2 nodes using ceph-deploy.
          node1: monitor, plus osd.0 and osd.1,
                ip: 192.168.1.11
          node2: osd.2 and osd.3,
                ip: 192.168.1.12
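
        For reference, the deployment followed roughly these ceph-deploy steps
        (a sketch from memory, not the exact commands; /dev/sdb and /dev/sdc
        stand in for my actual data disks):

            [ceph@node1 ~]$ ceph-deploy new node1                  # write the initial ceph.conf and monmap
            [ceph@node1 ~]$ ceph-deploy install node1 node2        # install the ceph packages on both nodes
            [ceph@node1 ~]$ ceph-deploy mon create-initial         # create the monitor and gather keys
            [ceph@node1 ~]$ ceph-deploy osd create node1:/dev/sdb node1:/dev/sdc
            [ceph@node1 ~]$ ceph-deploy osd create node2:/dev/sdb node2:/dev/sdc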
              
        When I configured node1 as the monitor with its two OSDs, everything was OK.
        But after I added the two OSDs on node2, ceph -s shows 4 osds, but only 2 up and 2 in:
                [ceph@node1 ~]$ ceph -s
                    cluster b21a2377-4f35-4e63-bc7b-e7ee9db4cbab
                     health HEALTH_WARN
                            too many PGs per OSD (512 > max 300)
                            pool rbd pg_num 1024 > pgp_num 64
                     monmap e2: 1 mons at {node1=192.168.1.11:6789/0}
                            election epoch 4, quorum 0 node1
                        mgr no daemons active
                     osdmap e457: 4 osds: 2 up, 2 in
                            flags sortbitwise,require_jewel_osds,require_kraken_osds
                      pgmap v1442: 1024 pgs, 1 pools, 0 bytes data, 0 objects
                            10335 MB used, 1852 GB / 1862 GB avail
                                1024 active+clean
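
        (About the two PG warnings: as far as I understand, the pgp_num one can be
        cleared by raising pgp_num to match pg_num, e.g.:

            [ceph@node1 ~]$ ceph osd pool set rbd pgp_num 1024    # align pgp_num with pg_num

        but that looks unrelated to the down OSDs.)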

        But I found that ceph-osd@2.service is active and running on node2.
            [ceph@node2 ~]$ sudo systemctl status ceph-osd@2
                  ● ceph-osd@2.service - Ceph object storage daemon
                     Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
                     Active: active (running) since Sat 2017-04-15 14:18:18 CST; 1h 37min ago
                    Process: 6045 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
                   Main PID: 6096 (ceph-osd)
                     CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@2.service
                             └─6096 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph

                  Apr 15 14:18:18 node2 systemd[1]: Starting Ceph object storage daemon...
                  Apr 15 14:18:18 node2 ceph-osd-prestart.sh[6045]: create-or-move updated item name 'osd.2' weight 0.9092 at location {host=node2,root=default}...rush map
                  Apr 15 14:18:18 node2 systemd[1]: Started Ceph object storage daemon.
                  Apr 15 14:18:18 node2 ceph-osd[6096]: starting osd.2 at :/0 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
                  Apr 15 14:18:18 node2 ceph-osd[6096]: 2017-04-15 14:18:18.772054 7f8afc18e800 -1 journal FileJournal::_open: disabling aio for non-block jour...io anyway
                  Apr 15 14:18:18 node2 ceph-osd[6096]: 2017-04-15 14:18:18.790188 7f8afc18e800 -1 osd.2 447 log_to_monitors {default=true}
                  Hint: Some lines were ellipsized, use -l to show in full.
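
        The status output above is ellipsized; if the full logs help, I can pull
        them like this:

            [ceph@node2 ~]$ sudo journalctl -u ceph-osd@2 -l --no-pager    # full, non-ellipsized unit log
            [ceph@node2 ~]$ sudo tail -n 50 /var/log/ceph/ceph-osd.2.log   # the OSD's own log file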

        And here is some more info about this cluster.
         
        ceph osd tree:
         [ceph@node1 ~]$ ceph osd tree
         ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
         -1 3.63678 root default
         -2 1.81839     host node1
          0 0.90919         osd.0       up  1.00000          1.00000
          1 0.90919         osd.1       up  1.00000          1.00000
         -3 1.81839     host node2
          2 0.90919         osd.2     down        0          1.00000
          3 0.90919         osd.3     down        0          1.00000


        ceph osd stat:
         [ceph@node1 ~]$ ceph osd stat
         osdmap e457: 4 osds: 2 up, 2 in
              flags sortbitwise,require_jewel_osds,require_kraken_osds



         I found an error in the log file:
              cluster [INF] disallowing boot of OSD osd.3 192.168.1.12:6804/5828 because the osdmap requires CEPH_FEATURE_SERVER_KRAKEN but the osd lacks CEPH_FEATURE_SERVER_KRAKEN
           I searched for this error on Google but found nothing.
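
           My guess from that message is that the ceph-osd binaries on node2 are
           older (pre-Kraken) than the osdmap's require_kraken_osds flag expects.
           I can compare versions and re-check the flags like this (run locally
           on each node, since osd.2 and osd.3 are down):

            [ceph@node1 ~]$ ceph-osd --version            # local ceph-osd binary on node1
            [ceph@node2 ~]$ ceph-osd --version            # same on node2; should match node1
            [ceph@node1 ~]$ ceph osd dump | grep flags    # shows require_jewel_osds / require_kraken_osds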
           Could you help me fix this error?
           Thanks for any help.





_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
