Ceph New OSD cannot be started

Hello,

I have a cluster of 3 debian ceph machines running version:

ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)

A new disk was added to one node, but the OSD on it does not want to start. I have tried everything, including removing and re-adding the disk many times.
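
For reference, the remove/re-add sequence I have been using is roughly the standard manual one (from memory, so the exact order may differ slightly; /dev/sdc is the new disk on store2):

ceph osd out 4
/etc/init.d/ceph stop osd.4        # on store2
ceph osd crush remove osd.4
ceph auth del osd.4
ceph osd rm 4
ceph-disk prepare /dev/sdc         # repartition and mkfs the new disk
ceph-disk activate /dev/sdc1       # registers osd.4 again and tries to start it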

The current ceph osd tree:

# id	weight	type name	up/down	reweight
-1	29	root default
-2	9.06		host store1
0	1.81			osd.0	up	1
5	1.81			osd.5	up	1
6	1.81			osd.6	up	1
9	3.63			osd.9	up	1
-3	9.06		host store3
1	1.81			osd.1	up	1
2	1.81			osd.2	up	1
8	1.81			osd.8	up	1
11	3.63			osd.11	up	1
-4	10.88		host store2
3	1.81			osd.3	up	1
7	1.81			osd.7	up	1
10	3.63			osd.10	up	1
4	3.63			osd.4	down	0	< problem is with this disk

All the disks are 4TB.

/etc/init.d/ceph start osd.4
=== osd.4 ===
create-or-move updated item name 'osd.4' weight 3.63 at location {host=store2,root=default} to crush map
Starting Ceph osd.4 on store2...
starting osd.4 at :/0 osd_data /var/lib/ceph/osd/ceph-4 /var/lib/ceph/osd/ceph-4/journal
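
The init script does not report any error, but the daemon is not running afterwards. If it helps, I can run it in the foreground with debug logging on store2 and post the output, something like:

ceph-osd -i 4 -d --debug-osd 20    # run in the foreground, log to stderr

or the tail of its log file:

tail -n 50 /var/log/ceph/ceph-osd.4.log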

Looking at the disk, the basic Ceph data directory was created, but the OSD does not seem to be using it:

/dev/sdc1                3.7T  5.1G  3.7T   1% /var/lib/ceph/osd/ceph-4
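
To rule out something obvious in the data directory itself, I can compare it with a working OSD on the same host and check the identity files, e.g.:

ls /var/lib/ceph/osd/ceph-3        # a working OSD on store2, for comparison
ls /var/lib/ceph/osd/ceph-4        # the new one
cat /var/lib/ceph/osd/ceph-4/whoami
cat /var/lib/ceph/osd/ceph-4/fsid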


The ceph health status:

    cluster <ID>
     health HEALTH_WARN 4 pgs degraded; 113 pgs stuck unclean; recovery 45/5084377 objects degraded (0.001%); 1 near full osd(s)
     monmap e3: 3 mons at {cephmon1=IP1:6789/0,store2=IP2:6789/0,store3=IP3:6789/0}, election epoch 6980, quorum 0,1,2 store2,store3,cephmon1
     osdmap e5709: 12 osds: 11 up, 11 in
      pgmap v79160817: 1216 pgs, 5 pools, 9763 GB data, 2481 kobjects
            19644 GB used, 6370 GB / 26014 GB avail
            45/5084377 objects degraded (0.001%)
                1103 active+clean
                   4 active+degraded
                 109 active+remapped
  client io 21341 B/s rd, 477 kB/s wr, 118 op/s
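
One thing I am not sure about is whether a stale cephx key from one of the earlier attempts is getting in the way. I assume I can check that by comparing the key the monitors have for osd.4 with the one on disk, and by confirming the OSD is registered in the osdmap:

ceph auth get osd.4
cat /var/lib/ceph/osd/ceph-4/keyring
ceph osd dump | grep osd.4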


Any idea how to fix this? If possible, without an upgrade; I don't want to upgrade this cluster to another version, ever. It does its job as it should.


Thank you,
Claude
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


