I'm pretty sure (the OP didn't specify that or other obvious things) that
this is a Debian version that predates the horror that is systemd, given
the Ceph version alone.
I thought about that, but was not really sure since I only dived into
Ceph a year ago ;-) In this case I mean the SysV init equivalent, of
course ;-)
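
On a pre-systemd Debian, the rough equivalent (assuming the default log
locations) would be something like:

/etc/init.d/ceph status osd.4
tail -n 100 /var/log/ceph/ceph-osd.4.log
grep ceph-osd /var/log/syslog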
Quoting Christian Balzer <chibi@xxxxxxx>:
Hello,
On Thu, 29 Jun 2017 08:53:25 +0000 Eugen Block wrote:
Hi,
what does systemctl status -l ceph-osd@4.service say? Is anything
suspicious in the syslog?
I'm pretty sure (the OP didn't specify that or other obvious things) that
this is a Debian version that predates the horror that is systemd, given
the Ceph version alone.
To the OP: more info please, like the Debian version and, as Eugen said,
logs, logs, logs.
More below.
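
For example (assuming default log locations), something along these lines
would already help:

cat /etc/debian_version
ceph -v
tail -n 50 /var/log/ceph/ceph-osd.4.log
dmesg | grep -i sdc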
Regards,
Eugen
Quoting Luescher Claude <stargate@xxxxxxxx>:
> Hello,
>
> I have a cluster of 3 debian ceph machines running version:
>
> ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)
>
> A new disk was added to one node but the OSD does not want to start. I
> have tried everything, like removing and re-adding the disk many times.
>
How did you do that, manually or with ceph-deploy?
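For reference, the usual Firefly-era sequence for a full remove and re-add
looks roughly like the following (host and device names here are just taken
from your output, adjust as needed):

ceph osd out 4
/etc/init.d/ceph stop osd.4
ceph osd crush remove osd.4
ceph auth del osd.4
ceph osd rm 4
ceph-deploy osd prepare store2:/dev/sdc
ceph-deploy osd activate store2:/dev/sdc1

If the auth or crush step was skipped, the re-added OSD can end up
half-registered and refuse to join.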
> The current ceph osd tree:
>
> # id  weight  type name         up/down  reweight
> -1    29      root default
> -2    9.06      host store1
> 0     1.81        osd.0         up       1
> 5     1.81        osd.5         up       1
> 6     1.81        osd.6         up       1
> 9     3.63        osd.9         up       1
> -3    9.06      host store3
> 1     1.81        osd.1         up       1
> 2     1.81        osd.2         up       1
> 8     1.81        osd.8         up       1
> 11    3.63        osd.11        up       1
> -4    10.88     host store2
> 3     1.81        osd.3         up       1
> 7     1.81        osd.7         up       1
> 10    3.63        osd.10        up       1
> 4     3.63        osd.4         down     0    < problem is with this disk
>
> All the disks are 4TB.
>
Why the different weights then?
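(For reference: the default CRUSH weight is the capacity in TiB, so a 4 TB
drive works out to about 4e12 / 2^40 ≈ 3.64, matching the 3.63 entries,
while 1.81 is what you would expect for a roughly 2 TB device. If those
really are all 4 TB disks, a weight could be adjusted with e.g.

ceph osd crush reweight osd.0 3.64
)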
> /etc/init.d/ceph start osd.4
> === osd.4 ===
> create-or-move updated item name 'osd.4' weight 3.63 at location
> {host=store2,root=default} to crush map
> Starting Ceph osd.4 on store2...
> starting osd.4 at :/0 osd_data /var/lib/ceph/osd/ceph-4
> /var/lib/ceph/osd/ceph-4/journal
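The init script output alone doesn't tell whether the daemon stayed up; it
may be starting and then dying right away. Assuming the default log path,
something like this should show it:

ps aux | grep ceph-osd
tail -n 100 /var/log/ceph/ceph-osd.4.log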
>
> If I look at the disk, the basic ceph directory was created, but the
> OSD does not use it:
>
> /dev/sdc1 3.7T 5.1G 3.7T 1% /var/lib/ceph/osd/ceph-4
>
So the disk got mounted, are all the usual suspects in that directory?
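For comparison, a Firefly FileStore OSD data directory normally contains at
least something like this (listing assumed from a healthy OSD):

ls /var/lib/ceph/osd/ceph-4
ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  whoami

If most of that is missing, the OSD was never initialized (ceph-osd -i 4
--mkfs --mkkey, or ceph-disk prepare) and it won't start from that directory.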
Christian
>
> The ceph health status:
>
> cluster <ID>
> health HEALTH_WARN 4 pgs degraded; 113 pgs stuck unclean;
> recovery 45/5084377 objects degraded (0.001%); 1 near full osd(s)
> monmap e3: 3 mons at
> {cephmon1=IP1:6789/0,store2=IP2:6789/0,store3=IP3:6789/0}, election
> epoch 6980, quorum 0,1,2 store2,store3,cephmon1
> osdmap e5709: 12 osds: 11 up, 11 in
> pgmap v79160817: 1216 pgs, 5 pools, 9763 GB data, 2481 kobjects
> 19644 GB used, 6370 GB / 26014 GB avail
> 45/5084377 objects degraded (0.001%)
> 1103 active+clean
> 4 active+degraded
> 109 active+remapped
> client io 21341 B/s rd, 477 kB/s wr, 118 op/s
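One more thing worth checking from that output, since there is a near-full
OSD and there are stuck-unclean PGs:

ceph health detail
ceph osd dump | grep osd.4

The first shows which OSD is near full; the second shows how the cluster
currently sees osd.4 (up/down, in/out, weight).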
>
>
> Any idea how to fix this? If possible without an upgrade. I don't
> want to upgrade this cluster to another version, ever. It does its job
> as it should.
>
>
> Thank you,
> Claude
--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Rakuten Communications
--
Eugen Block                             voice  : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG      fax    : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg                         e-mail : eblock@xxxxxx
Chairwoman of the Supervisory Board: Angelika Mozdzen
Registered office and court of registration: Hamburg, HRB 90934
Executive Board: Jens-U. Mozdzen
VAT ID: DE 814 013 983
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com