Re: How to let osd up which is down

I run the Ceph cluster in containers. There is no new osd.<number>.log in /var/log/ceph/<fsid>/; there is only ceph-volume.log. Which config option do I need to set to enable OSD logging?
osd.3 is still down. I checked that it maps to sdb, but there is no /dev/dm-* device mapped to any /dev/sd* device.
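
If the cluster is cephadm-managed (an assumption here, suggested by the container deployment and the "Inferring fsid" line in the output below), the daemons log to journald by default and file logging has to be switched on explicitly. A rough sketch, using the fsid shown below:

    ceph config set global log_to_file true                                  # write daemon logs to /var/log/ceph/<fsid>/ on each host
    ceph config set global mon_cluster_log_to_file true                      # also write the cluster log to a file
    journalctl -u ceph-ea39c6f0-fb3b-11eb-9f7a-b8cef60b8e48@osd.3.service    # read osd.3's container log from journald
    cephadm logs --name osd.3                                                # same journalctl lookup, wrapped by cephadm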



The Ceph cluster output:
[root@GHui ~]# ceph osd df
Inferring fsid ea39c6f0-fb3b-11eb-9f7a-b8cef60b8e48
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
 0    ssd  1.74660   1.00000  1.7 TiB  134 GiB  134 GiB   16 MiB  478 MiB  1.6 TiB   7.50  0.50  243      up
 1    ssd  1.74660   1.00000  1.7 TiB  344 GiB  343 GiB  1.9 MiB  778 MiB  1.4 TiB  19.21  1.27  338      up
 4    ssd  0.36389   1.00000  373 GiB   63 GiB   63 GiB  2.7 MiB  160 MiB  309 GiB  17.00  1.12   67      up
 2    ssd  1.74660   1.00000  1.7 TiB  320 GiB  319 GiB   10 MiB  808 MiB  1.4 TiB  17.90  1.18  351      up
 3    ssd  1.74660   1.00000      0 B      0 B      0 B      0 B      0 B      0 B      0     0    0    down
 5    ssd  0.36389   1.00000  373 GiB   64 GiB   64 GiB  634 KiB  153 MiB  309 GiB  17.15  1.13   63      up
                       TOTAL  6.0 TiB  925 GiB  923 GiB   32 MiB  2.3 GiB  5.1 TiB  15.14
[root@GHui ~]# ceph  device ls
DEVICE                                  HOST:DEV        DAEMONS     WEAR  LIFE EXPECTANCY
INTEL_SSDSC2KB019T8_PHYF102600551P9DGN  GHui:sdb      osd.3
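
Since ceph device ls still ties osd.3 to GHui:sdb, the missing /dev/dm-* device suggests the LVM volume behind the OSD was not activated after the reboot. Assuming the OSD was deployed by ceph-volume on LVM (which the ceph-volume.log hints at), a sketch of how one might check and retry:

    lsblk /dev/sdb                  # an active OSD LV should appear as a dm child of sdb
    cephadm ceph-volume lvm list    # list the LVs ceph-volume prepared on this host
    ceph orch daemon restart osd.3  # ask the orchestrator to restart the osd.3 container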



------------------ Original ------------------
From: "Janne Johansson" <icepic.dz@xxxxxxxxx>;
Date: Tue, Nov 23, 2021 03:17 PM
To: "GHui"<ugiwgh@xxxxxx>;
Cc: "ceph-users"<ceph-users@xxxxxxx>;
Subject:  Re: How to let osd up which is down

Den tis 23 nov. 2021 kl 05:56 skrev GHui <ugiwgh@xxxxxx>:
>
> I use "systemctl start/stop ceph.target" to start and stop Ceph Cluster. Maybe this is problem. Because of I restart the computer. The osd is all up.
> Is there any way to safe restart Ceph Cluster?

That is how you stop and start all Ceph services on one host, yes.
I don't know whether you hit the issue I described; I just noticed
that it looked very much like it.
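
On the "safe restart" question: one common approach (a general sketch, not something specific to this thread) is to set the noout flag so OSDs stopped during the restart are not marked out and rebalanced:

    ceph osd set noout              # keep the stopped OSDs from being marked out
    systemctl restart ceph.target   # restart all Ceph services on this host
    ceph osd unset noout            # clear the flag once the OSDs are back up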

Read the separate osd logs in /var/log/ceph/ceph-osd.<number>.log to
see why they do not start in your case.

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx