Re: Ceph OSDs are down and cannot be started


 



Holy mackerel  :-)   The culprit is OSD automount, which is not working!

Finally I came across
  https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg16597.html

After mounting and starting the 3 OSDs manually (here shown for the first
OSD):
# mount /dev/sdb2 /var/lib/ceph/osd/ceph-0
# start ceph-osd id=0
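
The same two steps apply on the other two nodes; a sketch, assuming each node
carries one OSD on /dev/sdb2, with osd.1 on sto-vm22 and osd.2 on sto-vm23:

# on sto-vm22
mount /dev/sdb2 /var/lib/ceph/osd/ceph-1
start ceph-osd id=1
# on sto-vm23
mount /dev/sdb2 /var/lib/ceph/osd/ceph-2
start ceph-osd id=2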

and witnessing that XFS went through a recovery:

Kernel:
Jul  8 14:23:02 sto-vm21 kernel: [  580.451102] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
Jul  8 14:23:03 sto-vm21 kernel: [  580.457043] XFS (sdb2): Mounting V4 Filesystem
Jul  8 14:23:03 sto-vm21 kernel: [  580.540095] XFS (sdb2): Starting recovery (logdev: internal)
Jul  8 14:23:03 sto-vm21 kernel: [  580.554778] XFS (sdb2): Ending recovery (logdev: internal)

the cluster is healthy again, but only until the next OSD reboot ...  so
I'll need a persistent solution ...
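
Until the automount works again, a crude but persistent workaround might be
to activate the OSDs at the end of boot, e.g. from /etc/rc.local on each OSD
node. An untested sketch ('ceph-disk activate' mounts the data partition and
starts the corresponding daemon):

# in /etc/rc.local on each OSD node (data partition assumed to be /dev/sdb2)
/usr/sbin/ceph-disk activate /dev/sdb2
exit 0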

So the bug reported in
    http://tracker.ceph.com/issues/5194
    Bug #5194: udev does not start osd after reboot on wheezy or el6 or
fedora

should be re-opened, and IMHO the whole OSD automount business should be
carefully documented on ceph.com.

By the way, others apparently have the same problem; see "Set the right GPT
type GUIDs on OSD and journal partitions for udev automount rules" at

https://www.mirantis.com/openstack-portal/external-tutorials/ceph-mirantis-openstack-full-transcript/
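
If the root cause is indeed a missing Ceph GPT type GUID (as the Mirantis
transcript suggests), the clean fix would presumably be to retag the data
partition so the udev rules match it again at boot. An untested sketch:

# tag partition 2 of /dev/sdb with the Ceph "OSD data" type GUID
# (journal partitions would use 45b0969e-9b03-4f30-b4c6-b4b80ceff106)
sgdisk --typecode=2:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb
partprobe /dev/sdb   # make the kernel re-read the partition table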


It would be great to fix this problem for good.

Thanks,
- Fredy




From:	Fredy Neeser <nfd@xxxxxxxxxxxxxx>
To:	Somnath Roy <Somnath.Roy@xxxxxxxxxxx>,
            "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Date:	07/08/2015 01:21 PM
Subject:	Re:  Ceph OSDs are down and cannot be started
Sent by:	"ceph-users" <ceph-users-bounces@xxxxxxxxxxxxxx>



Thanks for the tip!   It's weird -- on my first OSD, I get:

$ sudo ceph-osd -i 0 -f
...
2015-07-08 09:29:35.207874 7fbd3507d800 -1  ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory

Indeed,   /var/lib/ceph/osd/ceph-0/  is empty!   Same problem on all 3
OSDs ...

OTOH, the MON node has
   $ ls  /var/lib/ceph/mon/ceph-sto-vm20/
   done  keyring  store.db  upstart

On June 19 there must have been a bad event on all my OSD VMs, causing
them to lose all files in  /var/lib/ceph/osd/ceph-*/ :
I can see from the daily /var/log/ceph logs that scrubbing took place twice
per day until June 19, 12 am.  After that, no further log messages were
written.

From an older kernel log file, I can see that the VMs were rebooted on
June 19 at 6:40 pm, but I can't see any errors related to sdb, where the
OSDs are located.


Wait a minute ...  sdb2 (having XFS) no longer gets mounted in the most
recent kernel log.  Could this be related to
  http://tracker.ceph.com/issues/5194
  Bug #5194: udev does not start osd after reboot on wheezy or el6 or
fedora
?

Here are the disks on my first OSD:

root@sto-vm21:/home/nfd# /usr/sbin/ceph-disk list
/dev/sda :
 /dev/sda1 other, ext4, mounted on /
 /dev/sda2 other, 0x5
 /dev/sda5 swap, swap
/dev/sdb :
 /dev/sdb1 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
 /dev/sdb2 other, xfs
/dev/sr0 other, iso9660
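
Note that 0fc63daf-8483-4772-8e79-3d69d8477de4 is, if I'm not mistaken, just
the generic "Linux filesystem" GPT type GUID; a ceph-disk-prepared data
partition would show up as "ceph data" here instead of "other, xfs". The
type GUID of the XFS partition can be double-checked with:

# print GPT details (including the partition type GUID) for partition 2
sgdisk -i 2 /dev/sdb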

Sage's tip
  # partprobe /dev/sdb
has no effect.

Now I suspect some weird udev problem.
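
To poke at the udev side, one could check whether the Ceph udev rules
shipped with the package are in place and replay the block-device add events
(the rules path is from my install and may differ):

cat /lib/udev/rules.d/95-ceph-osd.rules   # rules that should fire on Ceph type GUIDs
udevadm trigger --action=add --sysname-match='sdb*'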

Thanks,
- Fredy




From:		 Somnath Roy <Somnath.Roy@xxxxxxxxxxx>
To:		 Fredy Neeser <nfd@xxxxxxxxxxxxxx>, "ceph-users@xxxxxxxxxxxxxx"
            <ceph-users@xxxxxxxxxxxxxx>
Date:		 07/07/2015 07:49 PM
Subject:		 RE:  Ceph OSDs are down and cannot be started



Run 'ceph-osd -i 0 -f' in a console and see what the output is.
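
If that is not verbose enough, the subsystem debug levels can also be raised
on the command line, e.g. (untested on this setup):

ceph-osd -i 0 -f --debug-osd 20 --debug-filestore 20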

Thanks & Regards
Somnath



-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
Fredy Neeser
Sent: Tuesday, July 07, 2015 9:15 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject:  Ceph OSDs are down and cannot be started


Hi,

I had a working Ceph Hammer test setup with 3 OSDs and 1 MON (running on
VMs), and RBD was working fine.

The setup was not touched for two weeks (also no I/O activity), and when I
looked again, the cluster was in a bad state:

On the MON node (sto-vm20):
$ ceph health
HEALTH_WARN 72 pgs stale; 72 pgs stuck stale; 3/3 in osds are down

$ ceph health detail
HEALTH_WARN 72 pgs stale; 72 pgs stuck stale; 3/3 in osds are down
pg 0.22 is stuck stale for 1457679.263525, current state stale+active+clean, last acting [2,1,0]
pg 0.21 is stuck stale for 1457679.263529, current state stale+active+clean, last acting [1,2,0]
pg 0.20 is stuck stale for 1457679.263531, current state stale+active+clean, last acting [1,0,2]
pg 0.1f is stuck stale for 1457679.263533, current state stale+active+clean, last acting [2,0,1]
...
pg 0.24 is stuck stale for 1457679.263625, current state stale+active+clean, last acting [2,0,1]
pg 0.23 is stuck stale for 1457679.263627, current state stale+active+clean, last acting [1,2,0]
osd.0 is down since epoch 16, last address 9.4.68.111:6800/1658
osd.1 is down since epoch 16, last address 9.4.68.112:6800/1659
osd.2 is down since epoch 16, last address 9.4.68.113:6800/1654

On the OSD nodes (sto-vm21, sto-vm22, sto-vm23), no Ceph daemon is running:
$ ps -ef | egrep "ceph|osd|rados"
(returns nothing)

I rebooted the OSD nodes as well as the MON, but still only the ceph-mon
daemon is running on the MON node.

I tried to start the OSDs manually on the OSD nodes by executing
$ sudo /etc/init.d/ceph start osd
but I saw neither an error message nor a logfile update.

On the OSD nodes, the log files in /var/log/ceph have not been updated since
the failure event.


What is strange is that the OSDs no longer have any admin socket files
(which should normally be in /run/ceph), whereas the MON node does have an
admin socket:
$ ls -la /run/ceph
srwxr-xr-x  1 root root   0 Jul  7 15:27 ceph-mon.sto-vm20.asok

This looks very similar to
http://tracker.ceph.com/issues/7188
Bug #7188: Admin socket files are lost on log rotation calling initctl
reload (ubuntu 13.04 only)

Any ideas on how to restart / recover the OSDs are much appreciated.
How can I start the OSD daemon(s) such that I can see any errors?

Thanks,
- Fredy

PS: The Ceph setup is on  Ubuntu 14.04.2 LTS (GNU/Linux 3.16.0-41-generic
x86_64)










_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


