Re: Destroyed Ceph Cluster

Hello Mark,
Hello list,


I fixed the monitor issue: there was another monitor which wasn't running any more, so I removed it. Now I'm lost with the MDS still replaying its journal.
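For reference, dropping a dead monitor should look roughly like this (a sketch; <mon-id> is a placeholder for the monitor's name as listed in the monmap):

ceph mon dump              # list the monitors still in the monmap
ceph mon remove <mon-id>   # drop the dead monitor
ceph quorum_status         # confirm the remaining mons form a quorum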

root@vvx-ceph-m-02:/var/lib/ceph/mon# ceph health detail
HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; mds cluster is degraded
pg 0.3f is stuck unclean since forever, current state active+degraded, last acting [28]
...
pg 2.2 is stuck unclean since forever, current state active+degraded, last acting [37]
pg 2.3d is active+degraded, acting [28]
...
pg 0.10 is active+degraded, acting [35]
pg 2.d is active+degraded, acting [27]
...
pg 0.0 is active+degraded, acting [23]
mds cluster is degraded
mds.vvx-ceph-m-01 at 10.0.0.176:6800/1098 rank 0 is replaying journal



# ceph mds stat
e8: 1/1/1 up {0=vvx-ceph-m-01=up:replay}, 2 up:standby

The MDS logs are empty on all three nodes.
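If they stay empty, turning up the MDS debug level might show where the replay hangs - a sketch, assuming the default logging setup:

ceph tell mds.vvx-ceph-m-01 injectargs '--debug-mds 20 --debug-journaler 20'

Alternatively, set "debug mds = 20" in the [mds] section of ceph.conf and restart the daemon.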

Removing an MDS is still not supported, as far as I can tell from:
http://ceph.com/docs/master/rados/deployment/ceph-deploy-mds/
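If the active MDS never gets past the replay, one option might be to mark it failed so that one of the standbys takes over the rank - no guarantee it replays any better, just a sketch:

ceph mds fail 0   # fail rank 0; one of the two standbys should pick it up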



Georg



On 16.08.2013 16:23, Mark Nelson wrote:
Hi Georg,

I'm not an expert on the monitors, but that's probably where I would
start.  Take a look at your monitor logs and see if you can get a sense
for why one of your monitors is down.  Some of the other devs who might
know whether there are any known issues with recreating the OSDs and
missing PGs will probably be around later.
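Something along these lines should show which monitor is out of quorum and where its log lives (assuming the default log location):

ceph quorum_status                           # which mons are in / out of quorum
less /var/log/ceph/ceph-mon.<hostname>.log   # default monitor log path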

Mark

On 08/16/2013 08:21 AM, Georg Höllrigl wrote:
Hello,

I'm still evaluating Ceph - now a test cluster with the 0.67 Dumpling release.
I've created the setup with ceph-deploy from Git.
I've recreated a bunch of OSDs to give them another journal.
There was already some test data on these OSDs.
I've already recreated the missing PGs with "ceph pg force_create_pg".
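That was essentially a loop like the following (a rough sketch - the PG IDs come from "ceph health detail"):

ceph health detail | awk '/is stuck inactive/ {print $2}' | while read pg; do
    ceph pg force_create_pg "$pg"   # recreate each PG reported as stuck inactive
done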


HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean; 5 requests
are blocked > 32 sec; mds cluster is degraded; 1 mons down, quorum
0,1,2 vvx-ceph-m-01,vvx-ceph-m-02,vvx-ceph-m-03

Any idea how to fix the cluster, besides completely rebuilding it from
scratch? What if such a thing happens in a production
environment...

The PGs in "ceph pg dump" have all been showing "creating" for some time now:

2.3d    0       0       0       0       0       0       0       creating        2013-08-16 13:43:08.186537      0'0     0:0     []      []      0'0     0.000000        0'0     0.000000
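Querying one of them directly sometimes shows why it never leaves "creating", e.g.:

ceph pg 2.3d query   # per-PG state and peering information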

Is there a way to just dump the data that was on the discarded OSDs?




Kind Regards,
Georg

--
Dipl.-Ing. (FH) Georg Höllrigl
Technik

________________________________________________________________________________

Xidras GmbH
Stockern 47
3744 Stockern
Austria

Tel:     +43 (0) 2983 201 - 30505
Fax:     +43 (0) 2983 201 - 930505
Email:   georg.hoellrigl@xxxxxxxxxx
Web:     http://www.xidras.com

FN 317036 f | Landesgericht Krems | ATU64485024

________________________________________________________________________________

CONFIDENTIAL!
This email contains confidential information and is intended for the authorised recipient only. If you are not an authorised recipient, please return the email
to us and then delete it from your computer and mail-server. You may neither
use nor edit any such emails including attachments, nor make them accessible
to third parties in any manner whatsoever.
Thank you for your cooperation.

________________________________________________________________________________
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




