Re: Destroyed Ceph Cluster

Hi Georg,

I'm not an expert on the monitors, but that's probably where I would start. Take a look at your monitor logs and see if you can get a sense of why one of your monitors is down. Some of the other devs, who might know whether there are any known issues with recreating the OSDs and missing PGs, will probably be around later.
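A rough sketch of where to look, assuming the default log location and systemd-less init of the Dumpling era; the hostname vvx-ceph-m-01 is taken from the health output below and may differ on your nodes:

```shell
# Which monitor dropped out of quorum?
ceph mon stat
ceph quorum_status

# On the affected monitor host, check its log for election or store errors.
# Default filestore-era log path; adjust if you changed "log file" in ceph.conf.
tail -n 200 /var/log/ceph/ceph-mon.vvx-ceph-m-01.log
```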

Mark

On 08/16/2013 08:21 AM, Georg Höllrigl wrote:
Hello,

I'm still evaluating Ceph - now with a test cluster running the 0.67 Dumpling release.
I created the setup with ceph-deploy from Git.
I recreated a bunch of OSDs to give them another journal; there was already some test data on those OSDs.
I've already recreated the missing PGs with "ceph pg force_create_pg".
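For reference, this is roughly what I ran - a sketch, not the exact session; the awk pattern assumes the "pg X.Y is stuck inactive ..." line format that "ceph health detail" prints in this release:

```shell
# Re-issue force_create_pg for every PG reported stuck inactive.
# PG IDs are taken live from the health output, not hard-coded.
for pg in $(ceph health detail | awk '$1 == "pg" && /stuck inactive/ {print $2}'); do
    ceph pg force_create_pg "$pg"
done
```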


HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean; 5 requests are blocked > 32 sec; mds cluster is degraded; 1 mons down, quorum 0,1,2 vvx-ceph-m-01,vvx-ceph-m-02,vvx-ceph-m-03

Any idea how to fix the cluster, besides completely rebuilding it from scratch? What if such a thing happens in a production environment...

The PGs from "ceph pg dump" have all been stuck in "creating" for some time now:

2.3d 0 0 0 0 0 0 0 creating 2013-08-16 13:43:08.186537 0'0 0:0 [] [] 0'0 0.0000000'0 0.000000
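To see why a PG is stuck rather than just that it is, querying one directly can help; a sketch using the PG ID from the dump line above:

```shell
# Ask the cluster for this PG's full state, including why peering/creation
# has not progressed (e.g. no OSDs in the acting set: "[] []" above).
ceph pg 2.3d query

# Summarize how many PGs sit in each state.
ceph pg dump | awk '{print $8}' | sort | uniq -c
```

Note the empty up/acting sets ("[] []") in the dump line: with force_create_pg, a PG stays in "creating" until CRUSH can map it to live OSDs, so the query output is where I'd look for what's blocking that mapping.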

Is there a way to just dump the data that was on the discarded OSDs?
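If the old disks are still intact, one last-resort option - assuming the default filestore layout of this release, where each object is a plain file under the PG directories - is to mount the old OSD partition read-only and copy the object files off before wiping it. Device, mount point, and pool number here are placeholders:

```shell
# Mount the discarded OSD's data partition read-only (device is an example).
mount -o ro /dev/sdX1 /mnt/old-osd

# List the PG directories that belonged to pool 2.
find /mnt/old-osd/current -maxdepth 1 -type d -name '2.*_head'

# Copy one PG's raw object files somewhere safe for later salvage.
cp -a /mnt/old-osd/current/2.3d_head /backup/
```

This only recovers raw object files, not a usable cluster; reassembling RBD images or files from them is manual work.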




Kind Regards,
Georg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
