Re: Help! how to recover from total monitor failure in luminous

Thanks, I’m downloading it right now

 

-- 

Efficiency is Intelligent Laziness

From: "ceph.novice@xxxxxxxxxxxxxxxx" <ceph.novice@xxxxxxxxxxxxxxxx>
Date: Friday, February 2, 2018 at 12:37 PM
To: "ceph.novice@xxxxxxxxxxxxxxxx" <ceph.novice@xxxxxxxxxxxxxxxx>
Cc: Frank Li <frli@xxxxxxxxxxxxxxxxxxxx>, "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: Re: [ceph-users] Help! how to recover from total monitor failure in luminous

 

There, pick your "DISTRO", click on the "ID", then click "Repo URL"...
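
(A minimal sketch of what to do with that "Repo URL", assuming CentOS 7 mon hosts as in the build path of the crash log further down; the file name and placeholder below are examples, not something stated in the thread.)

    # On each mon: save the repo definition that shaman's "Repo URL" link serves,
    # then refresh yum metadata so the wip-22847-luminous build can be installed.
    curl -L -o /etc/yum.repos.d/wip-22847-luminous.repo "<Repo URL copied from shaman>"
    yum clean metadata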

 

Sent: Friday, 2 February 2018 at 21:34
From: ceph.novice@xxxxxxxxxxxxxxxx
To: "Frank Li" <frli@xxxxxxxxxxxxxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] Help! how to recover from total monitor failure in luminous

https://shaman.ceph.com/repos/ceph/wip-22847-luminous/f04a4a36f01fdd5d9276fa5cfa1940f5cc11fb81/

 

Sent: Friday, 2 February 2018 at 21:27
From: "Frank Li" <frli@xxxxxxxxxxxxxxxxxxxx>
To: "Sage Weil" <sage@xxxxxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] Help! how to recover from total monitor failure in luminous

Sure, please let me know where to get and run the binaries. Thanks for the fast response!

--
Efficiency is Intelligent Laziness
On 2/2/18, 10:31 AM, "Sage Weil" <sage@xxxxxxxxxxxx> wrote:

On Fri, 2 Feb 2018, Frank Li wrote:
> Yes, I was dealing with an issue where OSDs were not peering, and I was trying to see if force-create-pg could help recover the peering.
> Data loss is an accepted possibility.
>
> I hope this is what you are looking for?
>
> -3> 2018-01-31 22:47:22.942394 7fc641d0b700 5 mon.dl1-kaf101@0(electing) e6 _ms_dispatch setting monitor caps on this connection
> -2> 2018-01-31 22:47:22.942405 7fc641d0b700 5 mon.dl1-kaf101@0(electing).paxos(paxos recovering c 28110997..28111530) is_readable = 0 - now=2018-01-31 22:47:22.942405 lease_expire=0.000000 has v0 lc 28111530
> -1> 2018-01-31 22:47:22.942422 7fc641d0b700 5 mon.dl1-kaf101@0(electing).paxos(paxos recovering c 28110997..28111530) is_readable = 0 - now=2018-01-31 22:47:22.942422 lease_expire=0.000000 has v0 lc 28111530
> 0> 2018-01-31 22:47:22.955415 7fc64350e700 -1 /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.2/rpm/el7/BUILD/ceph-12.2.2/src/osd/OSDMapMapping.h: In function 'void OSDMapMapping::get(pg_t, std::vector<int>*, int*, std::vector<int>*, int*) const' thread 7fc64350e700 time 2018-01-31 22:47:22.952877
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.2/rpm/el7/BUILD/ceph-12.2.2/src/osd/OSDMapMapping.h: 288: FAILED assert(pgid.ps() < p->second.pg_num)
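
(For context: the assert above was hit while running the PG force-create operation mentioned at the top of the quote. On luminous the command looks roughly like this; <pgid> is a placeholder for the placement group that refuses to peer, and the operation discards whatever data that PG held.)

    # Recreate an unpeerable PG from scratch, accepting the loss of its contents.
    ceph osd force-create-pg <pgid>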

Perfect, thanks! I have a test fix for this pushed to wip-22847-luminous
which should appear on shaman.ceph.com in an hour or so; can you give that
a try? (Only need to install the updated package on the mons.)

Thanks!
sage
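
(A minimal sketch of the install step Sage describes above, assuming CentOS 7 mons with the wip-22847-luminous repo from shaman already configured; package and service names are the stock luminous ones and may differ in other setups.)

    # On each monitor host: pull the updated mon package and restart the daemon.
    # The mon id is normally the short hostname; adjust if yours differs.
    yum update ceph-mon
    systemctl restart ceph-mon@$(hostname -s)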



 

 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
