Re: mgr active s01 reboot

> On 22 February 2017 at 13:22, "Ammerlaan, A.J.G." <A.J.G.Ammerlaan@xxxxxxxxxxxxx> wrote:
> 
> 
> Hello All,
> 
> We are running in production now. We have configured our Ceph cluster according to the guide.
> When our active mgr, which is also monitor server s01, is shut down, our Ceph filesystems become unavailable.
> We have 3 monitor servers.
> 
> Can we do something about this, like a redundant mgr server?
> 

You only have one Metadata Server (MDS) running, and that is what is causing your issue here:

fsmap e64: 1/1/1 up {0=s01=up:active}

Try running multiple CephFS MDS daemons to make your filesystem highly available. A standby MDS will take over once the active one goes down.
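As a rough sketch, on a Jewel/Kraken-era cluster a standby MDS can be brought up manually on another host. The hostnames s02/s03 are taken from the monmap above; the directory layout, keyring caps, and systemd unit name follow the default packaging conventions, so adjust for your deployment:

```shell
# On s02 (repeat on s03 for an additional standby).
# Create the MDS data directory and a keyring for the new daemon.
mkdir -p /var/lib/ceph/mds/ceph-s02
ceph auth get-or-create mds.s02 \
    mon 'allow profile mds' \
    osd 'allow rwx' \
    mds 'allow' \
    -o /var/lib/ceph/mds/ceph-s02/keyring
chown -R ceph:ceph /var/lib/ceph/mds/ceph-s02

# Start the daemon and enable it on boot.
systemctl start ceph-mds@s02
systemctl enable ceph-mds@s02

# Verify: the fsmap should now show one active MDS plus standbys,
# e.g. "1/1/1 up {0=s01=up:active}, 1 up:standby".
ceph mds stat
```

If you deployed with ceph-deploy, `ceph-deploy mds create s02 s03` wraps the same steps. No filesystem-level configuration is needed for a plain standby; the new daemon registers itself and waits.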

P.S.: Removing all other lists. The ceph-users list would have been the right location for this e-mail.

Wido

> 
>  ceph --watch-error
>     cluster 62b32e0b-90e2-4285-87f9-8dd874d3ac8f
>      health HEALTH_WARN
>             147 pgs degraded
>             11 pgs recovering
>             136 pgs recovery_wait
>             147 pgs stuck degraded
>             147 pgs stuck unclean
>             recovery 56000/2009180 objects degraded (2.787%)
>      monmap e4: 3 mons at {s01=172.30.0.1:6789/0,s02=172.30.0.2:6789/0,s03=172.30.0.3:6789/0}
>             election epoch 74, quorum 0,1,2 s01,s02,s03
>       fsmap e64: 1/1/1 up {0=s01=up:active}
>         mgr active: s01 
>      osdmap e1972: 110 osds: 110 up, 110 in
>             flags sortbitwise,require_jewel_osds,require_kraken_osds
>       pgmap v5998490: 6370 pgs, 10 pools, 3591 GB data, 981 kobjects
>             7324 GB used, 342 TB / 349 TB avail
>             56000/2009180 objects degraded (2.787%)
>                 6223 active+clean
>                  136 active+recovery_wait+degraded
>                   11 active+recovering+degraded
> recovery io 204 MB/s, 95 keys/s, 59 objects/s
> 
> Regards, Arnoud.
> 
> ------------------------------------------------------------------------------
> 
> This message may contain confidential information and is intended exclusively
> for the addressee. If you receive this message unintentionally, please do not
> use the contents but notify the sender immediately by return e-mail. University
> Medical Center Utrecht is a legal person by public law and is registered at
> the Chamber of Commerce for Midden-Nederland under no. 30244197.
> 
> Please consider the environment before printing this e-mail.
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


