Re: [Urgent] Ceph system Down, Ceph FS volume in recovering


 



Do you have the option to stop/unmount the CephFS clients?

If so, do that and restart the MDS.
It should come back up cleanly.
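
For example, since your cluster is cephadm-managed, something along these lines
should do it (the mount point and daemon name below are placeholders; take the
real daemon name from "ceph orch ps"):

  # On each client host: unmount the CephFS mount (path is only an example)
  umount /mnt/cephfs

  # On the cluster: list the MDS daemons, then restart the one for your filesystem
  ceph orch ps --daemon-type mds
  ceph orch daemon restart mds.<fs>.<host>.<id>

  # Wait for the MDS to reach up:active again
  ceph fs status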

Then bring the clients back one by one and check (by monitoring the logs) that
the MDS does not crash.
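
For example (the fsid and daemon name are placeholders; get them from
"ceph fsid" and "ceph orch ps --daemon-type mds"):

  # Follow the MDS log on the host where the daemon runs
  journalctl -fu ceph-<fsid>@mds.<fs>.<host>.<id>.service

  # And keep an eye on the cluster log while the clients remount
  ceph -w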

________________________________________________________

Regards,

*David CASIER*




*Direct line: +33(0) 9 72 61 98 29*
________________________________________________________



On Sat, Feb 24, 2024 at 10:01, <nguyenvandiep@xxxxxxxxxxxxxx> wrote:

> Hi Mathew
>
> Please check my ceph -s output:
>
> ceph -s
>   cluster:
>     id:     258af72a-cff3-11eb-a261-d4f5ef25154c
>     health: HEALTH_WARN
>             3 failed cephadm daemon(s)
>             1 filesystem is degraded
>             insufficient standby MDS daemons available
>             1 nearfull osd(s)
>             Low space hindering backfill (add storage if this doesn't resolve itself): 21 pgs backfill_toofull
>             15 pool(s) nearfull
>             11 daemons have recently crashed
>
>   services:
>     mon:         6 daemons, quorum cephgw03,cephosd01,cephgw01,cephosd03,cephgw02,cephosd02 (age 30h)
>     mgr:         cephgw01.vwoffq(active, since 17h), standbys: cephgw02.nauphz, cephgw03.aipvii
>     mds:         1/1 daemons up
>     osd:         29 osds: 29 up (since 40h), 29 in (since 29h); 402 remapped pgs
>     rgw:         2 daemons active (2 hosts, 1 zones)
>     tcmu-runner: 18 daemons active (2 hosts)
>
>   data:
>     volumes: 0/1 healthy, 1 recovering
>     pools:   15 pools, 1457 pgs
>     objects: 36.87M objects, 25 TiB
>     usage:   75 TiB used, 41 TiB / 116 TiB avail
>     pgs:     17759672/110607480 objects misplaced (16.056%)
>              1055 active+clean
>              363  active+remapped+backfill_wait
>              18   active+remapped+backfilling
>              14   active+remapped+backfill_toofull
>              7
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



