Re: [Urgent] Ceph system Down, Ceph FS volume in recovering


 



What does the MDS log when it crashes?

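If the MDS is deployed via cephadm, you can usually pull its recent log on the host that runs it with something like the following (the daemon name is only a placeholder, take the real one from 'ceph orch ps'):

[root@cephgw01 /]# ceph orch ps | grep mds
[root@cephgw01 /]# cephadm logs --name mds.<fs-name>.<host>.<id>

Since "11 daemons have recently crashed" is also reported, 'ceph crash ls' and 'ceph crash info <crash-id>' should show the backtrace of the failed MDS as well.
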
Quoting nguyenvandiep@xxxxxxxxxxxxxx:

We have 6 nodes (3 OSD nodes and 3 service nodes). 2 of the 3 OSD nodes were powered off and we have a big problem.
Please check the ceph -s result below.
We cannot start the MDS service now (we tried to start it, but it stopped after about 2 minutes).
My application can no longer access the NFS-exported folder.

What should we do?

[root@cephgw01 /]# ceph -s
  cluster:
    id:     258af72a-cff3-11eb-a261-d4f5ef25154c
    health: HEALTH_WARN
            3 failed cephadm daemon(s)
            1 filesystem is degraded
            insufficient standby MDS daemons available
            1 nearfull osd(s)
            Low space hindering backfill (add storage if this doesn't resolve itself): 21 pgs backfill_toofull
            15 pool(s) nearfull
            11 daemons have recently crashed

  services:
    mon:         6 daemons, quorum cephgw03,cephosd01,cephgw01,cephosd03,cephgw02,cephosd02 (age 30h)
    mgr:         cephgw01.vwoffq(active, since 17h), standbys: cephgw02.nauphz, cephgw03.aipvii
    mds:         1/1 daemons up
    osd:         29 osds: 29 up (since 40h), 29 in (since 29h); 402 remapped pgs
    rgw:         2 daemons active (2 hosts, 1 zones)
    tcmu-runner: 18 daemons active (2 hosts)

  data:
    volumes: 0/1 healthy, 1 recovering
    pools:   15 pools, 1457 pgs
    objects: 36.87M objects, 25 TiB
    usage:   75 TiB used, 41 TiB / 116 TiB avail
    pgs:     17759672/110607480 objects misplaced (16.056%)
             1055 active+clean
             363  active+remapped+backfill_wait
             18   active+remapped+backfilling
             14   active+remapped+backfill_toofull
             7    active+remapped+backfill_wait+backfill_toofull

  io:
    client:   2.0 MiB/s rd, 395 KiB/s wr, 73 op/s rd, 19 op/s wr
    recovery: 32 MiB/s, 45 objects/s


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


