Re: Filesystem is degraded, offline, mds daemon damaged

Hi,

your Ceph filesystem is damaged (or do you have multiple?). Check the MDS logs to see why it is failing and share that information here. Also, please share the output of 'ceph fs status'.
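
In a containerized deployment like yours the MDS log is usually easiest to get via docker; a minimal sketch, assuming the container is named ceph-mds as in your 'docker ps' output below and that the ceph CLI is available on the host (otherwise prefix the ceph commands with e.g. 'docker exec ceph-mon'):

  # on the node running the damaged MDS
  docker logs --tail 200 ceph-mds

  # from any node with an admin keyring
  ceph health detail
  ceph fs status
  ceph fs dump | grep -i damaged   # shows which rank is marked damaged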

Regards,
Eugen

Quoting bpurvis@xxxxxxx:

I am really hoping you can help. Thanks in advance. I have inherited a Docker Swarm running Ceph, but I know very little about it. Currently I have an unhealthy Ceph environment that will not mount my data drive.
It's a cluster of 4 VM servers: docker01, docker02, docker03, docker-cloud.
CL has /data on a separate drive, which is currently failing to mount.
How can I recover this without losing the data?

On server docker-cloud, mount /data returns:
mount error 113 = No route to host
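
Mount error 113 is EHOSTUNREACH, i.e. the kernel client could not reach a monitor at all. A quick check from docker-cloud, as a sketch that assumes /data is a kernel CephFS mount defined in /etc/fstab and the monitors listen on the default v1 port 6789 (replace <mon-ip> with an address from the mount entry):

  grep -i ceph /etc/fstab    # which monitor addresses does the mount use?
  nc -zv <mon-ip> 6789       # is that monitor reachable from this host?
  dmesg | tail -n 20         # libceph/ceph messages from the failed mount attempt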

docker ps looks healthy on all nodes:
bc81d14dde92  ceph/daemon:latest-mimic  "/opt/ceph-container…"  2 years ago  Up 34 minutes  ceph-mds
d4fecec5e0e8  ceph/daemon:latest-mimic  "/opt/ceph-container…"  2 years ago  Up 34 minutes  ceph-osd
482ba41803af  ceph/daemon:latest-mimic  "/opt/ceph-container…"  2 years ago  Up 34 minutes  ceph-mgr
d6a5c44179c7  ceph/daemon:latest-mimic  "/opt/ceph-container…"  2 years ago  Up 32 minutes  ceph-mon

ceph -s:
  cluster:
    id:     7a5b2243-8e92-4e03-aee7-aa64cea666ec
    health: HEALTH_ERR
            1 filesystem is degraded
            1 filesystem is offline
            1 mds daemon damaged
            noout,noscrub,nodeep-scrub flag(s) set
            clock skew detected on mon.docker02, mon.docker03, mon.docker-cloud
            mons docker-cloud,docker01,docker02,docker03 are low on available space

  services:
    mon: 4 daemons, quorum docker01,docker02,docker03,docker-cloud
    mgr: docker01(active), standbys: docker02, docker03, docker-cloud
    mds: cephfs-0/1/1 up , 4 up:standby, 1 damaged
    osd: 4 osds: 4 up, 4 in
         flags noout,noscrub,nodeep-scrub

  data:
    pools:   2 pools, 256 pgs
    objects: 194.2 k objects, 241 GiB
    usage:   499 GiB used, 1.5 TiB / 2.0 TiB avail
    pgs:     256 active+clean
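
The two monitor warnings can be checked independently of the MDS issue; a rough sketch, assuming chrony handles time sync on the hosts and the mon stores are bind-mounted from /var/lib/ceph (adjust paths to your deployment):

  chronyc tracking        # or: timedatectl status -- time sync state on each mon host
  df -h /var/lib/ceph     # free space on the mon data partition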
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

