Re: disaster recovery Ceph Storage, urgent help needed

Hi,

On 10/23/20 2:22 PM, Gerhard W. Recher wrote:
This is a Proxmox cluster ...
sorry for the formatting problems in my post :(

Short version: we messed up an IP address change on the public network, so the
monitors went down.


*snipsnap*

So how do we recover from this disaster?

# ceph -s
   cluster:
     id:     92d063d7-647c-44b8-95d7-86057ee0ab22
     health: HEALTH_WARN
             1 daemons have recently crashed
             OSD count 0 < osd_pool_default_size 3

   services:
     mon: 3 daemons, quorum pve01,pve02,pve03 (age 19h)
     mgr: pve01(active, since 19h)
     osd: 0 osds: 0 up, 0 in

   data:
     pools:   0 pools, 0 pgs
     objects: 0 objects, 0 B
     usage:   0 B used, 0 B / 0 B avail
     pgs:

Are you sure that the existing mons have been restarted? If the mon database were still present, the status output should contain at least the pool and OSD information, but those numbers are all zero...
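
As a quick check (a rough sketch only; the store path assumes the default Proxmox/Ceph layout with a mon named pve01, and ceph-monstore-tool needs the mon stopped or a copy of the store):

# du -sh /var/lib/ceph/mon/ceph-pve01/store.db
# systemctl stop ceph-mon@pve01
# ceph-monstore-tool /var/lib/ceph/mon/ceph-pve01 get osdmap -- --out /tmp/osdmap
# osdmaptool --print /tmp/osdmap | head
# systemctl start ceph-mon@pve01

A freshly created mon store is only a few megabytes and its osdmap should list no pools or OSDs; if that is what you see, the mons are running from a new, empty database rather than the original one.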


Please check the local OSD logs for the actual reason why the restart failed.
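
For example, on each OSD node (osd.0 is just a placeholder; Proxmox logs to the journal and to /var/log/ceph/ by default):

# ceph-volume lvm list
# systemctl list-units 'ceph-osd@*'
# journalctl -u ceph-osd@0 --since "2 days ago"
# less /var/log/ceph/ceph-osd.0.log

ceph-volume lvm list should still show the OSDs that exist on the node even while their services are down, and the journal or the per-OSD log should show why the daemons failed to start or to reach the monitors.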


Regards,

Burkhard

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



