No MDS No FS after update and restart - respectfully request help to rebuild FS and maps

I'm running a 9-node Ceph cluster on Proxmox. Ceph was updated
automatically as part of a system update. When I rebooted the nodes I did
not set noout first, I just rebooted them - which might be the root cause
of all the rebalancing and lost connectivity...

I can see the monitors are active (running) with systemctl status
ceph-mon@<nodename>.service.

I can see my physical HDDs are still assigned as OSDs (1-14, or however
many I have).
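
If it matters, I believe the OSD-to-disk mapping can be double-checked on
each node with ceph-volume, assuming the OSDs were created through it (the
Proxmox default):

sudo ceph-volume lvm list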

ceph -s hangs, and pretty much all other ceph commands hang too. Using
systemctl I can stop the monitors and all the other services - I think...

sudo systemctl stop ceph\*.service ceph\*.target

seems to have stopped the services on all nodes... restarting does not help.
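
Since ceph -s needs monitor quorum before it will answer, I understand
each monitor can still be queried directly through its local admin socket.
Something like this, assuming <nodename> is the mon ID and the default
socket path:

sudo ceph daemon mon.<nodename> mon_status
# or, pointing at the socket explicitly:
sudo ceph --admin-daemon /var/run/ceph/ceph-mon.<nodename>.asok mon_status

The quorum and state fields in the output should at least show whether the
mons can see each other.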

/var/log/ceph is full of logs, as expected (I'm not sure where in them to
start looking for the problem).
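
My best guess at a starting point, assuming the default log file names:

# monitor log on each node - election/quorum messages should be here:
sudo tail -n 200 /var/log/ceph/ceph-mon.<nodename>.log
# quick scan for obvious failures:
sudo grep -iE 'error|fail|corrupt' /var/log/ceph/ceph-mon.<nodename>.log | tail -n 50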

The pools and the MDS do not show up anymore - maybe they got purged or
deleted...

I'm wondering if there is a way to undelete the maps and pool data from the
Debian host, or to use the Ceph tools to create a filesystem with the same
name, rebuild the maps, and not lose the data on the OSD stores...
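
The closest thing I have found in the docs is the "recovery using OSDs"
procedure, which rebuilds the monitor store from the OSDs themselves. I
have not run it yet, but my sketch of it looks like the following, assuming
the default OSD data paths and the Proxmox location for the admin keyring
(per the docs, the /tmp/mon-store directory also has to be carried from
node to node so it accumulates data from every OSD):

# with the ceph daemons stopped, on each OSD node:
for osd in /var/lib/ceph/osd/ceph-*; do
  sudo ceph-objectstore-tool --data-path "$osd" --no-mon-config \
       --op update-mon-db --mon-store-path /tmp/mon-store
done
# then, on one node, rebuild the mon store from the collected data:
sudo ceph-monstore-tool /tmp/mon-store rebuild -- --keyring /etc/pve/priv/ceph.client.admin.keyring

Does that look like the right direction for my situation?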

See the following thread, where I was posting screenshots and logs and
trying to sort it out but got nowhere... I've come to this list as a last
resort before I nuke it all and start over. I would like to save some VMs
I have on those OSDs.

https://forum.proxmox.com/threads/ceph-not-working-monitors-and-managers-lost.100672/page-2

I read on
https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/K5X6DPSUHMPZ3P7ADV64B4YLPQPWQS5J/

that it might be possible to just overwrite the name and let the system
"heal", but that seems risky and I am not exactly sure what command to use
to create a filesystem with the same name and set up the same metadata
info... Where should I look - in the Ceph logs, in Proxmox, or elsewhere -
to make sure I use the correct names, if this is an option? Or is there a
better-documented way to recover a lost filesystem and restore quorum?
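
If the pools still exist and only the filesystem entry is gone, my best
guess at what that post is describing is something like this - the pool
names here are only my guess at the Proxmox defaults, and I would confirm
mine first with ceph osd lspools once the mons answer:

ceph fs new cephfs cephfs_metadata cephfs_data --force --allow-dangerous-metadata-overlay
ceph fs reset cephfs --yes-i-really-mean-it

Is that the right incantation, or is there a safer route?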

This is a home-brew setup and I'm learning as I go, so please forgive the
lack of expertise here.

Respectfully,

William Henderson
316-518-9350
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


