Re: Problem with adopting 15.2.14 cluster with cephadm on CentOS 7

Hi,

it turns out that I was a bit confused. I had already upgraded my cluster to
v15/octopus and was incorrectly using the image for v14/nautilus, which of
course doesn't work, as downgrades are not supported. Please ignore my email.
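
For anyone who runs into the same thing: checking which release the
daemons are actually running and pinning a matching image would have
caught this early. Roughly like this (the image tag here is an
assumption matching my cluster version):

  # confirm which release the cluster daemons are actually running
  ceph versions

  # then adopt each daemon with an explicitly matching image, e.g.:
  cephadm --image docker.io/ceph/ceph:v15.2.14 adopt --style legacy \
      --name mon.osd-mirror-2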

Cheers,
Manuel


On Mon, Sep 27, 2021 at 11:52 AM Manuel Holtgrewe <zyklenfrei@xxxxxxxxx>
wrote:

> Hi,
>
> I have a 15.2.14 Ceph cluster running on an up-to-date CentOS 7 that I
> want to adopt into cephadm. I'm trying to follow this:
>
> https://docs.ceph.com/en/pacific/cephadm/adoption/
>
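> For reference, the monitor step from that page boils down to
> something like this (hostname taken from the log below):
>
>   cephadm adopt --style legacy --name mon.osd-mirror-2
>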
> However, I am failing to adopt the monitors. I've tried the process a
> couple of times, rolling everything back each time by reversing the file
> moves that cephadm does. The output of /var/log/ceph/cephadm.log is
> attached, and the log of the running monitor contains lines such as the
> following:
>
> 2021-09-27T09:38:44.230+0200 7f529197d700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1632728324231255, "job": 7642, "event": "table_file_deletion", "file_number": 769569}
> 2021-09-27T09:38:44.236+0200 7f529197d700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1632728324236907, "job": 7642, "event": "table_file_deletion", "file_number": 769567}
> 2021-09-27T09:38:44.251+0200 7f529197d700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1632728324252580, "job": 7642, "event": "table_file_deletion", "file_number": 769566}
> 2021-09-27T09:38:44.267+0200 7f529197d700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1632728324268420, "job": 7642, "event": "table_file_deletion", "file_number": 769565}
> 2021-09-27T09:38:44.280+0200 7f529217e700 -1 received  signal: Terminated from /usr/lib/systemd/systemd --switched-root --system --deserialize 22 (PID: 1) UID: 0
> 2021-09-27T09:38:44.280+0200 7f529217e700 -1 mon.osd-mirror-2@1(leader) e3 *** Got Signal Terminated ***
> 2021-09-27T09:38:44.280+0200 7f529217e700  1 mon.osd-mirror-2@1(leader) e3 shutdown
> 2021-09-27T09:38:44.283+0200 7f529197d700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1632728324284373, "job": 7642, "event": "table_file_deletion", "file_number": 769564}
> 2021-09-27T09:38:44.290+0200 7f52a218a300  1 rocksdb: close waiting for compaction thread to stop
> 2021-09-27T09:38:44.301+0200 7f529197d700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1632728324301794, "job": 7642, "event": "table_file_deletion", "file_number": 769563}
> 2021-09-27T09:38:44.301+0200 7f52a218a300  1 rocksdb: close compaction thread to stopped
> 2021-09-27T09:38:44.303+0200 7f52a218a300  4 rocksdb: [db/db_impl.cc:390] Shutdown: canceling all background work
> 2021-09-27T09:38:44.305+0200 7f52a218a300  4 rocksdb: [db/db_impl.cc:563] Shutdown complete
>
> Would anyone have an idea how to proceed here?
>
> Thanks,
> Manuel
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


