Re: Error ENOENT: Module not found

Hi, your messages are a bit confusing; they mix up different things.
If you still have 5 hosts and the same EC profile as last week, you won't be able to drain a host, since there is nowhere to recover to. You need all 5 hosts to be able to fully recover from a failing host (or from trying to drain one). What you pasted here is only config history; it doesn't reflect the current state. Regarding the MONs, you'll need to provide more details about what the current state is (ceph orch ls mon) and what you expect it to be.
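For reference, something like the following should show both things; the profile name is a placeholder for whatever your pools actually use:

# ceph osd erasure-code-profile ls
# ceph osd erasure-code-profile get <profile-name>
# ceph orch ls mon --export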

Quoting Devender Singh <devender@xxxxxxxxxx>:

It got resolved, but why am I still seeing the removed host?
Also, when running the MONs as unmanaged it shows 5, even though I have removed the 3rd host. And why are the entries below still there?


# ceph config-key ls |grep -i 03n |awk -F"," '{print $1}'
    "config-history/135/+osd/host:node03/osd_memory_target"
    "config-history/14990/+osd/host:node03/osd_memory_target"
    "config-history/14990/-osd/host:node03/osd_memory_target"
    "config-history/15003/+osd/host:node03/osd_memory_target"
    "config-history/15003/-osd/host:node03/osd_memory_target"
    "config-history/15016/+osd/host:node03/osd_memory_target"
    "config-history/15016/-osd/host:node03/osd_memory_target"
    "config-history/15017/+osd/host:node03/osd_memory_target"
    "config-history/15017/-osd/host:node03/osd_memory_target"
    "config-history/15022/+osd/host:node03/osd_memory_target"
    "config-history/15022/-osd/host:node03/osd_memory_target"
    "config-history/15024/+osd/host:node03/osd_memory_target"
    "config-history/15024/-osd/host:node03/osd_memory_target"
    "config-history/15025/+osd/host:node03/osd_memory_target"
    "config-history/15025/-osd/host:node03/osd_memory_target"
    "config-history/15092/+osd/host:node03/osd_memory_target"
    "config-history/15093/+osd/host:node03/osd_memory_target"
    "config-history/15093/-osd/host:node03/osd_memory_target"
    "config-history/15094/+osd/host:node03/osd_memory_target"
    "config-history/15094/-osd/host:node03/osd_memory_target"
    "config-history/15095/+osd/host:node03/osd_memory_target"
    "config-history/15095/-osd/host:node03/osd_memory_target"
    "config-history/15096/+osd/host:node03/osd_memory_target"
    "config-history/15096/-osd/host:node03/osd_memory_target"
    "config-history/15098/-osd/host:node03/osd_memory_target"
    "config-history/15099/+osd/host:node03/osd_memory_target"
    "config-history/15100/+osd/host:node03/osd_memory_target"
    "config-history/15100/-osd/host:node03/osd_memory_target"
    "config-history/15108/+osd/host:node03/osd_memory_target"
    "config-history/15108/-osd/host:node03/osd_memory_target"
    "config-history/15125/+osd/host:node03/osd_memory_target"
    "config-history/15125/-osd/host:node03/osd_memory_target"
    "config-history/15126/+osd/host:node03/osd_memory_target"
    "config-history/15126/-osd/host:node03/osd_memory_target"
    "config-history/15127/+osd/host:node03/osd_memory_target"
    "config-history/15127/-osd/host:node03/osd_memory_target"
    "config-history/15128/+osd/host:node03/osd_memory_target"
    "config-history/15128/-osd/host:node03/osd_memory_target"
    "config-history/15129/+osd/host:node03/osd_memory_target"
    "config-history/15129/-osd/host:node03/osd_memory_target"
    "config-history/15130/+osd/host:node03/osd_memory_target"
    "config-history/15130/-osd/host:node03/osd_memory_target"
    "config-history/15131/+osd/host:node03/osd_memory_target"
    "config-history/15131/-osd/host:node03/osd_memory_target"
    "config-history/15132/+osd/host:node03/osd_memory_target"
    "config-history/15132/-osd/host:node03/osd_memory_target"
    "config-history/15133/+osd/host:node03/osd_memory_target"
    "config-history/15133/-osd/host:node03/osd_memory_target"
    "config-history/15134/-osd/host:node03/osd_memory_target"
    "config-history/153/+osd/host:node03/osd_memory_target"
    "config-history/153/-osd/host:node03/osd_memory_target"
    "config-history/176/+client.crash.node03/container_image"
    "config-history/182/-client.crash.node03/container_image"
    "config-history/4276/+osd/host:node03/osd_memory_target"
    "config-history/4276/-osd/host:node03/osd_memory_target"
    "config-history/433/+client.ceph-exporter.node03/container_image"
    "config-history/439/-client.ceph-exporter.node03/container_image"
    "config-history/459/+osd/host:node03/osd_memory_target"
    "config-history/459/-osd/host:node03/osd_memory_target"
    "config-history/465/+osd/host:node03/osd_memory_target"
    "config-history/465/-osd/host:node03/osd_memory_target"
    "config-history/4867/+osd/host:node03/osd_memory_target"
    "config-history/4867/-osd/host:node03/osd_memory_target"
    "config-history/4889/+client.crash.node03/container_image"
    "config-history/4895/-client.crash.node03/container_image"
    "config-history/5139/+mds.k8s-dev-cephfs.node03.iebxqn/container_image"
    "config-history/5142/-mds.k8s-dev-cephfs.node03.iebxqn/container_image"
    "config-history/5150/+client.ceph-exporter.node03/container_image"
    "config-history/5156/-client.ceph-exporter.node03/container_image"
    "config-history/5179/+osd/host:node03/osd_memory_target"
    "config-history/5179/-osd/host:node03/osd_memory_target"
    "config-history/5183/+client.rgw.sea-dev.node03.betyqd/rgw_frontends"
    "config-history/5189/+osd/host:node03/osd_memory_target"
    "config-history/5189/-osd/host:node03/osd_memory_target"
    "config-history/6929/-client.rgw.sea-dev.node03.betyqd/rgw_frontends"
    "config-history/6933/+osd/host:node03/osd_memory_target"
    "config-history/6933/-osd/host:node03/osd_memory_target"
    "config-history/9710/+osd/host:node03/osd_memory_target"
    "config-history/9710/-osd/host:node03/osd_memory_target”


Regards
Dev

On Jan 31, 2025, at 10:55 PM, Devender Singh <devender@xxxxxxxxxx> wrote:

Hello

I need some help.

I tried draining a host, but it got stuck and now the orchestrator is not running. The cluster health is OK, and I have also added the host back.
# ceph health detail
HEALTH_OK

I tried setting a blank queue file (ceph config-key set mgr/cephadm/osd_remove_queue -i osd_remove_queue_blank.json) followed by a mgr failover, but it did not work.
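To be explicit, the blank queue file was just an empty JSON list (assuming that is what cephadm expects for an empty removal queue), and the sequence was:

# echo '[]' > osd_remove_queue_blank.json
# ceph config-key set mgr/cephadm/osd_remove_queue -i osd_remove_queue_blank.json
# ceph mgr fail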

# ceph config-key get mgr/cephadm/osd_remove_queue
[{"osd_id": 27, "started": true, "draining": false, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": null, "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:26.100700Z"}, {"osd_id": 31, "started": true, "draining": false, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": null, "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:25.790403Z"}, {"osd_id": 35, "started": true, "draining": false, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": null, "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:25.857252Z"}, {"osd_id": 38, "started": true, "draining": false, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": null, "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:26.192332Z"}, {"osd_id": 42, "started": true, "draining": false, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": null, "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:26.236171Z"}, {"osd_id": 44, "started": true, "draining": false, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": null, "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:26.000889Z"}, {"osd_id": 49, "started": true, "draining": false, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": null, "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:26.027052Z"}, {"osd_id": 54, "started": true, "draining": false, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": null, "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:25.974898Z"}, {"osd_id": 58, "started": true, "draining": false, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": null, "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:25.834649Z"}, {"osd_id": 62, "started": true, "draining": false, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": null, "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:25.812439Z"}, {"osd_id": 66, "started": true, "draining": false, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": null, "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:25.902356Z"}, {"osd_id": 70, "started": true, "draining": false, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": null, "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:25.726199Z"}, {"osd_id": 74, "started": true, "draining": false, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": null, "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:25.881018Z"}, {"osd_id": 78, "started": true, "draining": true, "stopped": false, 
"replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": "2025-02-01T00:05:02.498519Z", "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:26.078261Z"}, {"osd_id": 82, "started": true, "draining": true, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": "2025-02-01T00:03:48.916299Z", "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:25.948462Z"}, {"osd_id": 86, "started": true, "draining": true, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": "2025-02-01T00:02:28.907365Z", "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:26.148633Z"}, {"osd_id": 89, "started": true, "draining": true, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": "2025-02-01T00:01:15.727422Z", "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:26.117332Z"}, {"osd_id": 94, "started": true, "draining": true, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": "2025-01-31T23:59:54.173088Z", "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:25.768943Z"}, {"osd_id": 98, "started": true, "draining": true, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": "2025-01-31T23:57:24.192595Z", "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:26.053079Z"}, {"osd_id": 102, "started": true, "draining": true, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": "2025-01-31T23:57:25.201520Z", "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:25.927255Z"}, {"osd_id": 106, "started": true, "draining": true, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", ""drain_started_at": "2025-01-31T23:57:09.422738Z", "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:26.171455Z"}, {"osd_id": 111, "started": true, "draining": true, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": "2025-01-31T23:56:33.636189Z", "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:25.748019Z"}, {"osd_id": 115, "started": true, "draining": true, "stopped": false, "replace": false, "force": false, "zap": false, "hostname": "node02", "drain_started_at": "2025-01-31T23:56:34.674664Z", "drain_stopped_at": null, "drain_done_at": null, "process_started_at": "2025-01-31T23:54:26.213363Z”}]



Regards
Dev




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





