Re: OSDs not starting up <SOLVED>


 



Hello All,

I have managed to identify and fix this issue. For anyone else encountering it: 
the OSDs were trying to communicate with old monitors rather than the ones 
currently running in the cluster.

On each host, find the OSD configuration directory ('/var/lib/ceph/<fsid>/osd.0/') 
and edit the file named 'config'. Insert the correct IP addresses of the 
monitors, then restart the host.
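As a rough illustration of that edit, the following sketch rewrites the
mon_host line of a per-OSD config. Everything here is a placeholder, not
values from this thread: the fsid, the addresses, and the file layout are
assumptions, and the script works on a scratch copy so it can run anywhere
(on a real host the file would live under /var/lib/ceph/<fsid>/osd.0/).

```shell
# Hypothetical sketch: point an OSD's local 'config' file at the monitors
# that are actually running. All paths and addresses are placeholders.
set -eu

# Scratch copy so the sketch is runnable outside a cluster.
workdir=$(mktemp -d)
cfg="$workdir/config"

# Mock of a per-OSD config still listing a decommissioned monitor.
cat > "$cfg" <<'EOF'
[global]
fsid = 00000000-0000-0000-0000-000000000000
mon_host = [v2:10.0.0.1:3300/0,v1:10.0.0.1:6789/0]
EOF

# Replace the stale monitor list with the current one (placeholder address).
new_mons='[v2:10.0.0.11:3300/0,v1:10.0.0.11:6789/0]'
sed -i "s|^mon_host = .*|mon_host = $new_mons|" "$cfg"

grep '^mon_host' "$cfg"
```

On a real cluster the current monitor addresses would be taken from the
running monitor map before editing, and the host (or just the OSD service)
restarted afterwards.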

This fixed it for me. I was surprised that the configuration files had not been 
updated with the running configuration.
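For completeness, here is one way the correct monitor list could be derived
from monitor-map output rather than typed by hand. This is a sketch: the
dump text below is mocked for illustration (loosely in the shape of
`ceph mon dump` output), and the addresses are invented.

```shell
# Hypothetical sketch: build a mon_host value from 'ceph mon dump'-style
# output. The dump text below is a mock, not real cluster output.
set -eu

dump='0: [v2:10.0.0.11:3300/0,v1:10.0.0.11:6789/0] mon.cephnode01
1: [v2:10.0.0.12:3300/0,v1:10.0.0.12:6789/0] mon.cephnode02'

# Pull the bracketed address list for each monitor and join with commas.
mon_host=$(printf '%s\n' "$dump" \
  | sed -n 's/^[0-9]*: \(\[[^]]*\]\).*/\1/p' \
  | paste -sd, -)
echo "mon_host = $mon_host"
```

The resulting line could then be pasted into each OSD's local config file.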

Anyway, thanks for all your help and suggestions.

Regards,

Stephen.



On Saturday, 13 November 2021 15:54:18 GMT Stephen J. Thompson wrote:
> Hello and thanks for that. I am now able to retrieve logs.
> 
> This is for a different OSD, one I have not been experimenting with while
> trying to track down the issue, but it has the same problems.
> 
> -- Boot 697340cb8e134020bd4eed4a49351bc8 --
> Nov 12 15:00:44 cephnode03 systemd[1]: Starting Ceph osd.8 for 
> d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c...
> Nov 12 15:00:46 cephnode03 podman[1168]: 2021-11-12 15:00:46.418960924 +0000
> 
 GMT m=+0.338572668 container create
> 5cff11c4c70c0776658905a187c0f34ad8a8b88d399abf6f773d3c04994b58c6 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:00:46 cephnode03 podman[1168]: 2021-11-12 15:00:46.641658452 +0000
> 
 GMT m=+0.561270185 container init
> 5cff11c4c70c0776658905a187c0f34ad8a8b88d399abf6f773d3c04994b58c6 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:00:46 cephnode03 podman[1168]: 2021-11-12 15:00:46.716169028 +0000
> 
 GMT m=+0.635780791 container start
> 5cff11c4c70c0776658905a187c0f34ad8a8b88d399abf6f773d3c04994b58c6 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a7>
> Nov 12 15:00:46 cephnode03 podman[1168]: 2021-11-12 15:00:46.716712108 +0000
> 
 GMT m=+0.636323848 container attach
> 5cff11c4c70c0776658905a187c0f34ad8a8b88d399abf6f773d3c04994b58c6 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:02:44 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: start operation timed out. Terminating.
> Nov 12 15:02:45 cephnode03 podman[1854]: 2021-11-12 15:02:45.295203029 +0000
> 
 GMT m=+0.260481391 container create
> 3849fa85cedad0de442eaa41012ba0d6f7821056b66388a277b433f45134d0a4 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:02:45 cephnode03 podman[1854]: 2021-11-12 15:02:45.397836033 +0000
> 
 GMT m=+0.363114423 container init
> 3849fa85cedad0de442eaa41012ba0d6f7821056b66388a277b433f45134d0a4 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:02:45 cephnode03 podman[1854]: 2021-11-12 15:02:45.410880539 +0000
> 
 GMT m=+0.376158923 container start
> 3849fa85cedad0de442eaa41012ba0d6f7821056b66388a277b433f45134d0a4 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a7>
> Nov 12 15:02:45 cephnode03 podman[1854]: 2021-11-12 15:02:45.411070883 +0000
> 
 GMT m=+0.376349235 container attach
> 3849fa85cedad0de442eaa41012ba0d6f7821056b66388a277b433f45134d0a4 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:02:46 cephnode03 podman[1854]: 2021-11-12 15:02:46.317833738 +0000
> 
 GMT m=+1.283112131 container died
> 3849fa85cedad0de442eaa41012ba0d6f7821056b66388a277b433f45134d0a4 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:02:46 cephnode03 podman[1854]: 2021-11-12 15:02:46.400485223 +0000
> 
 GMT m=+1.365763636 container remove
> 3849fa85cedad0de442eaa41012ba0d6f7821056b66388a277b433f45134d0a4 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:02:46 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Failed with result 'timeout'.
> Nov 12 15:02:46 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 574 (bash) remains running
> after 
 unit stopped.
> Nov 12 15:02:46 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 1168 (podman) remains running
> 
 after unit stopped.
> Nov 12 15:02:46 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 1377 (conmon) remains running
> 
 after unit stopped.
> Nov 12 15:02:46 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 1916 (conmon) remains running
> 
 after unit stopped.
> Nov 12 15:02:46 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 1998 (podman) remains running
> 
 after unit stopped.
> Nov 12 15:02:46 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 2023 (podman) remains running
> 
 after unit stopped.
> Nov 12 15:02:46 cephnode03 systemd[1]: Failed to start Ceph osd.8 for 
> d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c.
> Nov 12 15:02:46 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Consumed 1.068s CPU time.
> Nov 12 15:02:56 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Scheduled restart job, restart counter is
> at 
 1.
> Nov 12 15:02:56 cephnode03 systemd[1]: Stopped Ceph osd.8 for 
> d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c.
> Nov 12 15:02:56 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Consumed 1.082s CPU time.
> Nov 12 15:02:56 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 574 (bash) in
> control 
 group while starting unit. Ignoring.
> Nov 12 15:02:56 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:02:56 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 1168 (podman) in 
> control group while starting unit. Ignoring.
> Nov 12 15:02:56 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:02:56 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 1377 (conmon) in 
> control group while starting unit. Ignoring.
> Nov 12 15:02:56 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:02:56 cephnode03 systemd[1]: Starting Ceph osd.8 for 
> d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c...
> Nov 12 15:02:56 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 574 (bash) in
> control 
 group while starting unit. Ignoring.
> Nov 12 15:02:56 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:02:56 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 1168 (podman) in 
> control group while starting unit. Ignoring.
> Nov 12 15:02:56 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:02:56 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 1377 (conmon) in 
> control group while starting unit. Ignoring.
> Nov 12 15:02:56 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:02:56 cephnode03 podman[2171]: 2021-11-12 15:02:56.897495225 +0000
> 
 GMT m=+0.271553564 container stop
> 5cff11c4c70c0776658905a187c0f34ad8a8b88d399abf6f773d3c04994b58c6 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:02:56 cephnode03 podman[2171]: 2021-11-12 15:02:56.953451318 +0000
> 
 GMT m=+0.327509662 container died
> 5cff11c4c70c0776658905a187c0f34ad8a8b88d399abf6f773d3c04994b58c6 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:02:57 cephnode03 podman[2171]: 2021-11-12 15:02:57.080927142 +0000
> 
 GMT m=+0.454985464 container remove
> 5cff11c4c70c0776658905a187c0f34ad8a8b88d399abf6f773d3c04994b58c6 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:02:57 cephnode03 bash[2171]: 
> 5cff11c4c70c0776658905a187c0f34ad8a8b88d399abf6f773d3c04994b58c6
> Nov 12 15:02:57 cephnode03 podman[2451]: 2021-11-12 15:02:57.938588908 +0000
> 
 GMT m=+0.497876563 container create
> 67775626001a63cb6127f670e8c6e0e98d368bac87b037a606008bde9ebd500c 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:02:58 cephnode03 podman[2451]: 2021-11-12 15:02:58.32025349 +0000
> 
 GMT m=+0.879541154 container init
> 67775626001a63cb6127f670e8c6e0e98d368bac87b037a606008bde9ebd500c 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a785>
> Nov 12 15:02:58 cephnode03 podman[2451]: 2021-11-12 15:02:58.358130824 +0000
> 
 GMT m=+0.917418492 container start
> 67775626001a63cb6127f670e8c6e0e98d368bac87b037a606008bde9ebd500c 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a7>
> Nov 12 15:02:58 cephnode03 podman[2451]: 2021-11-12 15:02:58.358610898 +0000
> 
 GMT m=+0.917898568 container attach
> 67775626001a63cb6127f670e8c6e0e98d368bac87b037a606008bde9ebd500c 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:04:56 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: start operation timed out. Terminating.
> Nov 12 15:04:57 cephnode03 podman[2880]: 2021-11-12 15:04:57.172277378 +0000
> 
 GMT m=+0.192406841 container create
> 52129775b66c9414d5c237a3edf4a45b1d6594899b7439a65a785d93f41bf056 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:04:57 cephnode03 podman[2880]: 2021-11-12 15:04:57.282396807 +0000
> 
 GMT m=+0.302526273 container init
> 52129775b66c9414d5c237a3edf4a45b1d6594899b7439a65a785d93f41bf056 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:04:57 cephnode03 podman[2880]: 2021-11-12 15:04:57.307421739 +0000
> 
 GMT m=+0.327551171 container start
> 52129775b66c9414d5c237a3edf4a45b1d6594899b7439a65a785d93f41bf056 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a7>
> Nov 12 15:04:57 cephnode03 podman[2880]: 2021-11-12 15:04:57.307885433 +0000
> 
 GMT m=+0.328014865 container attach
> 52129775b66c9414d5c237a3edf4a45b1d6594899b7439a65a785d93f41bf056 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:04:58 cephnode03 podman[2880]: 2021-11-12 15:04:58.12821031 +0000
> 
 GMT m=+1.148339771 container died
> 52129775b66c9414d5c237a3edf4a45b1d6594899b7439a65a785d93f41bf056 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a785>
> Nov 12 15:04:58 cephnode03 podman[2880]: 2021-11-12 15:04:58.25156536 +0000
> 
 GMT m=+1.271694782 container remove
> 52129775b66c9414d5c237a3edf4a45b1d6594899b7439a65a785d93f41bf056 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a7>
> Nov 12 15:04:58 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Failed with result 'timeout'.
> Nov 12 15:04:58 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 2094 (bash) remains running 
> after unit stopped.
> Nov 12 15:04:58 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 2451 (podman) remains running
> 
 after unit stopped.
> Nov 12 15:04:58 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 2528 (conmon) remains running
> 
 after unit stopped.
> Nov 12 15:04:58 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 2983 (conmon) remains running
> 
 after unit stopped.
> Nov 12 15:04:58 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 3020 (podman) remains running
> 
 after unit stopped.
> Nov 12 15:04:58 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 3061 (podman) remains running
> 
 after unit stopped.
> Nov 12 15:04:58 cephnode03 systemd[1]: Failed to start Ceph osd.8 for 
> d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c.
> Nov 12 15:04:58 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Consumed 1.723s CPU time.
> Nov 12 15:05:08 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Scheduled restart job, restart counter is
> at 
 2.
> Nov 12 15:05:08 cephnode03 systemd[1]: Stopped Ceph osd.8 for 
> d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c.
> Nov 12 15:05:08 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Consumed 1.755s CPU time.
> Nov 12 15:05:08 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 2094 (bash) in 
> control group while starting unit. Ignoring.
> Nov 12 15:05:08 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:05:08 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 2451 (podman) in 
> control group while starting unit. Ignoring.
> Nov 12 15:05:08 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:05:08 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 2528 (conmon) in 
> control group while starting unit. Ignoring.
> Nov 12 15:05:08 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:05:08 cephnode03 systemd[1]: Starting Ceph osd.8 for 
> d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c...
> Nov 12 15:05:08 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 2094 (bash) in 
> control group while starting unit. Ignoring.
> Nov 12 15:05:08 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:05:08 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 2451 (podman) in 
> control group while starting unit. Ignoring.
> Nov 12 15:05:08 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:05:08 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 2528 (conmon) in 
> control group while starting unit. Ignoring.
> Nov 12 15:05:08 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:05:08 cephnode03 podman[3170]: 2021-11-12 15:05:08.754453546 +0000
> 
 GMT m=+0.299183916 container stop
> 67775626001a63cb6127f670e8c6e0e98d368bac87b037a606008bde9ebd500c 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:05:08 cephnode03 podman[3170]: 2021-11-12 15:05:08.783446549 +0000
> 
 GMT m=+0.328176890 container died
> 67775626001a63cb6127f670e8c6e0e98d368bac87b037a606008bde9ebd500c 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:05:08 cephnode03 podman[3170]: 2021-11-12 15:05:08.851135864 +0000
> 
 GMT m=+0.395866217 container remove
> 67775626001a63cb6127f670e8c6e0e98d368bac87b037a606008bde9ebd500c 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:05:08 cephnode03 bash[3170]: 
> 67775626001a63cb6127f670e8c6e0e98d368bac87b037a606008bde9ebd500c
> Nov 12 15:05:09 cephnode03 podman[3479]: 2021-11-12 15:05:09.80276864 +0000
> 
 GMT m=+0.386506470 container create
> 6dd198dfeeb73916004ce854540b26477c8f8f35f61b98c2fdbabc09541a8553 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a7>
> Nov 12 15:05:10 cephnode03 podman[3479]: 2021-11-12 15:05:10.039849134 +0000
> 
 GMT m=+0.623586958 container init
> 6dd198dfeeb73916004ce854540b26477c8f8f35f61b98c2fdbabc09541a8553 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:05:10 cephnode03 podman[3479]: 2021-11-12 15:05:10.16202613 +0000
> 
 GMT m=+0.745763967 container start
> 6dd198dfeeb73916004ce854540b26477c8f8f35f61b98c2fdbabc09541a8553 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:05:10 cephnode03 podman[3479]: 2021-11-12 15:05:10.162524955 +0000
> 
 GMT m=+0.746262780 container attach
> 6dd198dfeeb73916004ce854540b26477c8f8f35f61b98c2fdbabc09541a8553 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:07:08 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: start operation timed out. Terminating.
> Nov 12 15:07:09 cephnode03 podman[3943]: 2021-11-12 15:07:09.144930971 +0000
> 
 GMT m=+0.190925929 container create
> 043e7d63f534a50503de2ba07c81f0f803d10dbdba4c8b5447789e2f31326a23 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:07:09 cephnode03 podman[3943]: 2021-11-12 15:07:09.254707415 +0000
> 
 GMT m=+0.300702370 container init
> 043e7d63f534a50503de2ba07c81f0f803d10dbdba4c8b5447789e2f31326a23 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:07:09 cephnode03 podman[3943]: 2021-11-12 15:07:09.295296356 +0000
> 
 GMT m=+0.341291325 container start
> 043e7d63f534a50503de2ba07c81f0f803d10dbdba4c8b5447789e2f31326a23 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a7>
> Nov 12 15:07:09 cephnode03 podman[3943]: 2021-11-12 15:07:09.295479945 +0000
> 
 GMT m=+0.341474869 container attach
> 043e7d63f534a50503de2ba07c81f0f803d10dbdba4c8b5447789e2f31326a23 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:07:10 cephnode03 podman[3943]: 2021-11-12 15:07:10.105620642 +0000
> 
 GMT m=+1.151615597 container died
> 043e7d63f534a50503de2ba07c81f0f803d10dbdba4c8b5447789e2f31326a23 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:07:10 cephnode03 podman[3943]: 2021-11-12 15:07:10.329322681 +0000
> 
 GMT m=+1.375317649 container remove
> 043e7d63f534a50503de2ba07c81f0f803d10dbdba4c8b5447789e2f31326a23 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:07:10 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Failed with result 'timeout'.
> Nov 12 15:07:10 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 3111 (bash) remains running 
> after unit stopped.
> Nov 12 15:07:10 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 3479 (podman) remains running
> 
 after unit stopped.
> Nov 12 15:07:10 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 3581 (conmon) remains running
> 
 after unit stopped.
> Nov 12 15:07:10 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 4041 (conmon) remains running
> 
 after unit stopped.
> Nov 12 15:07:10 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 4084 (podman) remains running
> 
 after unit stopped.
> Nov 12 15:07:10 cephnode03 systemd[1]: Failed to start Ceph osd.8 for 
> d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c.
> Nov 12 15:07:10 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Consumed 1.540s CPU time.
> Nov 12 15:07:20 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Scheduled restart job, restart counter is
> at 
 3.
> Nov 12 15:07:20 cephnode03 systemd[1]: Stopped Ceph osd.8 for 
> d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c.
> Nov 12 15:07:20 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Consumed 1.572s CPU time.
> Nov 12 15:07:20 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 3111 (bash) in 
> control group while starting unit. Ignoring.
> Nov 12 15:07:20 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:07:20 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 3479 (podman) in 
> control group while starting unit. Ignoring.
> Nov 12 15:07:20 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:07:20 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 3581 (conmon) in 
> control group while starting unit. Ignoring.
> Nov 12 15:07:20 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:07:20 cephnode03 systemd[1]: Starting Ceph osd.8 for 
> d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c...
> Nov 12 15:07:20 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 3111 (bash) in 
> control group while starting unit. Ignoring.
> Nov 12 15:07:20 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:07:20 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 3479 (podman) in 
> control group while starting unit. Ignoring.
> Nov 12 15:07:20 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:07:20 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 3581 (conmon) in 
> control group while starting unit. Ignoring.
> Nov 12 15:07:20 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:07:20 cephnode03 podman[4312]: 2021-11-12 15:07:20.935350131 +0000
> 
 GMT m=+0.151259617 container remove
> 6dd198dfeeb73916004ce854540b26477c8f8f35f61b98c2fdbabc09541a8553 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:07:21 cephnode03 podman[4504]: 2021-11-12 15:07:21.813813737 +0000
> 
 GMT m=+0.344842861 container create
> 3ccc8458beb0820484c501621bec99011c5af24744fdfebd47b11f342a47fb72 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:07:21 cephnode03 bash[3479]: time="2021-11-12T15:07:21Z"
> level=error 
 msg="Cannot get exit code: died not found: unable to find
> event"
> Nov 12 15:07:21 cephnode03 podman[4504]: 2021-11-12 15:07:21.957863052 +0000
> 
 GMT m=+0.488892169 container init
> 3ccc8458beb0820484c501621bec99011c5af24744fdfebd47b11f342a47fb72 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:07:21 cephnode03 podman[4504]: 2021-11-12 15:07:21.986447108 +0000
> 
 GMT m=+0.517476234 container start
> 3ccc8458beb0820484c501621bec99011c5af24744fdfebd47b11f342a47fb72 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a7>
> Nov 12 15:07:21 cephnode03 podman[4504]: 2021-11-12 15:07:21.986574131 +0000
> 
 GMT m=+0.517603271 container attach
> 3ccc8458beb0820484c501621bec99011c5af24744fdfebd47b11f342a47fb72 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:09:20 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: start operation timed out. Terminating.
> Nov 12 15:09:21 cephnode03 podman[5007]: 2021-11-12 15:09:21.481588083 +0000
> 
 GMT m=+0.222154705 container create
> 9ec0bc0e740fd527861edec9b831e3b1bcb5318c8b12530c2d840fd0b2d0ac6b 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:09:21 cephnode03 podman[5007]: 2021-11-12 15:09:21.657495698 +0000
> 
 GMT m=+0.398062308 container init
> 9ec0bc0e740fd527861edec9b831e3b1bcb5318c8b12530c2d840fd0b2d0ac6b 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:09:21 cephnode03 podman[5007]: 2021-11-12 15:09:21.669402917 +0000
> 
 GMT m=+0.409969527 container start
> 9ec0bc0e740fd527861edec9b831e3b1bcb5318c8b12530c2d840fd0b2d0ac6b 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a7>
> Nov 12 15:09:21 cephnode03 podman[5007]: 2021-11-12 15:09:21.669590773 +0000
> 
 GMT m=+0.410157368 container attach
> 9ec0bc0e740fd527861edec9b831e3b1bcb5318c8b12530c2d840fd0b2d0ac6b 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:09:22 cephnode03 podman[5007]: 2021-11-12 15:09:22.532873483 +0000
> 
 GMT m=+1.273440122 container died
> 9ec0bc0e740fd527861edec9b831e3b1bcb5318c8b12530c2d840fd0b2d0ac6b 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:09:22 cephnode03 podman[5007]: 2021-11-12 15:09:22.664820144 +0000
> 
 GMT m=+1.405386768 container remove
> 9ec0bc0e740fd527861edec9b831e3b1bcb5318c8b12530c2d840fd0b2d0ac6b 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a>
> Nov 12 15:09:22 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Failed with result 'timeout'.
> Nov 12 15:09:22 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 4173 (bash) remains running 
> after unit stopped.
> Nov 12 15:09:22 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 4504 (podman) remains running
> 
 after unit stopped.
> Nov 12 15:09:22 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Unit process 4622 (conmon) remains running
> 
 after unit stopped.
> Nov 12 15:09:22 cephnode03 systemd[1]: Failed to start Ceph osd.8 for 
> d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c.
> Nov 12 15:09:22 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Consumed 1.560s CPU time.
> Nov 12 15:09:32 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Scheduled restart job, restart counter is
> at 
 4.
> Nov 12 15:09:32 cephnode03 systemd[1]: Stopped Ceph osd.8 for 
> d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c.
> Nov 12 15:09:32 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Consumed 1.567s CPU time.
> Nov 12 15:09:32 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 4173 (bash) in 
> control group while starting unit. Ignoring.
> Nov 12 15:09:32 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:09:32 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 4504 (podman) in 
> control group while starting unit. Ignoring.
> Nov 12 15:09:32 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:09:32 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 4622 (conmon) in 
> control group while starting unit. Ignoring.
> Nov 12 15:09:32 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:09:32 cephnode03 systemd[1]: Starting Ceph osd.8 for 
> d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c...
> Nov 12 15:09:32 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 4173 (bash) in 
> control group while starting unit. Ignoring.
> Nov 12 15:09:32 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:09:32 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 4504 (podman) in 
> control group while starting unit. Ignoring.
> Nov 12 15:09:32 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:09:32 cephnode03 systemd[1]: ceph-d82ced8c-1f6e-11ec-
> a2e4-00fd45fcaf9c@osd.8.service: Found left-over process 4622 (conmon) in 
> control group while starting unit. Ignoring.
> Nov 12 15:09:32 cephnode03 systemd[1]: This usually indicates unclean 
> termination of a previous run, or service implementation deficiencies.
> Nov 12 15:09:33 cephnode03 podman[5319]: 2021-11-12 15:09:33.046838444 +0000
> 
 GMT m=+0.214927733 container stop
> 3ccc8458beb0820484c501621bec99011c5af24744fdfebd47b11f342a47fb72 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:09:33 cephnode03 podman[5319]: 2021-11-12 15:09:33.085449517 +0000
> 
 GMT m=+0.253538787 container died
> 3ccc8458beb0820484c501621bec99011c5af24744fdfebd47b11f342a47fb72 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:09:33 cephnode03 podman[5319]: 2021-11-12 15:09:33.23814693 +0000
> 
 GMT m=+0.406236223 container remove
> 3ccc8458beb0820484c501621bec99011c5af24744fdfebd47b11f342a47fb72 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a7>
> Nov 12 15:09:33 cephnode03 bash[5319]: 
> 3ccc8458beb0820484c501621bec99011c5af24744fdfebd47b11f342a47fb72
> Nov 12 15:09:34 cephnode03 podman[5606]: 2021-11-12 15:09:34.14523469 +0000
> 
 GMT m=+0.442976747 container create
> 30d3afe7236ae8f5553ed3ad108596b22a74ab17fb132ccf433caba7ca959de2 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a7>
> Nov 12 15:09:34 cephnode03 podman[5606]: 2021-11-12 15:09:34.329560878 +0000
> 
 GMT m=+0.627302934 container init
> 30d3afe7236ae8f5553ed3ad108596b22a74ab17fb132ccf433caba7ca959de2 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a78>
> Nov 12 15:09:34 cephnode03 podman[5606]: 2021-11-12 15:09:34.422080643 +0000
> 
 GMT m=+0.719822670 container start
> 30d3afe7236ae8f5553ed3ad108596b22a74ab17fb132ccf433caba7ca959de2 
> (image=quay.io/ceph/
> ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a7>
> 
> I am trying to understand what is happening. Any suggestions are most
> welcome.
> Thanks,
> 
> Stephen.
> 
> 
> 
> 
> On Saturday, 13 November 2021 14:38:28 GMT 胡 玮文 wrote:
> 
> > Hi Stephen,
> > 
> > I think the output you posted is pretty normal; there is no systemd in
> > the container, hence the error.
> 
> > You still need to find the logs. You may try “sudo cephadm logs --name
> > osd.0”. If that still fails, you should try to run the ceph-osd daemon
> > manually.
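[A note on the suggestion above: on a cephadm deployment the daemon logs live in journald under a unit named after the cluster fsid and the daemon. The sketch below only composes that unit name, using the fsid and osd.0 from this thread as illustrative values, so the logs can also be read directly with journalctl if `cephadm logs` fails.]

```shell
# Sketch: compose the journald unit name cephadm uses for a daemon.
# fsid and daemon name are taken from this thread; substitute your own.
fsid="d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c"
daemon="osd.0"
unit="ceph-${fsid}@${daemon}.service"
# On the host, this command would show the daemon's recent log lines:
echo "journalctl -u $unit -n 50"
```

[Running `journalctl -u ceph-d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c@osd.0.service` on the host should show the same messages `cephadm logs --name osd.0` retrieves.]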
> 
> > Weiwen Hu
> > 
> > 发件人: Stephen J. Thompson<mailto:stephen@xxxxxxxxxxxxxxxxxxxxx>
> > 发送时间: 2021年11月13日 20:19
> > 收件人: ceph-users@xxxxxxx<mailto:ceph-users@xxxxxxx>
> > 抄送: Stephen J. Thompson<mailto:stephen@xxxxxxxxxxxxxxxxxxxxx>
> > 主题:  Re: OSDs not starting up
> > 
> > Hello all,
> > 
> > I am still digging into this. I disabled this OSD from starting at boot,
> > then rebooted the node.
> > 
> > 
> > I then tried doing the following:
> > 
> > sudo cephadm shell
> > 
> > And the following was the result. To me it seems to indicate that the OSD
> > drive is ok and can be decrypted but does still not run.
> > 
> > Inferring fsid d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c
> > Using recent ceph image quay.io/ceph/ceph@sha256:285d2abb6e74bdc6e15e1af585aa19f132045651b7a80eb77b9cec8a785ff330
> > 
> > root@cephnode02:/# ceph-volume lvm activate 0 2f5d17d3-3308-4f42-867c-7ec8639bde18
> > Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
> > Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/lockbox.keyring --create-keyring --name client.osd-lockbox.2f5d17d3-3308-4f42-867c-7ec8639bde18 --add-key AQDNzFFh11ooIRAAru8xYXAXL1/n/75AOJm2KA==
> >  stdout: creating /var/lib/ceph/osd/ceph-0/lockbox.keyring
> > added entity client.osd-lockbox.2f5d17d3-3308-4f42-867c-7ec8639bde18 auth(key=AQDNzFFh11ooIRAAru8xYXAXL1/n/75AOJm2KA==)
> > Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
> > lockbox.keyring
> > Running command: /usr/bin/ceph --cluster ceph --name client.osd-lockbox.2f5d17d3-3308-4f42-867c-7ec8639bde18 --keyring /var/lib/ceph/osd/ceph-0/lockbox.keyring config-key get dm-crypt/osd/2f5d17d3-3308-4f42-867c-7ec8639bde18/luks
> > Running command: /usr/sbin/cryptsetup --key-file - --allow-discards luksOpen /dev/ceph-779b00ee-dbb5-4542-859c-b6338b01efe0/osd-block-2f5d17d3-3308-4f42-867c-7ec8639bde18 6oP2t6-fuMx-25XU-KiIt-A28Q-fzng-K7ouf6
> > Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
> > Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/mapper/6oP2t6-fuMx-25XU-KiIt-A28Q-fzng-K7ouf6 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
> > Running command: /usr/bin/ln -snf /dev/mapper/6oP2t6-fuMx-25XU-KiIt-A28Q-fzng-K7ouf6 /var/lib/ceph/osd/ceph-0/block
> > Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
> > Running command: /usr/bin/chown -R ceph:ceph /dev/dm-7
> > Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
> > Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-2f5d17d3-3308-4f42-867c-7ec8639bde18
> >  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-2f5d17d3-3308-4f42-867c-7ec8639bde18.service -> /usr/lib/systemd/system/ceph-volume@.service.
> 
> > Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
> >  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service -> /usr/lib/systemd/system/ceph-osd@.service.
> > Running command: /usr/bin/systemctl start ceph-osd@0
> >  stderr: Failed to connect to bus: No such file or directory
> > -->  RuntimeError: command returned non-zero exit status: 1
> > 
> > Any ideas?
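[The `Failed to connect to bus` error in the trace above is `systemctl` being invoked inside the cephadm shell container, where no systemd is running. `ceph-volume lvm activate` accepts a `--no-systemd` flag that skips the unit enable/start steps so the activation itself can complete; the daemon is then started from the host. The sketch below only assembles that command from the OSD id and fsid shown in the output above — it does not run ceph-volume, and the values are illustrative.]

```shell
# Sketch: activate without letting ceph-volume touch systemd (which is
# absent inside the container). osd id and osd fsid are from the output above.
osd_id=0
osd_fsid="2f5d17d3-3308-4f42-867c-7ec8639bde18"
cmd="ceph-volume lvm activate --no-systemd $osd_id $osd_fsid"
# Inside `cephadm shell`, one would run:
echo "$cmd"
```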
> > 
> > Thanks,
> > 
> > Stephen
> > 
> > 
> > 
> > On Friday, 12 November 2021 21:32:41 GMT Stephen J. Thompson wrote:
> > 
> > 
> > > Hello Igor,
> > >
> > > The OSD logs are empty.
> > >
> > > Enclosed is the end of the ceph-volume.log.
> > >
> > > Regards,
> > >
> > > Stephen
> > >
> > > [2021-11-12 21:23:54,191][ceph_volume.main][INFO  ] Running command:
> > > ceph-
> > > volume  inventory --format=json --filter-for-batch
> > > [2021-11-12 21:23:54,194][ceph_volume.process][INFO  ] Running command:
> > > /usr/ bin/lsblk -plno KNAME,NAME,TYPE
> > > [2021-11-12 21:23:54,202][ceph_volume.process][INFO  ] stdout /dev/sda
> > > /dev/ sda
> > > disk
> > > [2021-11-12 21:23:54,202][ceph_volume.process][INFO  ] stdout /dev/sdb
> > > /dev/ sdb
> > > disk
> > > [2021-11-12 21:23:54,202][ceph_volume.process][INFO  ] stdout /dev/sdc
> > > /dev/ sdc
> > > disk
> > > [2021-11-12 21:23:54,203][ceph_volume.process][INFO  ] stdout /dev/sdd
> > > /dev/ sdd
> > > disk
> > > [2021-11-12 21:23:54,203][ceph_volume.process][INFO  ] stdout /dev/sdd1
> > > /dev/ sdd1
> > > part
> > > [2021-11-12 21:23:54,203][ceph_volume.process][INFO  ] stdout /dev/sdd2
> > > /dev/ sdd2
> > > part
> > > [2021-11-12 21:23:54,203][ceph_volume.process][INFO  ] stdout /dev/sdd5
> > > /dev/ sdd5
> > > part
> > > [2021-11-12 21:23:54,203][ceph_volume.process][INFO  ] stdout /dev/dm-0
> > > /dev/ mapper/ceph--779b00ee--dbb5--4542--859c--b6338b01efe0-osd--
> > > block--2f5d17d3--3308--4f42--867c--7ec8639bde18 lvm
> > > [2021-11-12 21:23:54,203][ceph_volume.process][INFO  ] stdout /dev/dm-1
> > > /dev/ mapper/ceph--41cbaf58--d703--4bae--9095--f1b590b8337b-osd--
> > > block--99e1e1b2--3cfe--48e0--8627--acfe5008c1c5 lvm
> > > [2021-11-12 21:23:54,203][ceph_volume.process][INFO  ] stdout /dev/dm-2
> > > /dev/ mapper/ceph--09c84540--06e3--496f--bf90--45f59748768a-osd--
> > > block--60ad9fb3--8144--4d06--83e6--b90e9d52dcaa lvm
> > > [2021-11-12 21:23:54,203][ceph_volume.process][INFO  ] stdout /dev/dm-3
> > > /dev/ mapper/sda5_crypt
> > > crypt
> > > [2021-11-12 21:23:54,203][ceph_volume.process][INFO  ] stdout /dev/dm-4
> > > /dev/ mapper/cephnode02--vg-root
> > > lvm
> > > [2021-11-12 21:23:54,203][ceph_volume.process][INFO  ] stdout /dev/dm-5
> > > /dev/ mapper/cephnode02--vg-swap_1
> > > lvm
> > > [2021-11-12 21:23:54,203][ceph_volume.process][INFO  ] stdout /dev/dm-6
> > > /dev/ mapper/fwR6Nz-3DbF-Ac2K-jhVp-DtsV-A1mt-VMUefw
> > > crypt
> > > [2021-11-12 21:23:54,210][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sda -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
> > > [2021-11-12 21:23:54,910][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sda
> > > [2021-11-12 21:23:54,917][ceph_volume.process][INFO  ] stdout
> > > NAME="sda"
> > > KNAME="sda" MAJ:MIN="8:0" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL=""
> > > UUID="G9naIi-2sgq-FznV-3a5U-8VH6-0G9C-HRQvGf" RO="0" RM="0"
> > > MODEL="TOSHIBA
> > > HDWN180 " SIZE="7.3T" STATE="running" OWNER="root" GROUP="disk"
> > > MODE="brw-
> > > rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED="mq-
> > > deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B"
> > > DISC-ZERO="0" PKNAME="" PARTLABEL=""
> > > [2021-11-12 21:23:54,918][ceph_volume.process][INFO  ] Running command:
> > > /usr/ sbin/blkid -c /dev/null -p /dev/sda
> > > [2021-11-12 21:23:54,922][ceph_volume.process][INFO  ] stdout /dev/sda:
> > > UUID="G9naIi-2sgq-FznV-3a5U-8VH6-0G9C-HRQvGf" VERSION="LVM2 001"
> > > TYPE="LVM2_member" USAGE="raid"
> > > [2021-11-12 21:23:54,923][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sda
> > > [2021-11-12 21:23:54,978][ceph_volume.process][INFO  ] stdout
> > > ceph-779b00ee-
> > > dbb5-4542-859c-b6338b01efe0";"1";"1";"wz--n-";"1907721";"0";"4194304
> > > [2021-11-12 21:23:54,979][ceph_volume.process][INFO  ] Running command:
> > > /usr/ sbin/pvs --noheadings --readonly --separator=";" -a --units=b
> > > --nosuffix -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/sda
> > > [2021-11-12 21:23:55,118][ceph_volume.process][INFO  ] stdout ceph.block_device=/dev/ceph-779b00ee-dbb5-4542-859c-b6338b01efe0/osd-block-2f5d17d3-3308-4f42-867c-7ec8639bde18,ceph.block_uuid=6oP2t6-fuMx-25XU-KiIt-A28Q-fzng-K7ouf6,ceph.cephx_lockbox_secret=AQDNzFFh11ooIRAAru8xYXAXL1/n/75AOJm2KA==,ceph.cluster_fsid=d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=1,ceph.osd_fsid=2f5d17d3-3308-4f42-867c-7ec8639bde18,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0";"/dev/ceph-779b00ee-dbb5-4542-859c-b6338b01efe0/osd-block-2f5d17d3-3308-4f42-867c-7ec8639bde18";"osd-block-2f5d17d3-3308-4f42-867c-7ec8639bde18";"ceph-779b00ee-dbb5-4542-859c-b6338b01efe0";"6oP2t6-fuMx-25XU-KiIt-A28Q-fzng-K7ouf6";"8001561821184
> > > [2021-11-12 21:23:55,119][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/ sda to check for BlueStore label
> > > [2021-11-12 21:23:55,119][ceph_volume.process][INFO  ] Running command:
> > > /usr/ sbin/udevadm info --query=property /dev/sda
> > > [2021-11-12 21:23:55,131][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-path/pci-0000:00:1f.2-ata-1.0 /dev/disk/by-id/wwn-0x500003998c8012f6 /dev/disk/by-id/ata-TOSHIBA_HDWN180_99RZK0PIFAVG /dev/disk/by-path/pci-0000:00:1f.2-ata-1 /dev/disk/by-id/lvm-pv-uuid-G9naIi-2sgq-FznV-3a5U-8VH6-0G9C-HRQvGf
> > > [2021-11-12 21:23:55,131][ceph_volume.process][INFO  ] stdout
> > > DEVNAME=/dev/sda [2021-11-12 21:23:55,131][ceph_volume.process][INFO  ]
> > > stdout DEVPATH=/
> > > devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda
> > > 
> > > [2021-11-12 21:23:55,132][ceph_volume.process][INFO  ] stdout
> > > DEVTYPE=disk
> > > [2021-11-12 21:23:55,132][ceph_volume.process][INFO  ] stdout ID_ATA=1
> > > [2021-11-12 21:23:55,132][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_DOWNLOAD_MICROCODE=1
> > > [2021-11-12 21:23:55,132][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM=1
> > > [2021-11-12 21:23:55,132][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_CURRENT_VALUE=128
> > > [2021-11-12 21:23:55,132][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_ENABLED=1
> > > [2021-11-12 21:23:55,132][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA=1
> > > [2021-11-12 21:23:55,132][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA_ENABLED=1
> > > [2021-11-12 21:23:55,132][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM=1
> > > [2021-11-12 21:23:55,132][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM_ENABLED=1
> > > [2021-11-12 21:23:55,132][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY=1
> > > [2021-11-12 21:23:55,133][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
> > > [2021-11-12 21:23:55,133][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=66362
> > > [2021-11-12 21:23:55,133][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=66362
> > > [2021-11-12 21:23:55,133][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART=1
> > > [2021-11-12 21:23:55,133][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART_ENABLED=1
> > > [2021-11-12 21:23:55,133][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_ROTATION_RATE_RPM=7200
> > > [2021-11-12 21:23:55,133][ceph_volume.process][INFO  ] stdout ID_ATA_SATA=1
> > > [2021-11-12 21:23:55,133][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN1=1
> > > [2021-11-12 21:23:55,133][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN2=1
> > > [2021-11-12 21:23:55,133][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE=1
> > > [2021-11-12 21:23:55,133][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE_ENABLED=1
> > > [2021-11-12 21:23:55,134][ceph_volume.process][INFO  ] stdout
> > > ID_BUS=ata
> > > [2021-11-12 21:23:55,134][ceph_volume.process][INFO  ] stdout
> > > ID_FS_TYPE=LVM2_member
> > > [2021-11-12 21:23:55,134][ceph_volume.process][INFO  ] stdout
> > > ID_FS_USAGE=raid [2021-11-12 21:23:55,134][ceph_volume.process][INFO  ]
> > > stdout
> > > ID_FS_UUID=G9naIi-2sgq-FznV-3a5U-8VH6-0G9C-HRQvGf
> > > [2021-11-12 21:23:55,134][ceph_volume.process][INFO  ] stdout
> > > ID_FS_UUID_ENC=G9naIi-2sgq-FznV-3a5U-8VH6-0G9C-HRQvGf
> > > [2021-11-12 21:23:55,134][ceph_volume.process][INFO  ] stdout
> > > ID_FS_VERSION=LVM2 001
> > > [2021-11-12 21:23:55,134][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL=TOSHIBA_HDWN180
> > > [2021-11-12 21:23:55,134][ceph_volume.process][INFO  ] stdout ID_MODEL_ENC=TOSHIBA\x20HDWN180\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
> > > [2021-11-12 21:23:55,134][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:1f.2-ata-1.0
> > > [2021-11-12 21:23:55,134][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_ATA_COMPAT=pci-0000:00:1f.2-ata-1
> > > [2021-11-12 21:23:55,134][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_TAG=pci-0000_00_1f_2-ata-1_0
> > > [2021-11-12 21:23:55,134][ceph_volume.process][INFO  ] stdout
> > > ID_REVISION=GX2M [2021-11-12 21:23:55,135][ceph_volume.process][INFO  ]
> > > stdout
> > > ID_SERIAL=TOSHIBA_HDWN180_99RZK0PIFAVG
> > > [2021-11-12 21:23:55,135][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL_SHORT=99RZK0PIFAVG
> > > [2021-11-12 21:23:55,135][ceph_volume.process][INFO  ] stdout
> > > ID_TYPE=disk
> > > [2021-11-12 21:23:55,135][ceph_volume.process][INFO  ] stdout
> > > ID_WWN=0x500003998c8012f6
> > > [2021-11-12 21:23:55,135][ceph_volume.process][INFO  ] stdout
> > > ID_WWN_WITH_EXTENSION=0x500003998c8012f6
> > > [2021-11-12 21:23:55,135][ceph_volume.process][INFO  ] stdout MAJOR=8
> > > [2021-11-12 21:23:55,135][ceph_volume.process][INFO  ] stdout MINOR=0
> > > [2021-11-12 21:23:55,135][ceph_volume.process][INFO  ] stdout
> > > SUBSYSTEM=block [2021-11-12 21:23:55,135][ceph_volume.process][INFO  ]
> > > stdout SYSTEMD_ALIAS=/ dev/block/8:0
> > > [2021-11-12 21:23:55,135][ceph_volume.process][INFO  ] stdout
> > > SYSTEMD_READY=1 [2021-11-12 21:23:55,135][ceph_volume.process][INFO  ]
> > > stdout
> > > SYSTEMD_WANTS=lvm2-pvscan@8:0.service
> > > [2021-11-12 21:23:55,135][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
> > > [2021-11-12 21:23:55,136][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=4467643
> > > [2021-11-12 21:23:55,136][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sdb -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
> > > [2021-11-12 21:23:55,198][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdb
> > > [2021-11-12 21:23:55,205][ceph_volume.process][INFO  ] stdout
> > > NAME="sdb"
> > > KNAME="sdb" MAJ:MIN="8:16" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL=""
> > > UUID="SOElW1-Wjj1-TV8Q-TKsa-eRBG-gzGr-aCsNdu" RO="0" RM="0" MODEL="WDC
> > > WD30EFRX-68E" SIZE="2.7T" STATE="running" OWNER="root" GROUP="disk"
> > > MODE="brw- rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1"
> > > SCHED="mq- deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B"
> > > DISC-MAX="0B"
> > > DISC-ZERO="0" PKNAME="" PARTLABEL=""
> > > [2021-11-12 21:23:55,205][ceph_volume.process][INFO  ] Running command:
> > > /usr/ sbin/blkid -c /dev/null -p /dev/sdb
> > > [2021-11-12 21:23:55,209][ceph_volume.process][INFO  ] stdout /dev/sdb:
> > > UUID="SOElW1-Wjj1-TV8Q-TKsa-eRBG-gzGr-aCsNdu" VERSION="LVM2 001"
> > > TYPE="LVM2_member" USAGE="raid"
> > > [2021-11-12 21:23:55,209][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdb
> > > [2021-11-12 21:23:55,266][ceph_volume.process][INFO  ] stdout
> > > ceph-41cbaf58-
> > > d703-4bae-9095-f1b590b8337b";"1";"1";"wz--n-";"715396";"0";"4194304
> > > [2021-11-12 21:23:55,267][ceph_volume.process][INFO  ] Running command:
> > > /usr/ sbin/pvs --noheadings --readonly --separator=";" -a --units=b
> > > --nosuffix -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/sdb
> > > [2021-11-12 21:23:55,338][ceph_volume.process][INFO  ] stdout ceph.block_device=/dev/ceph-41cbaf58-d703-4bae-9095-f1b590b8337b/osd-block-99e1e1b2-3cfe-48e0-8627-acfe5008c1c5,ceph.block_uuid=fwR6Nz-3DbF-Ac2K-jhVp-DtsV-A1mt-VMUefw,ceph.cephx_lockbox_secret=AQAyQ45hPpzqDxAAx2vyba/tTeQmZl1Sllh8Mg==,ceph.cluster_fsid=d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=1,ceph.osd_fsid=99e1e1b2-3cfe-48e0-8627-acfe5008c1c5,ceph.osd_id=13,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0";"/dev/ceph-41cbaf58-d703-4bae-9095-f1b590b8337b/osd-block-99e1e1b2-3cfe-48e0-8627-acfe5008c1c5";"osd-block-99e1e1b2-3cfe-48e0-8627-acfe5008c1c5";"ceph-41cbaf58-d703-4bae-9095-f1b590b8337b";"fwR6Nz-3DbF-Ac2K-jhVp-DtsV-A1mt-VMUefw";"3000588304384
> > > [2021-11-12 21:23:55,339][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/ sdb to check for BlueStore label
> > > [2021-11-12 21:23:55,339][ceph_volume.process][INFO  ] Running command:
> > > /usr/ sbin/udevadm info --query=property /dev/sdb
> > > [2021-11-12 21:23:55,351][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/lvm-pv-uuid-SOElW1-Wjj1-TV8Q-TKsa-eRBG-gzGr-aCsNdu /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N99Z68Y4 /dev/disk/by-id/wwn-0x50014ee20a61fcc0 /dev/disk/by-path/pci-0000:00:1f.2-ata-3 /dev/disk/by-path/pci-0000:00:1f.2-ata-3.0
> > > [2021-11-12 21:23:55,351][ceph_volume.process][INFO  ] stdout
> > > DEVNAME=/dev/sdb [2021-11-12 21:23:55,352][ceph_volume.process][INFO  ]
> > > stdout DEVPATH=/
> > > devices/pci0000:00/0000:00:1f.2/ata3/host2/target2:0:0/2:0:0:0/block/sdb
> > > 
> > > [2021-11-12 21:23:55,352][ceph_volume.process][INFO  ] stdout
> > > DEVTYPE=disk
> > > [2021-11-12 21:23:55,352][ceph_volume.process][INFO  ] stdout ID_ATA=1
> > > [2021-11-12 21:23:55,352][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_DOWNLOAD_MICROCODE=1
> > > [2021-11-12 21:23:55,352][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA=1
> > > [2021-11-12 21:23:55,352][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA_ENABLED=1
> > > [2021-11-12 21:23:55,352][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM=1
> > > [2021-11-12 21:23:55,352][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM_ENABLED=1
> > > [2021-11-12 21:23:55,352][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PUIS=1
> > > [2021-11-12 21:23:55,352][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PUIS_ENABLED=0
> > > [2021-11-12 21:23:55,352][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY=1
> > > [2021-11-12 21:23:55,353][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
> > > [2021-11-12 21:23:55,353][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=420
> > > [2021-11-12 21:23:55,353][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=420
> > > [2021-11-12 21:23:55,353][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART=1
> > > [2021-11-12 21:23:55,353][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART_ENABLED=1
> > > [2021-11-12 21:23:55,353][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_ROTATION_RATE_RPM=5400
> > > [2021-11-12 21:23:55,353][ceph_volume.process][INFO  ] stdout ID_ATA_SATA=1
> > > [2021-11-12 21:23:55,353][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN1=1
> > > [2021-11-12 21:23:55,353][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN2=1
> > > [2021-11-12 21:23:55,353][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE=1
> > > [2021-11-12 21:23:55,353][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE_ENABLED=1
> > > [2021-11-12 21:23:55,354][ceph_volume.process][INFO  ] stdout
> > > ID_BUS=ata
> > > [2021-11-12 21:23:55,354][ceph_volume.process][INFO  ] stdout
> > > ID_FS_TYPE=LVM2_member
> > > [2021-11-12 21:23:55,354][ceph_volume.process][INFO  ] stdout
> > > ID_FS_USAGE=raid [2021-11-12 21:23:55,354][ceph_volume.process][INFO  ]
> > > stdout
> > > ID_FS_UUID=SOElW1-Wjj1-TV8Q-TKsa-eRBG-gzGr-aCsNdu
> > > [2021-11-12 21:23:55,354][ceph_volume.process][INFO  ] stdout
> > > ID_FS_UUID_ENC=SOElW1-Wjj1-TV8Q-TKsa-eRBG-gzGr-aCsNdu
> > > [2021-11-12 21:23:55,354][ceph_volume.process][INFO  ] stdout
> > > ID_FS_VERSION=LVM2 001
> > > [2021-11-12 21:23:55,354][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL=WDC_WD30EFRX-68EUZN0
> > > [2021-11-12 21:23:55,354][ceph_volume.process][INFO  ] stdout ID_MODEL_ENC=WDC\x20WD30EFRX-68EUZN0\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
> > > [2021-11-12 21:23:55,354][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:1f.2-ata-3.0
> > > [2021-11-12 21:23:55,354][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_ATA_COMPAT=pci-0000:00:1f.2-ata-3
> > > [2021-11-12 21:23:55,354][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_TAG=pci-0000_00_1f_2-ata-3_0
> > > [2021-11-12 21:23:55,354][ceph_volume.process][INFO  ] stdout
> > > ID_REVISION=80.00A80
> > > [2021-11-12 21:23:55,355][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL=WDC_WD30EFRX-68EUZN0_WD-WCC4N99Z68Y4
> > > [2021-11-12 21:23:55,355][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL_SHORT=WD-WCC4N99Z68Y4
> > > [2021-11-12 21:23:55,355][ceph_volume.process][INFO  ] stdout
> > > ID_TYPE=disk
> > > [2021-11-12 21:23:55,355][ceph_volume.process][INFO  ] stdout
> > > ID_WWN=0x50014ee20a61fcc0
> > > [2021-11-12 21:23:55,355][ceph_volume.process][INFO  ] stdout
> > > ID_WWN_WITH_EXTENSION=0x50014ee20a61fcc0
> > > [2021-11-12 21:23:55,355][ceph_volume.process][INFO  ] stdout MAJOR=8
> > > [2021-11-12 21:23:55,355][ceph_volume.process][INFO  ] stdout MINOR=16
> > > [2021-11-12 21:23:55,355][ceph_volume.process][INFO  ] stdout
> > > SUBSYSTEM=block [2021-11-12 21:23:55,355][ceph_volume.process][INFO  ]
> > > stdout SYSTEMD_ALIAS=/ dev/block/8:16
> > > [2021-11-12 21:23:55,355][ceph_volume.process][INFO  ] stdout
> > > SYSTEMD_READY=1 [2021-11-12 21:23:55,355][ceph_volume.process][INFO  ]
> > > stdout
> > > SYSTEMD_WANTS=lvm2-pvscan@8:16.service
> > > [2021-11-12 21:23:55,355][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
> > > [2021-11-12 21:23:55,356][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=4535845
> > > [2021-11-12 21:23:55,356][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b --nosuffix -S lv_path=/dev/sdc -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
> > > [2021-11-12 21:23:55,406][ceph_volume.process][INFO  ] Running command: /usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdc
> > > [2021-11-12 21:23:55,412][ceph_volume.process][INFO  ] stdout
> > > NAME="sdc"
> > > KNAME="sdc" MAJ:MIN="8:32" FSTYPE="LVM2_member" MOUNTPOINT="" LABEL=""
> > > UUID="ywCsJt-kk1E-rTKp-eBLr-ffSo-IF0j-3ntdjZ" RO="0" RM="0"
> > > MODEL="TOSHIBA
> > > HDWD130 " SIZE="2.7T" STATE="running" OWNER="root" GROUP="disk"
> > > MODE="brw-
> > > rw----" ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED="mq-
> > > deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B"
> > > DISC-ZERO="0" PKNAME="" PARTLABEL=""
> > > [2021-11-12 21:23:55,413][ceph_volume.process][INFO  ] Running command:
> > > /usr/ sbin/blkid -c /dev/null -p /dev/sdc
> > > [2021-11-12 21:23:55,416][ceph_volume.process][INFO  ] stdout /dev/sdc:
> > > UUID="ywCsJt-kk1E-rTKp-eBLr-ffSo-IF0j-3ntdjZ" VERSION="LVM2 001"
> > > TYPE="LVM2_member" USAGE="raid"
> > > [2021-11-12 21:23:55,417][ceph_volume.process][INFO  ] Running command: /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdc
> > > [2021-11-12 21:23:55,466][ceph_volume.process][INFO  ] stdout
> > > ceph-09c84540-06e3-496f-bf90-45f59748768a";"1";"1";"wz--
> > > n-";"715396";"0";"4194304
> > > [2021-11-12 21:23:55,466][ceph_volume.process][INFO  ] Running command:
> > > /usr/ sbin/pvs --noheadings --readonly --separator=";" -a --units=b
> > > --nosuffix -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/sdc
> > > [2021-11-12 21:23:55,542][ceph_volume.process][INFO  ] stdout ceph.block_device=/dev/ceph-09c84540-06e3-496f-bf90-45f59748768a/osd-block-60ad9fb3-8144-4d06-83e6-b90e9d52dcaa,ceph.block_uuid=h352xv-O3aB-qRAQ-rWUa-alhz-7b9H-o30rfs,ceph.cephx_lockbox_secret=AQB+/FFhtwkAIxAAWVWk87T5duTfEIDtQGGqjg==,ceph.cluster_fsid=d82ced8c-1f6e-11ec-a2e4-00fd45fcaf9c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=1,ceph.osd_fsid=60ad9fb3-8144-4d06-83e6-b90e9d52dcaa,ceph.osd_id=4,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0";"/dev/ceph-09c84540-06e3-496f-bf90-45f59748768a/osd-block-60ad9fb3-8144-4d06-83e6-b90e9d52dcaa";"osd-block-60ad9fb3-8144-4d06-83e6-b90e9d52dcaa";"ceph-09c84540-06e3-496f-bf90-45f59748768a";"h352xv-O3aB-qRAQ-rWUa-alhz-7b9H-o30rfs";"3000588304384
> > > [2021-11-12 21:23:55,543][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/ sdc to check for BlueStore label
> > > [2021-11-12 21:23:55,543][ceph_volume.process][INFO  ] Running command:
> > > /usr/ sbin/udevadm info --query=property /dev/sdc
> > > [2021-11-12 21:23:55,555][ceph_volume.process][INFO  ] stdout DEVLINKS=/dev/disk/by-id/lvm-pv-uuid-ywCsJt-kk1E-rTKp-eBLr-ffSo-IF0j-3ntdjZ /dev/disk/by-id/wwn-0x5000039fe6d5d3d9 /dev/disk/by-path/pci-0000:00:1f.2-ata-4.0 /dev/disk/by-path/pci-0000:00:1f.2-ata-4 /dev/disk/by-id/ata-TOSHIBA_HDWD130_674K0JUAS
> > > [2021-11-12 21:23:55,556][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/sdc
> > > [2021-11-12 21:23:55,556][ceph_volume.process][INFO  ] stdout DEVPATH=/
> > > devices/pci0000:00/0000:00:1f.2/ata4/host3/target3:0:0/3:0:0:0/block/sdc
> > > 
> > > [2021-11-12 21:23:55,556][ceph_volume.process][INFO  ] stdout
> > > DEVTYPE=disk
> > > [2021-11-12 21:23:55,556][ceph_volume.process][INFO  ] stdout ID_ATA=1
> > > [2021-11-12 21:23:55,556][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_DOWNLOAD_MICROCODE=1
> > > [2021-11-12 21:23:55,556][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM=1
> > > [2021-11-12 21:23:55,556][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_ENABLED=0
> > > [2021-11-12 21:23:55,557][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA=1
> > > [2021-11-12 21:23:55,557][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA_ENABLED=1
> > > [2021-11-12 21:23:55,557][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM=1
> > > [2021-11-12 21:23:55,557][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM_ENABLED=1
> > > [2021-11-12 21:23:55,557][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PUIS=1
> > > [2021-11-12 21:23:55,557][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PUIS_ENABLED=0
> > > [2021-11-12 21:23:55,557][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY=1
> > > [2021-11-12 21:23:55,557][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
> > > [2021-11-12 21:23:55,558][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=492
> > > [2021-11-12 21:23:55,558][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART=1
> > > [2021-11-12 21:23:55,558][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART_ENABLED=1
> > > [2021-11-12 21:23:55,558][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_ROTATION_RATE_RPM=7200
> > > [2021-11-12 21:23:55,558][ceph_volume.process][INFO  ] stdout ID_ATA_SATA=1
> > > [2021-11-12 21:23:55,558][ceph_volume.process][INFO  ] stdout ID_ATA_SATA_SIGNAL_RATE_GEN1=1
> > > [2021-11-12 21:23:55,558][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN2=1
> > > [2021-11-12 21:23:55,558][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE=1
> > > [2021-11-12 21:23:55,558][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE_ENABLED=1
> > > [2021-11-12 21:23:55,558][ceph_volume.process][INFO  ] stdout
> > > ID_BUS=ata
> > > [2021-11-12 21:23:55,558][ceph_volume.process][INFO  ] stdout
> > > ID_FS_TYPE=LVM2_member
> > > [2021-11-12 21:23:55,558][ceph_volume.process][INFO  ] stdout
> > > ID_FS_USAGE=raid [2021-11-12 21:23:55,559][ceph_volume.process][INFO  ]
> > > stdout
> > > ID_FS_UUID=ywCsJt-kk1E-rTKp-eBLr-ffSo-IF0j-3ntdjZ
> > > [2021-11-12 21:23:55,559][ceph_volume.process][INFO  ] stdout
> > > ID_FS_UUID_ENC=ywCsJt-kk1E-rTKp-eBLr-ffSo-IF0j-3ntdjZ
> > > [2021-11-12 21:23:55,559][ceph_volume.process][INFO  ] stdout
> > > ID_FS_VERSION=LVM2 001
> > > [2021-11-12 21:23:55,559][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL=TOSHIBA_HDWD130
> > > [2021-11-12 21:23:55,559][ceph_volume.process][INFO  ] stdout ID_MODEL_ENC=TOSHIBA\x20HDWD130\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
> > > [2021-11-12 21:23:55,559][ceph_volume.process][INFO  ] stdout ID_PATH=pci-0000:00:1f.2-ata-4.0
> > > [2021-11-12 21:23:55,559][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_ATA_COMPAT=pci-0000:00:1f.2-ata-4
> > > [2021-11-12 21:23:55,559][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_TAG=pci-0000_00_1f_2-ata-4_0
> > > [2021-11-12 21:23:55,559][ceph_volume.process][INFO  ] stdout
> > > ID_REVISION=MX6OACF0
> > > [2021-11-12 21:23:55,559][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL=TOSHIBA_HDWD130_674K0JUAS
> > > [2021-11-12 21:23:55,559][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL_SHORT=674K0JUAS
> > > [2021-11-12 21:23:55,560][ceph_volume.process][INFO  ] stdout
> > > ID_TYPE=disk
> > > [2021-11-12 21:23:55,560][ceph_volume.process][INFO  ] stdout
> > > ID_WWN=0x5000039fe6d5d3d9
> > > [2021-11-12 21:23:55,560][ceph_volume.process][INFO  ] stdout
> > > ID_WWN_WITH_EXTENSION=0x5000039fe6d5d3d9
> > > [2021-11-12 21:23:55,560][ceph_volume.process][INFO  ] stdout MAJOR=8
> > > [2021-11-12 21:23:55,560][ceph_volume.process][INFO  ] stdout MINOR=32
> > > [2021-11-12 21:23:55,560][ceph_volume.process][INFO  ] stdout
> > > SUBSYSTEM=block
> > > [2021-11-12 21:23:55,560][ceph_volume.process][INFO  ] stdout
> > > SYSTEMD_ALIAS=/dev/block/8:32
> > > [2021-11-12 21:23:55,560][ceph_volume.process][INFO  ] stdout
> > > SYSTEMD_READY=1
> > > [2021-11-12 21:23:55,560][ceph_volume.process][INFO  ] stdout
> > > SYSTEMD_WANTS=lvm2-pvscan@8:32.service
> > > [2021-11-12 21:23:55,560][ceph_volume.process][INFO  ] stdout
> > > TAGS=:systemd:
> > > [2021-11-12 21:23:55,560][ceph_volume.process][INFO  ] stdout
> > > USEC_INITIALIZED=4505597
> > > [2021-11-12 21:23:55,561][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b
> > > --nosuffix -S lv_path=/dev/sdd -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
> > > [2021-11-12 21:23:55,622][ceph_volume.process][INFO  ] Running command:
> > > /usr/bin/lsblk --nodeps -P -o
> > > NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdd
> > > [2021-11-12 21:23:55,630][ceph_volume.process][INFO  ] stdout NAME="sdd"
> > > KNAME="sdd" MAJ:MIN="8:48" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0"
> > > RM="0" MODEL="CT240BX500SSD1  " SIZE="223.6G" STATE="running" OWNER="root"
> > > GROUP="disk" MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512"
> > > ROTA="0" SCHED="mq-deadline" TYPE="disk" DISC-ALN="0" DISC-GRAN="512B"
> > > DISC-MAX="2G" DISC-ZERO="0" PKNAME="" PARTLABEL=""
> > > [2021-11-12 21:23:55,631][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/blkid -c /dev/null -p /dev/sdd
> > > [2021-11-12 21:23:55,636][ceph_volume.process][INFO  ] stdout /dev/sdd:
> > > PTUUID="9a4155cb" PTTYPE="dos"
> > > [2021-11-12 21:23:55,636][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o
> > > vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdd
> > > [2021-11-12 21:23:55,689][ceph_volume.process][INFO  ] stderr Cannot use
> > > /dev/sdd: device is partitioned
> > > [2021-11-12 21:23:55,690][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o
> > > vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdd2
> > > [2021-11-12 21:23:55,750][ceph_volume.process][INFO  ] stderr Cannot use
> > > /dev/sdd2: device is too small (pv_min_size)
> > > [2021-11-12 21:23:55,750][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o
> > > vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdd5
> > > [2021-11-12 21:23:55,818][ceph_volume.process][INFO  ] stderr Failed to find
> > > physical volume "/dev/sdd5".
> > > [2021-11-12 21:23:55,818][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o
> > > vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdd1
> > > [2021-11-12 21:23:55,882][ceph_volume.process][INFO  ] stderr Failed to find
> > > physical volume "/dev/sdd1".
> > > [2021-11-12 21:23:55,883][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b
> > > --nosuffix -S lv_path=/dev/sdd2 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
> > > [2021-11-12 21:23:55,958][ceph_volume.process][INFO  ] Running command:
> > > /usr/bin/lsblk --nodeps -P -o
> > > NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdd2
> > > [2021-11-12 21:23:55,966][ceph_volume.process][INFO  ] stdout NAME="sdd2"
> > > KNAME="sdd2" MAJ:MIN="8:50" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0"
> > > RM="0" MODEL="" SIZE="1K" STATE="" OWNER="root" GROUP="disk"
> > > MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0"
> > > SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G"
> > > DISC-ZERO="0" PKNAME="sdd" PARTLABEL=""
> > > [2021-11-12 21:23:55,967][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/blkid -c /dev/null -p /dev/sdd2
> > > [2021-11-12 21:23:55,971][ceph_volume.process][INFO  ] stdout /dev/sdd2:
> > > PTUUID="4808244c" PTTYPE="dos" PART_ENTRY_SCHEME="dos"
> > > PART_ENTRY_UUID="9a4155cb-02" PART_ENTRY_TYPE="0x5" PART_ENTRY_NUMBER="2"
> > > PART_ENTRY_OFFSET="1001470" PART_ENTRY_SIZE="467859458" PART_ENTRY_DISK="8:48"
> > > [2021-11-12 21:23:55,973][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o
> > > vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdd2
> > > [2021-11-12 21:23:56,029][ceph_volume.process][INFO  ] stderr Cannot use
> > > /dev/sdd2: device is too small (pv_min_size)
> > > [2021-11-12 21:23:56,030][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd2 to check for BlueStore label
> > > [2021-11-12 21:23:56,031][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd to check for BlueStore label
> > > [2021-11-12 21:23:56,031][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd2 to check for BlueStore label
> > > [2021-11-12 21:23:56,031][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd to check for BlueStore label
> > > [2021-11-12 21:23:56,032][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/udevadm info --query=property /dev/sdd2
> > > [2021-11-12 21:23:56,042][ceph_volume.process][INFO  ] stdout
> > > DEVLINKS=/dev/disk/by-path/pci-0000:00:1f.2-ata-5.0-part2
> > > /dev/disk/by-id/ata-CT240BX500SSD1_1944E3D4E7BB-part2
> > > /dev/disk/by-partuuid/9a4155cb-02
> > > /dev/disk/by-path/pci-0000:00:1f.2-ata-5-part2
> > > [2021-11-12 21:23:56,043][ceph_volume.process][INFO  ] stdout
> > > DEVNAME=/dev/sdd2
> > > [2021-11-12 21:23:56,043][ceph_volume.process][INFO  ] stdout
> > > DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata5/host4/target4:0:0/4:0:0:0/block/sdd/sdd2
> > > [2021-11-12 21:23:56,043][ceph_volume.process][INFO  ] stdout
> > > DEVTYPE=partition
> > > [2021-11-12 21:23:56,043][ceph_volume.process][INFO  ] stdout ID_ATA=1
> > > [2021-11-12 21:23:56,043][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_DOWNLOAD_MICROCODE=1
> > > [2021-11-12 21:23:56,043][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM=1
> > > [2021-11-12 21:23:56,043][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_CURRENT_VALUE=254
> > > [2021-11-12 21:23:56,043][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_ENABLED=1
> > > [2021-11-12 21:23:56,043][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA=1
> > > [2021-11-12 21:23:56,044][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA_ENABLED=1
> > > [2021-11-12 21:23:56,044][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM=1
> > > [2021-11-12 21:23:56,044][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM_ENABLED=1
> > > [2021-11-12 21:23:56,044][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY=1
> > > [2021-11-12 21:23:56,045][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
> > > [2021-11-12 21:23:56,045][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=2
> > > [2021-11-12 21:23:56,045][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=2
> > > [2021-11-12 21:23:56,045][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART=1
> > > [2021-11-12 21:23:56,045][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART_ENABLED=1
> > > [2021-11-12 21:23:56,045][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_ROTATION_RATE_RPM=0
> > > [2021-11-12 21:23:56,045][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA=1
> > > [2021-11-12 21:23:56,045][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN1=1
> > > [2021-11-12 21:23:56,045][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN2=1
> > > [2021-11-12 21:23:56,045][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE=1
> > > [2021-11-12 21:23:56,045][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE_ENABLED=1
> > > [2021-11-12 21:23:56,045][ceph_volume.process][INFO  ] stdout
> > > ID_BUS=ata
> > > [2021-11-12 21:23:56,046][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL=CT240BX500SSD1
> > > [2021-11-12 21:23:56,046][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL_ENC=CT240BX500SSD1\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
> > > [2021-11-12 21:23:56,046][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_DISK=8:48
> > > [2021-11-12 21:23:56,046][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_NUMBER=2
> > > [2021-11-12 21:23:56,046][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_OFFSET=1001470
> > > [2021-11-12 21:23:56,046][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_SCHEME=dos
> > > [2021-11-12 21:23:56,046][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_SIZE=467859458
> > > [2021-11-12 21:23:56,046][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_TYPE=0x5
> > > [2021-11-12 21:23:56,046][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_UUID=9a4155cb-02
> > > [2021-11-12 21:23:56,046][ceph_volume.process][INFO  ] stdout
> > > ID_PART_TABLE_TYPE=dos
> > > [2021-11-12 21:23:56,046][ceph_volume.process][INFO  ] stdout
> > > ID_PART_TABLE_UUID=4808244c
> > > [2021-11-12 21:23:56,046][ceph_volume.process][INFO  ] stdout
> > > ID_PATH=pci-0000:00:1f.2-ata-5.0
> > > [2021-11-12 21:23:56,046][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_ATA_COMPAT=pci-0000:00:1f.2-ata-5
> > > [2021-11-12 21:23:56,046][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_TAG=pci-0000_00_1f_2-ata-5_0
> > > [2021-11-12 21:23:56,047][ceph_volume.process][INFO  ] stdout
> > > ID_REVISION=M6CR013
> > > [2021-11-12 21:23:56,047][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL=CT240BX500SSD1_1944E3D4E7BB
> > > [2021-11-12 21:23:56,047][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL_SHORT=1944E3D4E7BB
> > > [2021-11-12 21:23:56,047][ceph_volume.process][INFO  ] stdout
> > > ID_TYPE=disk
> > > [2021-11-12 21:23:56,047][ceph_volume.process][INFO  ] stdout MAJOR=8
> > > [2021-11-12 21:23:56,047][ceph_volume.process][INFO  ] stdout MINOR=50
> > > [2021-11-12 21:23:56,047][ceph_volume.process][INFO  ] stdout PARTN=2
> > > [2021-11-12 21:23:56,047][ceph_volume.process][INFO  ] stdout
> > > SUBSYSTEM=block
> > > [2021-11-12 21:23:56,047][ceph_volume.process][INFO  ] stdout
> > > TAGS=:systemd:
> > > [2021-11-12 21:23:56,047][ceph_volume.process][INFO  ] stdout
> > > USEC_INITIALIZED=4428059
> > > [2021-11-12 21:23:56,048][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b
> > > --nosuffix -S lv_path=/dev/sdd5 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
> > > [2021-11-12 21:23:56,118][ceph_volume.process][INFO  ] Running command:
> > > /usr/bin/lsblk --nodeps -P -o
> > > NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdd5
> > > [2021-11-12 21:23:56,126][ceph_volume.process][INFO  ] stdout
> > > NAME="sdd5"
> > > KNAME="sdd5" MAJ:MIN="8:53" FSTYPE="crypto_LUKS" MOUNTPOINT="" LABEL=""
> > > UUID="fbfc2e93-1c31-469b-80ce-0805c065be6f" RO="0" RM="0" MODEL=""
> > > SIZE="223.1G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----"
> > > ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="mq-deadline"
> > > TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0"
> > > PKNAME="sdd" PARTLABEL=""
> > > [2021-11-12 21:23:56,127][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/blkid -c /dev/null -p /dev/sdd5
> > > [2021-11-12 21:23:56,131][ceph_volume.process][INFO  ] stdout /dev/sdd5:
> > > VERSION="2" UUID="fbfc2e93-1c31-469b-80ce-0805c065be6f" TYPE="crypto_LUKS"
> > > USAGE="crypto" PART_ENTRY_SCHEME="dos" PART_ENTRY_UUID="9a4155cb-05"
> > > PART_ENTRY_TYPE="0x83" PART_ENTRY_NUMBER="5" PART_ENTRY_OFFSET="1001472"
> > > PART_ENTRY_SIZE="467859456" PART_ENTRY_DISK="8:48"
> > > [2021-11-12 21:23:56,132][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o
> > > vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdd5
> > > [2021-11-12 21:23:56,198][ceph_volume.process][INFO  ] stderr Failed to find
> > > physical volume "/dev/sdd5".
> > > [2021-11-12 21:23:56,198][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd5 to check for BlueStore label
> > > [2021-11-12 21:23:56,199][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd to check for BlueStore label
> > > [2021-11-12 21:23:56,199][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd5 to check for BlueStore label
> > > [2021-11-12 21:23:56,199][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd to check for BlueStore label
> > > [2021-11-12 21:23:56,199][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/udevadm info --query=property /dev/sdd5
> > > [2021-11-12 21:23:56,212][ceph_volume.process][INFO  ] stdout
> > > DEVLINKS=/dev/disk/by-uuid/fbfc2e93-1c31-469b-80ce-0805c065be6f
> > > /dev/disk/by-partuuid/9a4155cb-05
> > > /dev/disk/by-path/pci-0000:00:1f.2-ata-5.0-part5
> > > /dev/disk/by-path/pci-0000:00:1f.2-ata-5-part5
> > > /dev/disk/by-id/ata-CT240BX500SSD1_1944E3D4E7BB-part5
> > > [2021-11-12 21:23:56,213][ceph_volume.process][INFO  ] stdout
> > > DEVNAME=/dev/sdd5
> > > [2021-11-12 21:23:56,213][ceph_volume.process][INFO  ] stdout
> > > DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata5/host4/target4:0:0/4:0:0:0/block/sdd/sdd5
> > > [2021-11-12 21:23:56,213][ceph_volume.process][INFO  ] stdout
> > > DEVTYPE=partition
> > > [2021-11-12 21:23:56,213][ceph_volume.process][INFO  ] stdout ID_ATA=1
> > > [2021-11-12 21:23:56,213][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_DOWNLOAD_MICROCODE=1
> > > [2021-11-12 21:23:56,213][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM=1
> > > [2021-11-12 21:23:56,213][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_CURRENT_VALUE=254
> > > [2021-11-12 21:23:56,214][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_ENABLED=1
> > > [2021-11-12 21:23:56,214][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA=1
> > > [2021-11-12 21:23:56,214][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA_ENABLED=1
> > > [2021-11-12 21:23:56,214][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM=1
> > > [2021-11-12 21:23:56,214][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM_ENABLED=1
> > > [2021-11-12 21:23:56,214][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY=1
> > > [2021-11-12 21:23:56,214][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
> > > [2021-11-12 21:23:56,214][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=2
> > > [2021-11-12 21:23:56,214][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=2
> > > [2021-11-12 21:23:56,214][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART=1
> > > [2021-11-12 21:23:56,214][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART_ENABLED=1
> > > [2021-11-12 21:23:56,215][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_ROTATION_RATE_RPM=0
> > > [2021-11-12 21:23:56,215][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA=1
> > > [2021-11-12 21:23:56,215][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN1=1
> > > [2021-11-12 21:23:56,215][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN2=1
> > > [2021-11-12 21:23:56,215][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE=1
> > > [2021-11-12 21:23:56,215][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE_ENABLED=1
> > > [2021-11-12 21:23:56,215][ceph_volume.process][INFO  ] stdout
> > > ID_BUS=ata
> > > [2021-11-12 21:23:56,215][ceph_volume.process][INFO  ] stdout
> > > ID_FS_TYPE=crypto_LUKS
> > > [2021-11-12 21:23:56,215][ceph_volume.process][INFO  ] stdout
> > > ID_FS_USAGE=crypto
> > > [2021-11-12 21:23:56,215][ceph_volume.process][INFO  ] stdout
> > > ID_FS_UUID=fbfc2e93-1c31-469b-80ce-0805c065be6f
> > > [2021-11-12 21:23:56,215][ceph_volume.process][INFO  ] stdout
> > > ID_FS_UUID_ENC=fbfc2e93-1c31-469b-80ce-0805c065be6f
> > > [2021-11-12 21:23:56,215][ceph_volume.process][INFO  ] stdout
> > > ID_FS_VERSION=2
> > > [2021-11-12 21:23:56,216][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL=CT240BX500SSD1
> > > [2021-11-12 21:23:56,216][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL_ENC=CT240BX500SSD1\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
> > > [2021-11-12 21:23:56,216][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_DISK=8:48
> > > [2021-11-12 21:23:56,216][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_NUMBER=5
> > > [2021-11-12 21:23:56,216][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_OFFSET=1001472
> > > [2021-11-12 21:23:56,216][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_SCHEME=dos
> > > [2021-11-12 21:23:56,216][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_SIZE=467859456
> > > [2021-11-12 21:23:56,216][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_TYPE=0x83
> > > [2021-11-12 21:23:56,216][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_UUID=9a4155cb-05
> > > [2021-11-12 21:23:56,216][ceph_volume.process][INFO  ] stdout
> > > ID_PART_TABLE_TYPE=dos
> > > [2021-11-12 21:23:56,216][ceph_volume.process][INFO  ] stdout
> > > ID_PART_TABLE_UUID=9a4155cb
> > > [2021-11-12 21:23:56,217][ceph_volume.process][INFO  ] stdout
> > > ID_PATH=pci-0000:00:1f.2-ata-5.0
> > > [2021-11-12 21:23:56,217][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_ATA_COMPAT=pci-0000:00:1f.2-ata-5
> > > [2021-11-12 21:23:56,217][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_TAG=pci-0000_00_1f_2-ata-5_0
> > > [2021-11-12 21:23:56,217][ceph_volume.process][INFO  ] stdout
> > > ID_REVISION=M6CR013
> > > [2021-11-12 21:23:56,217][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL=CT240BX500SSD1_1944E3D4E7BB
> > > [2021-11-12 21:23:56,217][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL_SHORT=1944E3D4E7BB
> > > [2021-11-12 21:23:56,217][ceph_volume.process][INFO  ] stdout
> > > ID_TYPE=disk
> > > [2021-11-12 21:23:56,217][ceph_volume.process][INFO  ] stdout MAJOR=8
> > > [2021-11-12 21:23:56,217][ceph_volume.process][INFO  ] stdout MINOR=53
> > > [2021-11-12 21:23:56,217][ceph_volume.process][INFO  ] stdout PARTN=5
> > > [2021-11-12 21:23:56,217][ceph_volume.process][INFO  ] stdout
> > > SUBSYSTEM=block
> > > [2021-11-12 21:23:56,217][ceph_volume.process][INFO  ] stdout
> > > TAGS=:systemd:
> > > [2021-11-12 21:23:56,218][ceph_volume.process][INFO  ] stdout
> > > USEC_INITIALIZED=4429765
> > > [2021-11-12 21:23:56,218][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b
> > > --nosuffix -S lv_path=/dev/sdd1 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
> > > [2021-11-12 21:23:56,282][ceph_volume.process][INFO  ] Running command:
> > > /usr/bin/lsblk --nodeps -P -o
> > > NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdd1
> > > [2021-11-12 21:23:56,288][ceph_volume.process][INFO  ] stdout NAME="sdd1"
> > > KNAME="sdd1" MAJ:MIN="8:49" FSTYPE="ext2" MOUNTPOINT="" LABEL=""
> > > UUID="47c8d1ee-1c50-4af6-8fd5-001583a6f71f" RO="0" RM="0" MODEL=""
> > > SIZE="487M" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----"
> > > ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="mq-deadline"
> > > TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0"
> > > PKNAME="sdd" PARTLABEL=""
> > > [2021-11-12 21:23:56,289][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/blkid -c /dev/null -p /dev/sdd1
> > > [2021-11-12 21:23:56,293][ceph_volume.process][INFO  ] stdout /dev/sdd1:
> > > UUID="47c8d1ee-1c50-4af6-8fd5-001583a6f71f" VERSION="1.0" BLOCK_SIZE="1024"
> > > TYPE="ext2" USAGE="filesystem" PART_ENTRY_SCHEME="dos"
> > > PART_ENTRY_UUID="9a4155cb-01" PART_ENTRY_TYPE="0x83" PART_ENTRY_FLAGS="0x80"
> > > PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048" PART_ENTRY_SIZE="997376"
> > > PART_ENTRY_DISK="8:48"
> > > [2021-11-12 21:23:56,294][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o
> > > vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdd1
> > > [2021-11-12 21:23:56,350][ceph_volume.process][INFO  ] stderr Failed to find
> > > physical volume "/dev/sdd1".
> > > [2021-11-12 21:23:56,350][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd1 to check for BlueStore label
> > > [2021-11-12 21:23:56,350][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd to check for BlueStore label
> > > [2021-11-12 21:23:56,351][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd1 to check for BlueStore label
> > > [2021-11-12 21:23:56,351][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd to check for BlueStore label
> > > [2021-11-12 21:23:56,351][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/udevadm info --query=property /dev/sdd1
> > > [2021-11-12 21:23:56,364][ceph_volume.process][INFO  ] stdout
> > > DEVLINKS=/dev/disk/by-partuuid/9a4155cb-01
> > > /dev/disk/by-id/ata-CT240BX500SSD1_1944E3D4E7BB-part1
> > > /dev/disk/by-path/pci-0000:00:1f.2-ata-5.0-part1
> > > /dev/disk/by-uuid/47c8d1ee-1c50-4af6-8fd5-001583a6f71f
> > > /dev/disk/by-path/pci-0000:00:1f.2-ata-5-part1
> > > [2021-11-12 21:23:56,364][ceph_volume.process][INFO  ] stdout
> > > DEVNAME=/dev/sdd1
> > > [2021-11-12 21:23:56,364][ceph_volume.process][INFO  ] stdout
> > > DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata5/host4/target4:0:0/4:0:0:0/block/sdd/sdd1
> > > [2021-11-12 21:23:56,364][ceph_volume.process][INFO  ] stdout
> > > DEVTYPE=partition
> > > [2021-11-12 21:23:56,364][ceph_volume.process][INFO  ] stdout ID_ATA=1
> > > [2021-11-12 21:23:56,365][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_DOWNLOAD_MICROCODE=1
> > > [2021-11-12 21:23:56,365][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM=1
> > > [2021-11-12 21:23:56,365][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_CURRENT_VALUE=254
> > > [2021-11-12 21:23:56,365][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_ENABLED=1
> > > [2021-11-12 21:23:56,365][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA=1
> > > [2021-11-12 21:23:56,365][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA_ENABLED=1
> > > [2021-11-12 21:23:56,365][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM=1
> > > [2021-11-12 21:23:56,365][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM_ENABLED=1
> > > [2021-11-12 21:23:56,365][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY=1
> > > [2021-11-12 21:23:56,365][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
> > > [2021-11-12 21:23:56,365][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=2
> > > [2021-11-12 21:23:56,366][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=2
> > > [2021-11-12 21:23:56,366][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART=1
> > > [2021-11-12 21:23:56,366][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART_ENABLED=1
> > > [2021-11-12 21:23:56,366][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_ROTATION_RATE_RPM=0
> > > [2021-11-12 21:23:56,366][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA=1
> > > [2021-11-12 21:23:56,366][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN1=1
> > > [2021-11-12 21:23:56,366][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN2=1
> > > [2021-11-12 21:23:56,366][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE=1
> > > [2021-11-12 21:23:56,366][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE_ENABLED=1
> > > [2021-11-12 21:23:56,366][ceph_volume.process][INFO  ] stdout
> > > ID_BUS=ata
> > > [2021-11-12 21:23:56,366][ceph_volume.process][INFO  ] stdout
> > > ID_FS_TYPE=ext2
> > > [2021-11-12 21:23:56,366][ceph_volume.process][INFO  ] stdout
> > > ID_FS_USAGE=filesystem
> > > [2021-11-12 21:23:56,367][ceph_volume.process][INFO  ] stdout
> > > ID_FS_UUID=47c8d1ee-1c50-4af6-8fd5-001583a6f71f
> > > [2021-11-12 21:23:56,367][ceph_volume.process][INFO  ] stdout
> > > ID_FS_UUID_ENC=47c8d1ee-1c50-4af6-8fd5-001583a6f71f
> > > [2021-11-12 21:23:56,367][ceph_volume.process][INFO  ] stdout
> > > ID_FS_VERSION=1.0
> > > [2021-11-12 21:23:56,367][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL=CT240BX500SSD1
> > > [2021-11-12 21:23:56,367][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL_ENC=CT240BX500SSD1\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
> > > [2021-11-12 21:23:56,367][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_DISK=8:48
> > > [2021-11-12 21:23:56,367][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_FLAGS=0x80
> > > [2021-11-12 21:23:56,367][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_NUMBER=1
> > > [2021-11-12 21:23:56,367][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_OFFSET=2048
> > > [2021-11-12 21:23:56,367][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_SCHEME=dos
> > > [2021-11-12 21:23:56,367][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_SIZE=997376
> > > [2021-11-12 21:23:56,367][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_TYPE=0x83
> > > [2021-11-12 21:23:56,368][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_UUID=9a4155cb-01
> > > [2021-11-12 21:23:56,368][ceph_volume.process][INFO  ] stdout
> > > ID_PART_TABLE_TYPE=dos
> > > [2021-11-12 21:23:56,368][ceph_volume.process][INFO  ] stdout
> > > ID_PART_TABLE_UUID=9a4155cb
> > > [2021-11-12 21:23:56,368][ceph_volume.process][INFO  ] stdout
> > > ID_PATH=pci-0000:00:1f.2-ata-5.0
> > > [2021-11-12 21:23:56,368][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_ATA_COMPAT=pci-0000:00:1f.2-ata-5
> > > [2021-11-12 21:23:56,368][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_TAG=pci-0000_00_1f_2-ata-5_0
> > > [2021-11-12 21:23:56,368][ceph_volume.process][INFO  ] stdout
> > > ID_REVISION=M6CR013
> > > [2021-11-12 21:23:56,368][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL=CT240BX500SSD1_1944E3D4E7BB
> > > [2021-11-12 21:23:56,368][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL_SHORT=1944E3D4E7BB
> > > [2021-11-12 21:23:56,368][ceph_volume.process][INFO  ] stdout
> > > ID_TYPE=disk
> > > [2021-11-12 21:23:56,368][ceph_volume.process][INFO  ] stdout MAJOR=8
> > > [2021-11-12 21:23:56,369][ceph_volume.process][INFO  ] stdout MINOR=49
> > > [2021-11-12 21:23:56,369][ceph_volume.process][INFO  ] stdout PARTN=1
> > > [2021-11-12 21:23:56,369][ceph_volume.process][INFO  ] stdout
> > > SUBSYSTEM=block
> > > [2021-11-12 21:23:56,369][ceph_volume.process][INFO  ] stdout
> > > TAGS=:systemd:
> > > [2021-11-12 21:23:56,369][ceph_volume.process][INFO  ] stdout
> > > USEC_INITIALIZED=4437580
> > > [2021-11-12 21:23:56,369][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd to check for BlueStore label
> > > [2021-11-12 21:23:56,370][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b
> > > --nosuffix -S lv_path=/dev/sdd2 -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
> > > [2021-11-12 21:23:56,426][ceph_volume.process][INFO  ] Running command:
> > > /usr/bin/lsblk --nodeps -P -o
> > > NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdd2
> > > [2021-11-12 21:23:56,434][ceph_volume.process][INFO  ] stdout NAME="sdd2"
> > > KNAME="sdd2" MAJ:MIN="8:50" FSTYPE="" MOUNTPOINT="" LABEL="" UUID="" RO="0"
> > > RM="0" MODEL="" SIZE="1K" STATE="" OWNER="root" GROUP="disk"
> > > MODE="brw-rw----" ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0"
> > > SCHED="mq-deadline" TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G"
> > > DISC-ZERO="0" PKNAME="sdd" PARTLABEL=""
> > > [2021-11-12 21:23:56,435][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/blkid -c /dev/null -p /dev/sdd2
> > > [2021-11-12 21:23:56,439][ceph_volume.process][INFO  ] stdout /dev/sdd2:
> > > PTUUID="4808244c" PTTYPE="dos" PART_ENTRY_SCHEME="dos"
> > > PART_ENTRY_UUID="9a4155cb-02" PART_ENTRY_TYPE="0x5" PART_ENTRY_NUMBER="2"
> > > PART_ENTRY_OFFSET="1001470" PART_ENTRY_SIZE="467859458" PART_ENTRY_DISK="8:48"
> > > [2021-11-12 21:23:56,440][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o
> > > vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdd2
> > > [2021-11-12 21:23:56,501][ceph_volume.process][INFO  ] stderr Cannot use
> > > /dev/sdd2: device is too small (pv_min_size)
> > > [2021-11-12 21:23:56,502][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd2 to check for BlueStore label
> > > [2021-11-12 21:23:56,502][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd to check for BlueStore label
> > > [2021-11-12 21:23:56,503][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd2 to check for BlueStore label
> > > [2021-11-12 21:23:56,503][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd to check for BlueStore label
> > > [2021-11-12 21:23:56,503][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/udevadm info --query=property /dev/sdd2
> > > [2021-11-12 21:23:56,513][ceph_volume.process][INFO  ] stdout
> > > DEVLINKS=/dev/
> 
>  disk/by-partuuid/9a4155cb-02
> 
> > > /dev/disk/by-id/ata-CT240BX500SSD1_1944E3D4E7BB- part2
> > > /dev/disk/by-path/pci-0000:00:1f.2-ata-5.0-part2 /dev/disk/by-path/
> > > pci-0000:00:1f.2-ata-5-part2
> > > [2021-11-12 21:23:56,513][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/sdd2
> > > [2021-11-12 21:23:56,513][ceph_volume.process][INFO  ] stdout
> > > DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata5/host4/target4:0:0/4:0:0:0/block/sdd/sdd2
> > > [2021-11-12 21:23:56,513][ceph_volume.process][INFO  ] stdout DEVTYPE=partition
> > > [2021-11-12 21:23:56,513][ceph_volume.process][INFO  ] stdout ID_ATA=1
> > > [2021-11-12 21:23:56,513][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_DOWNLOAD_MICROCODE=1
> > > [2021-11-12 21:23:56,514][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM=1
> > > [2021-11-12 21:23:56,514][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_CURRENT_VALUE=254
> > > [2021-11-12 21:23:56,514][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_ENABLED=1
> > > [2021-11-12 21:23:56,514][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA=1
> > > [2021-11-12 21:23:56,514][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA_ENABLED=1
> > > [2021-11-12 21:23:56,514][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM=1
> > > [2021-11-12 21:23:56,514][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM_ENABLED=1
> > > [2021-11-12 21:23:56,514][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY=1
> > > [2021-11-12 21:23:56,514][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
> > > [2021-11-12 21:23:56,514][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=2
> > > [2021-11-12 21:23:56,514][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=2
> > > [2021-11-12 21:23:56,515][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART=1
> > > [2021-11-12 21:23:56,515][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART_ENABLED=1
> > > [2021-11-12 21:23:56,515][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_ROTATION_RATE_RPM=0
> > > [2021-11-12 21:23:56,515][ceph_volume.process][INFO  ] stdout ID_ATA_SATA=1
> > > [2021-11-12 21:23:56,515][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN1=1
> > > [2021-11-12 21:23:56,515][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN2=1
> > > [2021-11-12 21:23:56,515][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE=1
> > > [2021-11-12 21:23:56,515][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE_ENABLED=1
> > > [2021-11-12 21:23:56,515][ceph_volume.process][INFO  ] stdout
> > > ID_BUS=ata
> > > [2021-11-12 21:23:56,515][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL=CT240BX500SSD1
> > > [2021-11-12 21:23:56,515][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL_ENC=CT240BX500SSD1\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
> > > [2021-11-12 21:23:56,515][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_DISK=8:48
> > > [2021-11-12 21:23:56,516][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_NUMBER=2
> > > [2021-11-12 21:23:56,516][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_OFFSET=1001470
> > > [2021-11-12 21:23:56,516][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_SCHEME=dos
> > > [2021-11-12 21:23:56,516][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_SIZE=467859458
> > > [2021-11-12 21:23:56,516][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_TYPE=0x5
> > > [2021-11-12 21:23:56,516][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_UUID=9a4155cb-02
> > > [2021-11-12 21:23:56,516][ceph_volume.process][INFO  ] stdout
> > > ID_PART_TABLE_TYPE=dos
> > > [2021-11-12 21:23:56,516][ceph_volume.process][INFO  ] stdout
> > > ID_PART_TABLE_UUID=4808244c
> > > [2021-11-12 21:23:56,516][ceph_volume.process][INFO  ] stdout
> > > ID_PATH=pci-0000:00:1f.2-ata-5.0
> > > [2021-11-12 21:23:56,516][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_ATA_COMPAT=pci-0000:00:1f.2-ata-5
> > > [2021-11-12 21:23:56,516][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_TAG=pci-0000_00_1f_2-ata-5_0
> > > [2021-11-12 21:23:56,517][ceph_volume.process][INFO  ] stdout
> > > ID_REVISION=M6CR013
> > > [2021-11-12 21:23:56,517][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL=CT240BX500SSD1_1944E3D4E7BB
> > > [2021-11-12 21:23:56,517][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL_SHORT=1944E3D4E7BB
> > > [2021-11-12 21:23:56,517][ceph_volume.process][INFO  ] stdout
> > > ID_TYPE=disk
> > > [2021-11-12 21:23:56,517][ceph_volume.process][INFO  ] stdout MAJOR=8
> > > [2021-11-12 21:23:56,517][ceph_volume.process][INFO  ] stdout MINOR=50
> > > [2021-11-12 21:23:56,517][ceph_volume.process][INFO  ] stdout PARTN=2
> > > [2021-11-12 21:23:56,517][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
> > > [2021-11-12 21:23:56,517][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
> > > [2021-11-12 21:23:56,517][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=4428059
> > > [2021-11-12 21:23:56,518][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b
> > > --nosuffix -S lv_path=/dev/sdd5 -o
> > > lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
> > > [2021-11-12 21:23:56,570][ceph_volume.process][INFO  ] Running command:
> > > /usr/bin/lsblk --nodeps -P -o
> > > NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdd5
> > > [2021-11-12 21:23:56,576][ceph_volume.process][INFO  ] stdout NAME="sdd5"
> > > KNAME="sdd5" MAJ:MIN="8:53" FSTYPE="crypto_LUKS" MOUNTPOINT="" LABEL=""
> > > UUID="fbfc2e93-1c31-469b-80ce-0805c065be6f" RO="0" RM="0" MODEL=""
> > > SIZE="223.1G" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----"
> > > ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="mq-deadline"
> > > TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0"
> > > PKNAME="sdd" PARTLABEL=""
> > > [2021-11-12 21:23:56,577][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/blkid -c /dev/null -p /dev/sdd5
> > > [2021-11-12 21:23:56,580][ceph_volume.process][INFO  ] stdout /dev/sdd5:
> > > VERSION="2" UUID="fbfc2e93-1c31-469b-80ce-0805c065be6f" TYPE="crypto_LUKS"
> > > USAGE="crypto" PART_ENTRY_SCHEME="dos" PART_ENTRY_UUID="9a4155cb-05"
> > > PART_ENTRY_TYPE="0x83" PART_ENTRY_NUMBER="5" PART_ENTRY_OFFSET="1001472"
> > > PART_ENTRY_SIZE="467859456" PART_ENTRY_DISK="8:48"
> > > [2021-11-12 21:23:56,581][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o
> > > vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdd5
> > > [2021-11-12 21:23:56,650][ceph_volume.process][INFO  ] stderr Failed to find
> > > physical volume "/dev/sdd5".
> 
> > > [2021-11-12 21:23:56,650][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd5 to check for BlueStore label
> > > [2021-11-12 21:23:56,651][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd to check for BlueStore label
> > > [2021-11-12 21:23:56,651][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd5 to check for BlueStore label
> > > [2021-11-12 21:23:56,651][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd to check for BlueStore label
> > > [2021-11-12 21:23:56,651][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/udevadm info --query=property /dev/sdd5
> > > [2021-11-12 21:23:56,664][ceph_volume.process][INFO  ] stdout
> > > DEVLINKS=/dev/disk/by-path/pci-0000:00:1f.2-ata-5-part5
> > > /dev/disk/by-partuuid/9a4155cb-05
> > > /dev/disk/by-id/ata-CT240BX500SSD1_1944E3D4E7BB-part5
> > > /dev/disk/by-uuid/fbfc2e93-1c31-469b-80ce-0805c065be6f
> > > /dev/disk/by-path/pci-0000:00:1f.2-ata-5.0-part5
> > > [2021-11-12 21:23:56,664][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/sdd5
> 
> > > [2021-11-12 21:23:56,664][ceph_volume.process][INFO  ] stdout
> > > DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata5/host4/target4:0:0/4:0:0:0/block/sdd/sdd5
> > > [2021-11-12 21:23:56,665][ceph_volume.process][INFO  ] stdout DEVTYPE=partition
> > > [2021-11-12 21:23:56,665][ceph_volume.process][INFO  ] stdout ID_ATA=1
> > > [2021-11-12 21:23:56,665][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_DOWNLOAD_MICROCODE=1
> > > [2021-11-12 21:23:56,665][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM=1
> > > [2021-11-12 21:23:56,665][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_CURRENT_VALUE=254
> > > [2021-11-12 21:23:56,665][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_ENABLED=1
> > > [2021-11-12 21:23:56,665][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA=1
> > > [2021-11-12 21:23:56,665][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA_ENABLED=1
> > > [2021-11-12 21:23:56,665][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM=1
> > > [2021-11-12 21:23:56,665][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM_ENABLED=1
> > > [2021-11-12 21:23:56,665][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY=1
> > > [2021-11-12 21:23:56,665][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
> > > [2021-11-12 21:23:56,666][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=2
> > > [2021-11-12 21:23:56,666][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=2
> > > [2021-11-12 21:23:56,666][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART=1
> > > [2021-11-12 21:23:56,666][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART_ENABLED=1
> > > [2021-11-12 21:23:56,666][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_ROTATION_RATE_RPM=0
> > > [2021-11-12 21:23:56,666][ceph_volume.process][INFO  ] stdout ID_ATA_SATA=1
> > > [2021-11-12 21:23:56,666][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN1=1
> > > [2021-11-12 21:23:56,666][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN2=1
> > > [2021-11-12 21:23:56,666][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE=1
> > > [2021-11-12 21:23:56,666][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE_ENABLED=1
> > > [2021-11-12 21:23:56,666][ceph_volume.process][INFO  ] stdout
> > > ID_BUS=ata
> > > [2021-11-12 21:23:56,666][ceph_volume.process][INFO  ] stdout
> > > ID_FS_TYPE=crypto_LUKS
> > > [2021-11-12 21:23:56,667][ceph_volume.process][INFO  ] stdout
> > > ID_FS_USAGE=crypto
> > > [2021-11-12 21:23:56,667][ceph_volume.process][INFO  ] stdout
> > > ID_FS_UUID=fbfc2e93-1c31-469b-80ce-0805c065be6f
> > > [2021-11-12 21:23:56,667][ceph_volume.process][INFO  ] stdout
> > > ID_FS_UUID_ENC=fbfc2e93-1c31-469b-80ce-0805c065be6f
> > > [2021-11-12 21:23:56,667][ceph_volume.process][INFO  ] stdout ID_FS_VERSION=2
> > > [2021-11-12 21:23:56,667][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL=CT240BX500SSD1
> > > [2021-11-12 21:23:56,667][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL_ENC=CT240BX500SSD1\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
> > > [2021-11-12 21:23:56,667][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_DISK=8:48
> > > [2021-11-12 21:23:56,667][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_NUMBER=5
> > > [2021-11-12 21:23:56,667][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_OFFSET=1001472
> > > [2021-11-12 21:23:56,667][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_SCHEME=dos
> > > [2021-11-12 21:23:56,667][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_SIZE=467859456
> > > [2021-11-12 21:23:56,667][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_TYPE=0x83
> > > [2021-11-12 21:23:56,668][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_UUID=9a4155cb-05
> > > [2021-11-12 21:23:56,668][ceph_volume.process][INFO  ] stdout
> > > ID_PART_TABLE_TYPE=dos
> > > [2021-11-12 21:23:56,668][ceph_volume.process][INFO  ] stdout
> > > ID_PART_TABLE_UUID=9a4155cb
> > > [2021-11-12 21:23:56,668][ceph_volume.process][INFO  ] stdout
> > > ID_PATH=pci-0000:00:1f.2-ata-5.0
> > > [2021-11-12 21:23:56,668][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_ATA_COMPAT=pci-0000:00:1f.2-ata-5
> > > [2021-11-12 21:23:56,668][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_TAG=pci-0000_00_1f_2-ata-5_0
> > > [2021-11-12 21:23:56,668][ceph_volume.process][INFO  ] stdout
> > > ID_REVISION=M6CR013
> > > [2021-11-12 21:23:56,668][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL=CT240BX500SSD1_1944E3D4E7BB
> > > [2021-11-12 21:23:56,668][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL_SHORT=1944E3D4E7BB
> > > [2021-11-12 21:23:56,668][ceph_volume.process][INFO  ] stdout
> > > ID_TYPE=disk
> > > [2021-11-12 21:23:56,668][ceph_volume.process][INFO  ] stdout MAJOR=8
> > > [2021-11-12 21:23:56,669][ceph_volume.process][INFO  ] stdout MINOR=53
> > > [2021-11-12 21:23:56,669][ceph_volume.process][INFO  ] stdout PARTN=5
> > > [2021-11-12 21:23:56,669][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
> > > [2021-11-12 21:23:56,669][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
> > > [2021-11-12 21:23:56,669][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=4429765
> > > [2021-11-12 21:23:56,670][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/lvs --noheadings --readonly --separator=";" -a --units=b
> > > --nosuffix -S lv_path=/dev/sdd1 -o
> > > lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
> > > [2021-11-12 21:23:56,730][ceph_volume.process][INFO  ] Running command:
> > > /usr/bin/lsblk --nodeps -P -o
> > > NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL /dev/sdd1
> > > [2021-11-12 21:23:56,738][ceph_volume.process][INFO  ] stdout NAME="sdd1"
> > > KNAME="sdd1" MAJ:MIN="8:49" FSTYPE="ext2" MOUNTPOINT="" LABEL=""
> > > UUID="47c8d1ee-1c50-4af6-8fd5-001583a6f71f" RO="0" RM="0" MODEL=""
> > > SIZE="487M" STATE="" OWNER="root" GROUP="disk" MODE="brw-rw----"
> > > ALIGNMENT="0" PHY-SEC="512" LOG-SEC="512" ROTA="0" SCHED="mq-deadline"
> > > TYPE="part" DISC-ALN="0" DISC-GRAN="512B" DISC-MAX="2G" DISC-ZERO="0"
> > > PKNAME="sdd" PARTLABEL=""
> > > [2021-11-12 21:23:56,738][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/blkid -c /dev/null -p /dev/sdd1
> > > [2021-11-12 21:23:56,744][ceph_volume.process][INFO  ] stdout /dev/sdd1:
> > > UUID="47c8d1ee-1c50-4af6-8fd5-001583a6f71f" VERSION="1.0" BLOCK_SIZE="1024"
> > > TYPE="ext2" USAGE="filesystem" PART_ENTRY_SCHEME="dos"
> > > PART_ENTRY_UUID="9a4155cb-01" PART_ENTRY_TYPE="0x83" PART_ENTRY_FLAGS="0x80"
> > > PART_ENTRY_NUMBER="1" PART_ENTRY_OFFSET="2048"
> > > PART_ENTRY_SIZE="997376" PART_ENTRY_DISK="8:48"
> > > [2021-11-12 21:23:56,745][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=";" -o
> > > vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size /dev/sdd1
> > > [2021-11-12 21:23:56,814][ceph_volume.process][INFO  ] stderr Failed to find
> > > physical volume "/dev/sdd1".
> 
> > > [2021-11-12 21:23:56,814][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd1 to check for BlueStore label
> > > [2021-11-12 21:23:56,815][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd to check for BlueStore label
> > > [2021-11-12 21:23:56,815][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd1 to check for BlueStore label
> > > [2021-11-12 21:23:56,815][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd to check for BlueStore label
> > > [2021-11-12 21:23:56,815][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/udevadm info --query=property /dev/sdd1
> > > [2021-11-12 21:23:56,828][ceph_volume.process][INFO  ] stdout
> > > DEVLINKS=/dev/disk/by-id/ata-CT240BX500SSD1_1944E3D4E7BB-part1
> > > /dev/disk/by-path/pci-0000:00:1f.2-ata-5.0-part1
> > > /dev/disk/by-path/pci-0000:00:1f.2-ata-5-part1
> > > /dev/disk/by-uuid/47c8d1ee-1c50-4af6-8fd5-001583a6f71f
> > > /dev/disk/by-partuuid/9a4155cb-01
> > > [2021-11-12 21:23:56,828][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/sdd1
> 
> > > [2021-11-12 21:23:56,828][ceph_volume.process][INFO  ] stdout
> > > DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata5/host4/target4:0:0/4:0:0:0/block/sdd/sdd1
> > > [2021-11-12 21:23:56,828][ceph_volume.process][INFO  ] stdout DEVTYPE=partition
> > > [2021-11-12 21:23:56,829][ceph_volume.process][INFO  ] stdout ID_ATA=1
> > > [2021-11-12 21:23:56,829][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_DOWNLOAD_MICROCODE=1
> > > [2021-11-12 21:23:56,829][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM=1
> > > [2021-11-12 21:23:56,829][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_CURRENT_VALUE=254
> > > [2021-11-12 21:23:56,829][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_ENABLED=1
> > > [2021-11-12 21:23:56,829][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA=1
> > > [2021-11-12 21:23:56,829][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA_ENABLED=1
> > > [2021-11-12 21:23:56,829][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM=1
> > > [2021-11-12 21:23:56,829][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM_ENABLED=1
> > > [2021-11-12 21:23:56,829][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY=1
> > > [2021-11-12 21:23:56,829][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
> > > [2021-11-12 21:23:56,830][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=2
> > > [2021-11-12 21:23:56,830][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=2
> > > [2021-11-12 21:23:56,830][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART=1
> > > [2021-11-12 21:23:56,830][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART_ENABLED=1
> > > [2021-11-12 21:23:56,830][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_ROTATION_RATE_RPM=0
> > > [2021-11-12 21:23:56,830][ceph_volume.process][INFO  ] stdout ID_ATA_SATA=1
> > > [2021-11-12 21:23:56,830][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN1=1
> > > [2021-11-12 21:23:56,830][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN2=1
> > > [2021-11-12 21:23:56,830][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE=1
> > > [2021-11-12 21:23:56,830][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE_ENABLED=1
> > > [2021-11-12 21:23:56,830][ceph_volume.process][INFO  ] stdout
> > > ID_BUS=ata
> > > [2021-11-12 21:23:56,830][ceph_volume.process][INFO  ] stdout ID_FS_TYPE=ext2
> > > [2021-11-12 21:23:56,831][ceph_volume.process][INFO  ] stdout
> > > ID_FS_USAGE=filesystem
> > > [2021-11-12 21:23:56,831][ceph_volume.process][INFO  ] stdout
> > > ID_FS_UUID=47c8d1ee-1c50-4af6-8fd5-001583a6f71f
> > > [2021-11-12 21:23:56,831][ceph_volume.process][INFO  ] stdout
> > > ID_FS_UUID_ENC=47c8d1ee-1c50-4af6-8fd5-001583a6f71f
> > > [2021-11-12 21:23:56,831][ceph_volume.process][INFO  ] stdout
> > > ID_FS_VERSION=1.0
> > > [2021-11-12 21:23:56,831][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL=CT240BX500SSD1
> > > [2021-11-12 21:23:56,831][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL_ENC=CT240BX500SSD1\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
> > > [2021-11-12 21:23:56,831][ceph_volume.process][INFO  ] stdout ID_PART_ENTRY_DISK=8:48
> > > [2021-11-12 21:23:56,831][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_FLAGS=0x80
> > > [2021-11-12 21:23:56,831][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_NUMBER=1
> > > [2021-11-12 21:23:56,831][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_OFFSET=2048
> > > [2021-11-12 21:23:56,831][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_SCHEME=dos
> > > [2021-11-12 21:23:56,832][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_SIZE=997376
> > > [2021-11-12 21:23:56,832][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_TYPE=0x83
> > > [2021-11-12 21:23:56,832][ceph_volume.process][INFO  ] stdout
> > > ID_PART_ENTRY_UUID=9a4155cb-01
> > > [2021-11-12 21:23:56,832][ceph_volume.process][INFO  ] stdout
> > > ID_PART_TABLE_TYPE=dos
> > > [2021-11-12 21:23:56,832][ceph_volume.process][INFO  ] stdout
> > > ID_PART_TABLE_UUID=9a4155cb
> > > [2021-11-12 21:23:56,832][ceph_volume.process][INFO  ] stdout
> > > ID_PATH=pci-0000:00:1f.2-ata-5.0
> > > [2021-11-12 21:23:56,832][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_ATA_COMPAT=pci-0000:00:1f.2-ata-5
> > > [2021-11-12 21:23:56,832][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_TAG=pci-0000_00_1f_2-ata-5_0
> > > [2021-11-12 21:23:56,832][ceph_volume.process][INFO  ] stdout
> > > ID_REVISION=M6CR013
> > > [2021-11-12 21:23:56,832][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL=CT240BX500SSD1_1944E3D4E7BB
> > > [2021-11-12 21:23:56,832][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL_SHORT=1944E3D4E7BB
> > > [2021-11-12 21:23:56,832][ceph_volume.process][INFO  ] stdout
> > > ID_TYPE=disk
> > > [2021-11-12 21:23:56,833][ceph_volume.process][INFO  ] stdout MAJOR=8
> > > [2021-11-12 21:23:56,833][ceph_volume.process][INFO  ] stdout MINOR=49
> > > [2021-11-12 21:23:56,833][ceph_volume.process][INFO  ] stdout PARTN=1
> > > [2021-11-12 21:23:56,833][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
> > > [2021-11-12 21:23:56,833][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
> > > [2021-11-12 21:23:56,833][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=4437580
> > > [2021-11-12 21:23:56,833][ceph_volume.util.disk][INFO  ] opening device
> > > /dev/sdd to check for BlueStore label
> > > [2021-11-12 21:23:56,834][ceph_volume.process][INFO  ] Running command:
> > > /usr/sbin/udevadm info --query=property /dev/sdd
> > > [2021-11-12 21:23:56,846][ceph_volume.process][INFO  ] stdout
> > > DEVLINKS=/dev/disk/by-path/pci-0000:00:1f.2-ata-5.0
> > > /dev/disk/by-path/pci-0000:00:1f.2-ata-5
> > > /dev/disk/by-id/ata-CT240BX500SSD1_1944E3D4E7BB
> > > [2021-11-12 21:23:56,846][ceph_volume.process][INFO  ] stdout DEVNAME=/dev/sdd
> > > [2021-11-12 21:23:56,847][ceph_volume.process][INFO  ] stdout
> > > DEVPATH=/devices/pci0000:00/0000:00:1f.2/ata5/host4/target4:0:0/4:0:0:0/block/sdd
> > > [2021-11-12 21:23:56,847][ceph_volume.process][INFO  ] stdout DEVTYPE=disk
> > > [2021-11-12 21:23:56,847][ceph_volume.process][INFO  ] stdout ID_ATA=1
> > > [2021-11-12 21:23:56,847][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_DOWNLOAD_MICROCODE=1
> > > [2021-11-12 21:23:56,847][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM=1
> > > [2021-11-12 21:23:56,847][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_CURRENT_VALUE=254
> > > [2021-11-12 21:23:56,847][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_APM_ENABLED=1
> > > [2021-11-12 21:23:56,847][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA=1
> > > [2021-11-12 21:23:56,847][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_HPA_ENABLED=1
> > > [2021-11-12 21:23:56,847][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM=1
> > > [2021-11-12 21:23:56,847][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_PM_ENABLED=1
> > > [2021-11-12 21:23:56,847][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY=1
> > > [2021-11-12 21:23:56,848][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENABLED=0
> > > [2021-11-12 21:23:56,848][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ENHANCED_ERASE_UNIT_MIN=2
> > > [2021-11-12 21:23:56,848][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SECURITY_ERASE_UNIT_MIN=2
> > > [2021-11-12 21:23:56,848][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART=1
> > > [2021-11-12 21:23:56,848][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_FEATURE_SET_SMART_ENABLED=1
> > > [2021-11-12 21:23:56,848][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_ROTATION_RATE_RPM=0
> > > [2021-11-12 21:23:56,848][ceph_volume.process][INFO  ] stdout ID_ATA_SATA=1
> > > [2021-11-12 21:23:56,848][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN1=1
> > > [2021-11-12 21:23:56,848][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_SATA_SIGNAL_RATE_GEN2=1
> > > [2021-11-12 21:23:56,848][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE=1
> > > [2021-11-12 21:23:56,848][ceph_volume.process][INFO  ] stdout
> > > ID_ATA_WRITE_CACHE_ENABLED=1
> > > [2021-11-12 21:23:56,848][ceph_volume.process][INFO  ] stdout
> > > ID_BUS=ata
> > > [2021-11-12 21:23:56,849][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL=CT240BX500SSD1
> > > [2021-11-12 21:23:56,849][ceph_volume.process][INFO  ] stdout
> > > ID_MODEL_ENC=CT240BX500SSD1\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20
> > > [2021-11-12 21:23:56,849][ceph_volume.process][INFO  ] stdout ID_PART_TABLE_TYPE=dos
> > > [2021-11-12 21:23:56,849][ceph_volume.process][INFO  ] stdout
> > > ID_PART_TABLE_UUID=9a4155cb
> > > [2021-11-12 21:23:56,849][ceph_volume.process][INFO  ] stdout
> > > ID_PATH=pci-0000:00:1f.2-ata-5.0
> > > [2021-11-12 21:23:56,849][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_ATA_COMPAT=pci-0000:00:1f.2-ata-5
> > > [2021-11-12 21:23:56,849][ceph_volume.process][INFO  ] stdout
> > > ID_PATH_TAG=pci-0000_00_1f_2-ata-5_0
> > > [2021-11-12 21:23:56,849][ceph_volume.process][INFO  ] stdout
> > > ID_REVISION=M6CR013
> > > [2021-11-12 21:23:56,849][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL=CT240BX500SSD1_1944E3D4E7BB
> > > [2021-11-12 21:23:56,849][ceph_volume.process][INFO  ] stdout
> > > ID_SERIAL_SHORT=1944E3D4E7BB
> > > [2021-11-12 21:23:56,849][ceph_volume.process][INFO  ] stdout
> > > ID_TYPE=disk
> > > [2021-11-12 21:23:56,849][ceph_volume.process][INFO  ] stdout MAJOR=8
> > > [2021-11-12 21:23:56,849][ceph_volume.process][INFO  ] stdout MINOR=48
> > > [2021-11-12 21:23:56,850][ceph_volume.process][INFO  ] stdout SUBSYSTEM=block
> > > [2021-11-12 21:23:56,850][ceph_volume.process][INFO  ] stdout TAGS=:systemd:
> > > [2021-11-12 21:23:56,850][ceph_volume.process][INFO  ] stdout USEC_INITIALIZED=4425899
> > > [2021-11-12 21:23:56,854][ceph_volume.util.system][INFO  ] /dev/sda was not
> > > found as mounted
> > > [2021-11-12 21:23:56,861][ceph_volume.util.system][INFO  ] /dev/sdb was not
> > > found as mounted
> > > [2021-11-12 21:23:56,865][ceph_volume.util.system][INFO  ] /dev/sdc was not
> > > found as mounted
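[Editor's note: the quoted ceph-volume log above repeats one probe sequence per partition: lsblk, then blkid, then pvs, then udevadm. As a reading aid only, the sketch below reconstructs that sequence from the command lines visible in the log. It merely prints the commands instead of executing them, so it needs no root access and no real devices; the helper name probe_cmds is invented for illustration and is not part of ceph-volume.]

```shell
# Hypothetical helper summarizing the per-device probe sequence seen in the
# log above. Flags are copied from the log; nothing here is executed.
probe_cmds() {
    dev="$1"
    # 1. Block-device attributes (size, type, scheduler, discard info)
    echo "/usr/bin/lsblk --nodeps -P -o NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL $dev"
    # 2. Low-level probe of filesystem/partition signatures
    echo "/usr/sbin/blkid -c /dev/null -p $dev"
    # 3. Is the device an LVM physical volume?
    echo "/usr/sbin/pvs --noheadings --readonly --units=b --nosuffix --separator=';' -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size $dev"
    # 4. udev properties (DEVLINKS, ID_* identity fields)
    echo "/usr/sbin/udevadm info --query=property $dev"
}

# Print the probe commands for the partitions appearing in the log
for part in /dev/sdd1 /dev/sdd2 /dev/sdd5; do
    probe_cmds "$part"
done
```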
> > >
> > >
> > >
> > >
> > > On Friday, 12 November 2021 21:13:05 GMT Igor Fedotov wrote:
> > > > Hi Stephen,
> > > >
> > > > it would be nice to see failing OSD startup log...
> > > >
> > > > Thanks,
> > > > Igor
> > > >
> > > > On 11/12/2021 11:37 PM, Stephen J. Thompson wrote:
> > > > > Before shutting down
> > >
> > > _______________________________________________
> > > ceph-users mailing list -- ceph-users@xxxxxxx
> > > To unsubscribe send an email to ceph-users-leave@xxxxxxx




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



