Re: tcmu-runner crashing on 16.2.5

If the tcmu-runner daemon has died, the above logs are expected. So we need to find out what caused the tcmu-runner service to crash.

Xiubo


Thanks for the response, Xiubo. How can I go about figuring out why the tcmu-runner daemon has died? Are there any logs I can pull that will give insight into why it's happening?

-Paul
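
A few places that may be worth checking on the gateway node are sketched below. The unit and container names are taken from the systemctl status output later in this thread; the log paths and the availability of coredumpctl are assumptions and may differ on other deployments.

# Journal for the cephadm iSCSI unit (unit name as it appears in the systemctl output)
journalctl -u ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@iscsi.iscsi.cxcto-c240-j27-04.mwcouk.service --since yesterday

# stdout/stderr of the iSCSI gateway container (container name from the same output)
docker logs ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-iscsi.iscsi.cxcto-c240-j27-04.mwcouk

# tcmu-runner normally writes its own log file (tcmu-runner.log); with cephadm it would
# likely land in the host log directory that the gateway container mounts:
ls -l /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/

# If systemd-coredump is enabled on the host, a crash should leave a core dump behind
coredumpctl list
coredumpctl info tcmu-runner

# Kernel-side TCMU / LIO messages
dmesg -T | grep -iE 'tcmu|iscsi'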






On Aug 25, 2021, at 2:44 PM, Paul Giralt (pgiralt) <pgiralt@xxxxxxxxx> wrote:

Ilya / Xiubo,

The problem just recurred on one server, and I ran the systemctl status command. You can see there are no tcmu-runner processes listed:

[root@cxcto-c240-j27-04 ~]# systemctl status
● cxcto-c240-j27-04.cisco.com
   State: running
    Jobs: 0 queued
  Failed: 0 units
   Since: Wed 2021-08-25 01:26:00 EDT; 13h ago
  CGroup: /
          ├─docker
          │ ├─1c794e4dc591d5cf33318364c27d59dc9106418ca20d484d61cffc9f7168d691
          │ │ ├─6200 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.32 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─6305 /usr/bin/ceph-osd -n osd.32 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─3193fec6cab38d1276667d5ddd9c07365bc0c124841cececf7238b59beefb959
          │ │ ├─6080 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.142 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─6259 /usr/bin/ceph-osd -n osd.142 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─5043c4101899e723565dfe7e3bb3f869e518d1c19f83de64fceed3852c841da6
          │ │ ├─4204 /sbin/docker-init -- /usr/bin/rbd-target-api
          │ │ └─4331 /usr/bin/python3.6 -s /usr/bin/rbd-target-api
          │ ├─f872a81eade4f937e068fe0317681f477ed5c32d6fa1727f5f1558b3e784bcdb
          │ │ ├─6148 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.162 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─6272 /usr/bin/ceph-osd -n osd.162 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─8283765ab59f1aa6f8e55937c222bdd551dc9fc80d0ce7721120dd41c73ae5ba
          │ │ ├─6217 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.100 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─6336 /usr/bin/ceph-osd -n osd.100 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─ea560d46401771e5a3cb2d1934f932dc2b5d96cc23c42556205abbf9bd719b84
          │ │ ├─7236 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.212 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─7422 /usr/bin/ceph-osd -n osd.212 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─061d8b9e71bfda52f5b0a3627bd9da7d4cf0d3950fc29339841d31db2f91a84e
          │ │ ├─7254 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.91 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─7421 /usr/bin/ceph-osd -n osd.91 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─847855f0b5a60758e0e396b7f4d1884474712526fe1be33f05f2dd798269242c
          │ │ ├─7286 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.110 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─7425 /usr/bin/ceph-osd -n osd.110 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─13f8e85206f1fe1da978d72052b00eb5386ec27cfa8137609e40f48474420422
          │ │ ├─6227 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.21 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─6335 /usr/bin/ceph-osd -n osd.21 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─ab34680a337eaa3014bfa4495b716f42eb98481fb27eb625626ca76965ad8ee1
          │ │ ├─7233 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.53 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─7420 /usr/bin/ceph-osd -n osd.53 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─8a4bc8a41f3f3424ca95bd5710492022484ef4f00e97c1ee9d013577c5752f66
          │ │ ├─7217 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.121 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─7345 /usr/bin/ceph-osd -n osd.121 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─cfcb00e54695352640fca07c8ba35fead458926decff0174e71a98b14f2c4cbe
          │ │ ├─7360 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.81 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─7485 /usr/bin/ceph-osd -n osd.81 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─4325c4ddb46f7e9bf60aa68c256b006773a1c85d1cc8774d05a1c90afb78f277
          │ │ ├─6191 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.172 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─6306 /usr/bin/ceph-osd -n osd.172 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─e999424158da7fca407d7eed338b72510f710caa02635ba21e3693ee92f77872
          │ │ ├─7218 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─7344 /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─aaac1b0c09a91d3e76ddf86f6555337535fc147db0e613467b83192461664af9
          │ │ ├─6235 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.61 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─6337 /usr/bin/ceph-osd -n osd.61 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─477965614a61ff9f18b8675fe1bc74b841b24d2d4bcca9b3d05e14610a829f8d
          │ │ ├─6205 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.12 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─6307 /usr/bin/ceph-osd -n osd.12 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─c1706b42230471440b21696f92319059b68f1cd641ce6f08c3f6ecc9e035d781
          │ │ ├─6189 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.202 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─6304 /usr/bin/ceph-osd -n osd.202 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─04e8411b50f51b31267f0a661694667195c0f1efc0e80dfcf86386aa0ca78041
          │ │ ├─7237 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.71 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─7423 /usr/bin/ceph-osd -n osd.71 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─c94dc348fd2a416596a694b0dd58f189353e800fece93a80b13544f4a5ea9912
          │ │ ├─7362 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.42 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─7491 /usr/bin/ceph-osd -n osd.42 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─466d742e022bbc2edbc76c40d8e0c37950d3aa1d0c61e972ba72250ff2ddd848
          │ │ ├─4015 /sbin/docker-init -- /bin/node_exporter --no-collector.timex --web.listen-address=:9100
          │ │ └─4340 /bin/node_exporter --no-collector.timex --web.listen-address=:9100
          │ ├─0676b14c15cfb853adbdc21770368820041215c7e7029f553daf87a727e1dd68
          │ │ ├─6147 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.132 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─6271 /usr/bin/ceph-osd -n osd.132 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─9ebb20b23780637cbc397a831c782d4fdecc6fdd16ef7817f0e1c54ccd01a34c
          │ │ ├─6247 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.182 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─6338 /usr/bin/ceph-osd -n osd.182 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─c19bd235dfd05e111cbe7d19c78cba5a0e94aba6441c5302405e081518e67a6a
          │ │ ├─7262 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.152 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ │ └─7424 /usr/bin/ceph-osd -n osd.152 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │ ├─66019a2f595f9ecd560a5c58c4ef07e795599f9375bdc964125378b4ce804c09
          │ │ ├─19133 /sbin/docker-init -- /usr/bin/ceph-crash -n client.crash.cxcto-c240-j27-04
          │ │ └─19149 /usr/libexec/platform-python -s /usr/bin/ceph-crash -n client.crash.cxcto-c240-j27-04
          │ └─25510a6f503e80bd36500ad7438eb3eb1efa3e8b2d4497c51998bc3bda695673
          │   ├─6076 /sbin/docker-init -- /usr/bin/ceph-osd -n osd.192 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          │   └─6258 /usr/bin/ceph-osd -n osd.192 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
          ├─user.slice
          │ └─user-0.slice
          │   ├─session-6.scope
          │   │ ├─235063 sshd: root [priv]
          │   │ ├─235080 sshd: root@pts/0
          │   │ ├─235081 -bash
          │   │ ├─235765 systemctl status
          │   │ └─235766 less
          │   ├─session-1.scope
          │   │ ├─17919 sshd: root [priv]
          │   │ ├─17941 sshd: root@notty
          │   │ └─17942 python3 -c import sys;exec(eval(sys.stdin.readline()))
          │   └─user@0.service
          │     └─init.scope
          │       ├─17923 /usr/lib/systemd/systemd --user
          │       └─17927 (sd-pam)
          ├─init.scope
          │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17
          └─system.slice
            ├─rngd.service
            │ └─2330 /sbin/rngd -f --fill-watermark=0
            ├─irqbalance.service
            │ └─2319 /usr/sbin/irqbalance --foreground
            ├─containerd.service
            │ ├─ 2410 /usr/bin/containerd
            │ ├─ 3661 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 466d742e022bbc2edbc76c40d8e0c37950d3aa1d0c61e972ba72250ff2ddd848 -address /run/containerd/containerd.sock
            │ ├─ 4052 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 5043c4101899e723565dfe7e3bb3f869e518d1c19f83de64fceed3852c841da6 -address /run/containerd/containerd.sock
            │ ├─ 5830 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 25510a6f503e80bd36500ad7438eb3eb1efa3e8b2d4497c51998bc3bda695673 -address /run/containerd/containerd.sock
            │ ├─ 5844 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 3193fec6cab38d1276667d5ddd9c07365bc0c124841cececf7238b59beefb959 -address /run/containerd/containerd.sock
            │ ├─ 5846 /usr/bin/containerd-shim-runc-v2 -namespace moby -id f872a81eade4f937e068fe0317681f477ed5c32d6fa1727f5f1558b3e784bcdb -address /run/containerd/containerd.sock
            │ ├─ 5899 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 0676b14c15cfb853adbdc21770368820041215c7e7029f553daf87a727e1dd68 -address /run/containerd/containerd.sock
            │ ├─ 5926 /usr/bin/containerd-shim-runc-v2 -namespace moby -id c1706b42230471440b21696f92319059b68f1cd641ce6f08c3f6ecc9e035d781 -address /run/containerd/containerd.sock
            │ ├─ 5965 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4325c4ddb46f7e9bf60aa68c256b006773a1c85d1cc8774d05a1c90afb78f277 -address /run/containerd/containerd.sock
            │ ├─ 5970 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 1c794e4dc591d5cf33318364c27d59dc9106418ca20d484d61cffc9f7168d691 -address /run/containerd/containerd.sock
            │ ├─ 6000 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 8283765ab59f1aa6f8e55937c222bdd551dc9fc80d0ce7721120dd41c73ae5ba -address /run/containerd/containerd.sock
            │ ├─ 6009 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 477965614a61ff9f18b8675fe1bc74b841b24d2d4bcca9b3d05e14610a829f8d -address /run/containerd/containerd.sock
            │ ├─ 6087 /usr/bin/containerd-shim-runc-v2 -namespace moby -id aaac1b0c09a91d3e76ddf86f6555337535fc147db0e613467b83192461664af9 -address /run/containerd/containerd.sock
            │ ├─ 6091 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 13f8e85206f1fe1da978d72052b00eb5386ec27cfa8137609e40f48474420422 -address /run/containerd/containerd.sock
            │ ├─ 6122 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 9ebb20b23780637cbc397a831c782d4fdecc6fdd16ef7817f0e1c54ccd01a34c -address /run/containerd/containerd.sock
            │ ├─ 7014 /usr/bin/containerd-shim-runc-v2 -namespace moby -id e999424158da7fca407d7eed338b72510f710caa02635ba21e3693ee92f77872 -address /run/containerd/containerd.sock
            │ ├─ 7015 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 8a4bc8a41f3f3424ca95bd5710492022484ef4f00e97c1ee9d013577c5752f66 -address /run/containerd/containerd.sock
            │ ├─ 7059 /usr/bin/containerd-shim-runc-v2 -namespace moby -id ab34680a337eaa3014bfa4495b716f42eb98481fb27eb625626ca76965ad8ee1 -address /run/containerd/containerd.sock
            │ ├─ 7079 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 04e8411b50f51b31267f0a661694667195c0f1efc0e80dfcf86386aa0ca78041 -address /run/containerd/containerd.sock
            │ ├─ 7082 /usr/bin/containerd-shim-runc-v2 -namespace moby -id ea560d46401771e5a3cb2d1934f932dc2b5d96cc23c42556205abbf9bd719b84 -address /run/containerd/containerd.sock
            │ ├─ 7127 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 061d8b9e71bfda52f5b0a3627bd9da7d4cf0d3950fc29339841d31db2f91a84e -address /run/containerd/containerd.sock
            │ ├─ 7148 /usr/bin/containerd-shim-runc-v2 -namespace moby -id c19bd235dfd05e111cbe7d19c78cba5a0e94aba6441c5302405e081518e67a6a -address /run/containerd/containerd.sock
            │ ├─ 7193 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 847855f0b5a60758e0e396b7f4d1884474712526fe1be33f05f2dd798269242c -address /run/containerd/containerd.sock
            │ ├─ 7287 /usr/bin/containerd-shim-runc-v2 -namespace moby -id c94dc348fd2a416596a694b0dd58f189353e800fece93a80b13544f4a5ea9912 -address /run/containerd/containerd.sock
            │ ├─ 7290 /usr/bin/containerd-shim-runc-v2 -namespace moby -id cfcb00e54695352640fca07c8ba35fead458926decff0174e71a98b14f2c4cbe -address /run/containerd/containerd.sock
            │ └─19112 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 66019a2f595f9ecd560a5c58c4ef07e795599f9375bdc964125378b4ce804c09 -address /run/containerd/containerd.sock
            ├─systemd-udevd.service
            │ └─1749 /usr/lib/systemd/systemd-udevd
            ├─docker.service
            │ └─2820 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
            ├─polkit.service
            │ └─2327 /usr/lib/polkit-1/polkitd --no-debug
            ├─chronyd.service
            │ └─2338 /usr/sbin/chronyd
            ├─auditd.service
            │ └─2287 /sbin/auditd
            ├─tuned.service
            │ └─2403 /usr/libexec/platform-python -Es /usr/sbin/tuned -l -P
            ├─systemd-journald.service
            │ └─1711 /usr/lib/systemd/systemd-journald
            ├─sshd.service
            │ └─2393 /usr/sbin/sshd -D -oCiphers=aes256-gcm@xxxxxxxxxxx,chacha20-poly1305@xxxxxxxxxxx,aes256-ctr,aes256-cbc,aes128-gcm@xxxxxxxxxxx,aes128-ctr,aes128-cbc -oMACs=hmac-sha2-256-etm@xxxxxxxxxxx,hmac-sha1-etm@xxxxxxxxxxx,umac-128-etm@xxxxxxxxxxx,hmac-sha2-512-etm@xxxxxxxxxxx,hmac-sha2-256,hmac-sha1,umac-128@xxxxxxxxxxx,hmac-sha2-512 -oGSSAPIKexAlgorithms=gss-curve25519-sha256-,gss-nistp256-sha256-,gss-group14-sha256-,gss-group16-sha512-,gss-gex-sha1-,gss-group14-sha1- -oKexAlgorithms=curve25519-sha256,curve25519-sha256@xxxxxxxxxx,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1 -oHostKeyAlgorithms=ecdsa-sha2-nistp256,ecdsa-sha2-nistp256-cert-v01@xxxxxxxxxxx,ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@xxxxxxxxxxx,ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@xxxxxxxxxxx,ssh-ed25519,ssh-ed25519-cert-v01@xxxxxxxxxxx,rsa-sha2-256,rsa-sha2-256-cert-v01@xxxxxxxxxxx,rsa-sha2-512,rsa-sha2-512-cert-v01@xxxxxxxxxxx,ssh-rsa,ssh-rsa-cert-v01@xxxxxxxxxxx -oPubkeyAcceptedKeyTypes=ecdsa-sha2-nistp256,ecdsa-sha2-nistp256-cert-v01@xxxxxxxxxxx,ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@xxxxxxxxxxx,ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@xxxxxxxxxxx,ssh-ed25519,ssh-ed25519-cert-v01@xxxxxxxxxxx,rsa-sha2-256,rsa-sha2-256-cert-v01@xxxxxxxxxxx,rsa-sha2-512,rsa-sha2-512-cert-v01@xxxxxxxxxxx,ssh-rsa,ssh-rsa-cert-v01@xxxxxxxxxxx -oCASignatureAlgorithms=ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,rsa-sha2-256,rsa-sha2-512,ssh-rsa
            ├─crond.service
            │ └─2425 /usr/sbin/crond -n
            ├─NetworkManager.service
            │ └─2325 /usr/sbin/NetworkManager --no-daemon
            ├─rsyslog.service
            │ └─2821 /usr/sbin/rsyslogd -n
            ├─sssd.service
            │ ├─2317 /usr/sbin/sssd -i --logger=files
            │ ├─2392 /usr/libexec/sssd/sssd_be --domain implicit_files --uid 0 --gid 0 --logger=files
            │ └─2411 /usr/libexec/sssd/sssd_nss --uid 0 --gid 0 --logger=files
            ├─system-ceph\x2d4a29e724\x2dc4a6\x2d11eb\x2db14a\x2d5c838f8013a5.slice
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.21.service
            │ │ ├─3026 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.21/unit.run
            │ │ └─5491 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.21 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.21:/var/lib/ceph/osd/ceph-21:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.21/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.21 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.42.service
            │ │ ├─3041 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.42/unit.run
            │ │ └─6969 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.42 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.42:/var/lib/ceph/osd/ceph-42:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.42/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.42 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.172.service
            │ │ ├─3020 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.172/unit.run
            │ │ └─5485 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.172 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.172:/var/lib/ceph/osd/ceph-172:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.172/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.172 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.71.service
            │ │ ├─3072 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.71/unit.run
            │ │ └─6822 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.71 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.71:/var/lib/ceph/osd/ceph-71:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.71/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.71 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.212.service
            │ │ ├─3035 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.212/unit.run
            │ │ └─6799 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.212 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.212:/var/lib/ceph/osd/ceph-212:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.212/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.212 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.132.service
            │ │ ├─3043 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.132/unit.run
            │ │ └─5389 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.132 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.132:/var/lib/ceph/osd/ceph-132:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.132/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.132 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@node-exporter.cxcto-c240-j27-04.service
            │ │ ├─3059 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/node-exporter.cxcto-c240-j27-04/unit.run
            │ │ └─3353 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-node-exporter.cxcto-c240-j27-04 --user 65534 -e CONTAINER_IMAGE=docker.io/prom/node-exporter:v0.18.1<http://docker.io/prom/node-exporter:v0.18.1> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /proc:/host/proc:ro -v /sys:/host/sys:ro -v /:/rootfs:ro docker.io/prom/node-exporter:v0.18.1<http://docker.io/prom/node-exporter:v0.18.1> --no-collector.timex --web.listen-address=:9100
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.182.service
            │ │ ├─3016 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.182/unit.run
            │ │ └─5579 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.182 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.182:/var/lib/ceph/osd/ceph-182:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.182/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.182 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@iscsi.iscsi.cxcto-c240-j27-04.mwcouk.service
            │ │ ├─3018 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/iscsi.iscsi.cxcto-c240-j27-04.mwcouk/unit.run
            │ │ └─3582 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/rbd-target-api --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-iscsi.iscsi.cxcto-c240-j27-04.mwcouk -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/iscsi.iscsi.cxcto-c240-j27-04.mwcouk/config:/etc/ceph/ceph.conf:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/iscsi.iscsi.cxcto-c240-j27-04.mwcouk/keyring:/etc/ceph/keyring:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/iscsi.iscsi.cxcto-c240-j27-04.mwcouk/iscsi-gateway.cfg:/etc/ceph/iscsi-gateway.cfg:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/iscsi.iscsi.cxcto-c240-j27-04.mwcouk/configfs:/sys/kernel/config -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/rbd-target-api:z -v /dev:/dev --mount type=bind,source=/lib/modules,destination=/lib/modules,ro=true docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb>
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.81.service
            │ │ ├─3023 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.81/unit.run
            │ │ └─6968 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.81 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.81:/var/lib/ceph/osd/ceph-81:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.81/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.81 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.12.service
            │ │ ├─3054 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.12/unit.run
            │ │ └─5566 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.12 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.12:/var/lib/ceph/osd/ceph-12:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.12/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.12 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.100.service
            │ │ ├─3045 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.100/unit.run
            │ │ └─5507 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.100 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.100:/var/lib/ceph/osd/ceph-100:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.100/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.100 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.121.service
            │ │ ├─3070 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.121/unit.run
            │ │ └─6788 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.121 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.121:/var/lib/ceph/osd/ceph-121:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.121/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.121 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.142.service
            │ │ ├─3081 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.142/unit.run
            │ │ └─5200 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.142 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.142:/var/lib/ceph/osd/ceph-142:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.142/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.142 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.192.service
            │ │ ├─3029 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.192/unit.run
            │ │ └─5244 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.192 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.192:/var/lib/ceph/osd/ceph-192:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.192/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.192 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.91.service
            │ │ ├─3078 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.91/unit.run
            │ │ └─6823 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.91 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.91:/var/lib/ceph/osd/ceph-91:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.91/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.91 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.110.service
            │ │ ├─3017 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.110/unit.run
            │ │ └─6901 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.110 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.110:/var/lib/ceph/osd/ceph-110:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.110/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.110 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.152.service
            │ │ ├─3037 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.152/unit.run
            │ │ └─6853 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.152 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.152:/var/lib/ceph/osd/ceph-152:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.152/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.152 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@crash.cxcto-c240-j27-04.service
            │ │ ├─19060 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash.cxcto-c240-j27-04/unit.run
            │ │ └─19091 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-crash --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-crash.cxcto-c240-j27-04 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash.cxcto-c240-j27-04/config:/etc/ceph/ceph.conf:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash.cxcto-c240-j27-04/keyring:/etc/ceph/ceph.client.crash.cxcto-c240-j27-04.keyring docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n client.crash.cxcto-c240-j27-04
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.2.service
            │ │ ├─3050 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.2/unit.run
            │ │ └─6785 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.2 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.2:/var/lib/ceph/osd/ceph-2:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.2/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.32.service
            │ │ ├─3076 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.32/unit.run
            │ │ └─5395 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.32 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -e NODE_NAME=cxcto-c240-j27-04.cisco.com<http://cxcto-c240-j27-04.cisco.com/> -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.32:/var/lib/ceph/osd/ceph-32:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.32/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb<http://docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb> -n osd.32 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.53.service
            │ │ ├─3067 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.53/unit.run
            │ │ └─6800 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.53 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb -e NODE_NAME=cxcto-c240-j27-04.cisco.com -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.53:/var/lib/ceph/osd/ceph-53:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.53/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb -n osd.53 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.162.service
            │ │ ├─3042 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.162/unit.run
            │ │ └─5221 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.162 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb -e NODE_NAME=cxcto-c240-j27-04.cisco.com -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.162:/var/lib/ceph/osd/ceph-162:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.162/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb -n osd.162 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ ├─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.61.service
            │ │ ├─3084 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.61/unit.run
            │ │ └─5580 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.61 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb -e NODE_NAME=cxcto-c240-j27-04.cisco.com -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.61:/var/lib/ceph/osd/ceph-61:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.61/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb -n osd.61 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            │ └─ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@osd.202.service
            │   ├─3019 /bin/bash /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.202/unit.run
            │   └─5452 /bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph-osd --privileged --group-add=disk --init --name ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.202 -e CONTAINER_IMAGE=docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb -e NODE_NAME=cxcto-c240-j27-04.cisco.com -e CEPH_USE_RANDOM_NONCE=1 -e TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 -v /var/run/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/run/ceph:z -v /var/log/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5:/var/log/ceph:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/crash:/var/lib/ceph/crash:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.202:/var/lib/ceph/osd/ceph-202:z -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/osd.202/config:/etc/ceph/ceph.conf:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /var/lib/ceph/4a29e724-c4a6-11eb-b14a-5c838f8013a5/selinux:/sys/fs/selinux:ro docker.io/ceph/ceph@sha256:829ebf54704f2d827de00913b171e5da741aad9b53c1f35ad59251524790eceb -n osd.202 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug
            ├─dbus.service
            │ └─2321 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
            ├─system-getty.slice
            │ └─getty@tty1.service
            │   └─2432 /sbin/agetty -o -p -- \u --noclear tty1 linux
            └─systemd-logind.service
              └─2421 /usr/lib/systemd/systemd-logind
[root@cxcto-c240-j27-04 ~]#
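Side note: rather than dumping the whole tree, I'm assuming the iSCSI daemon gets its own cephadm unit named like the osd.* units above, using the daemon name that shows up in docker ps further down. If that assumption is right, something along these lines should show its state even after it has failed:

# List the cephadm-managed iSCSI unit(s), including inactive/failed ones
systemctl list-units --all 'ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@iscsi*'

# Status of the specific unit (name is my guess based on the naming above)
systemctl status ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@iscsi.iscsi.cxcto-c240-j27-04.mwcouk.service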






The service also no longer shows up in docker ps:

[root@cxcto-c240-j27-04 ~]# docker ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED        STATUS        PORTS     NAMES
66019a2f595f   ceph/ceph                    "/usr/bin/ceph-crash…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-crash.cxcto-c240-j27-04
cfcb00e54695   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.81
c94dc348fd2a   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.42
847855f0b5a6   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.110
c19bd235dfd0   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.152
061d8b9e71bf   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.91
04e8411b50f5   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.71
ea560d464017   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.212
ab34680a337e   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.53
8a4bc8a41f3f   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.121
e999424158da   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.2
aaac1b0c09a9   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.61
477965614a61   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.12
9ebb20b23780   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.182
8283765ab59f   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.100
13f8e85206f1   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.21
4325c4ddb46f   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.172
c1706b422304   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.202
1c794e4dc591   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.32
0676b14c15cf   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.132
f872a81eade4   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.162
3193fec6cab3   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.142
25510a6f503e   ceph/ceph                    "/usr/bin/ceph-osd -…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-osd.192
5043c4101899   ceph/ceph                    "/usr/bin/rbd-target…"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-iscsi.iscsi.cxcto-c240-j27-04.mwcouk
466d742e022b   prom/node-exporter:v0.18.1   "/bin/node_exporter …"   13 hours ago   Up 13 hours             ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-node-exporter.cxcto-c240-j27-04
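One more data point I can try to collect: since the OSD containers above are started with --rm, I'm assuming the iSCSI container is too, so it would be removed the moment it exits. Still, listing stopped containers at least confirms whether anything is left behind:

# Look for a stopped iSCSI container (may come back empty if it was run with --rm)
docker ps -a --filter name=iscsi
docker ps -a --filter status=exited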


Is there a way to see why the service is dying? Some kind of stack trace or other debug output? As far as I can tell, the tcmu-runner.log file only exists inside the container, so when the container dies, the log is gone with it.
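In the meantime, a few things I'm assuming might preserve something across a crash. The unit name is the same guess as above, the container name is taken from the docker ps output, and /var/log/tcmu-runner.log is just the default log path as far as I know:

# The cephadm unit runs docker in the foreground, so anything the container
# prints to stderr should end up in the journal and survive the container's removal
journalctl -u ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5@iscsi.iscsi.cxcto-c240-j27-04.mwcouk.service --since "1 hour ago"

# If the host has systemd-coredump set up, a crashing tcmu-runner process
# inside the container may still leave a core dump visible on the host
coredumpctl list tcmu-runner
coredumpctl info tcmu-runner

# Periodically copy the log out of the running container so a copy survives the next crash
docker cp ceph-4a29e724-c4a6-11eb-b14a-5c838f8013a5-iscsi.iscsi.cxcto-c240-j27-04.mwcouk:/var/log/tcmu-runner.log /root/tcmu-runner.log.$(date +%F-%H%M)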

Any ideas are greatly appreciated.

-Paul

On Aug 25, 2021, at 11:24 AM, Paul Giralt (pgiralt) <pgiralt@xxxxxxxxx> wrote:



Does the node hang while shutting down or does it lock up so that you
can't even issue the reboot command?


It hangs while shutting down. I can SSH in and issue commands just fine, and it accepts the shutdown command and kicks me out, but it never actually finishes shutting down: I can still ping the server until I power-cycle it.


The first place to look at is dmesg and "systemctl status".  cephadm
wraps the services into systemd units so there should be a record of
it terminating there.  If tcmu-runner is indeed crashing, Xiubo (CCed)
might be able to help with debugging.

Thank you for the pointer. I’ll look at this next time it happens and send what I see.
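Concretely, next time it happens I was thinking of starting with something like this. It's just dmesg/journal filtering, nothing cephadm-specific, so the only assumption is that the daemon is named tcmu-runner and uses the kernel's target_core_user module:

# Kernel-side evidence of the crash: OOM kills, segfaults, LIO/TCMU errors
dmesg -T | grep -Ei 'tcmu|target_core|oom|segfault' | tail -n 100

# Same thing from the journal, restricted to kernel messages from this boot
journalctl -k -b | grep -Ei 'tcmu|oom|segfault' | tail -n 100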

-Paul

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
