Re: cephfs-snapshots causing mds failover, hangs

On Tue, Aug 20, 2019 at 9:43 PM thoralf schulze <t.schulze@xxxxxxxxxxxx> wrote:
>
> hi there,
>
> we are struggling with the creation of cephfs snapshots: doing so
> reproducibly causes a failover of our metadata servers. afterwards, the
> demoted mds servers won't be available as standby servers, and the mds
> daemons on these machines have to be restarted manually. more often than
> we'd like, the failover fails altogether, resulting in an unresponsive cephfs.
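>
> (for reference, a cephfs snapshot is created by a mkdir inside the
> hidden .snap directory of the directory to be snapshotted - the path
> below is only an example:
>
>   mkdir /mnt/cephfs/some-dir/.snap/snap-2019-08-20
>
> removing the snapshot again is an rmdir on the same path.)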
>


Please enable MDS debug logging (debug_mds = 10) and try reproducing it again.
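
For example, one way to do this at runtime (assuming admin access and
using the daemon names from your mds map; alternatively, set it in the
[mds] section of ceph.conf and restart the daemons):

  ceph tell mds.juju-d0f708-9-lxd-1 injectargs '--debug_mds 10'
  ceph tell mds.juju-d0f708-3-lxd-1 injectargs '--debug_mds 10'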

Regards
Yan, Zheng

> this is with mimic 13.2.6 and a single cephfs. we are running 4 mds
> servers with plenty of cpu and ram resources in a multi-active setup
> with 2 active and 2 standby mds's:
>
> mds: ceph-fs-2/2/2 up
> {0=juju-d0f708-3-lxd-1=up:active,1=juju-d0f708-5-lxd-1=up:active}, 2
> up:standby-replay
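>
> (for completeness: on mimic, a setup like this is typically configured
> with
>
>   ceph fs set ceph-fs max_mds 2
>
> plus mds_standby_replay = true in the [mds] section of ceph.conf on the
> standby nodes. our deployment is managed by juju, so the exact steps may
> differ.)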
>
> is the transition from active to the standby mds servers intended? and
> if not: how can we prevent it?
> we could live with the failover if the ex-active mds's were still part
> of the cluster afterwards, but this is not the case. on top of that,
> the failover is not 100% reliable - if it fails, the newly active
> mds's exhibit the same symptoms as the failed ones: they just sit around
> complaining about "MDS internal heartbeat is not healthy!".
>
> strace'ing the mds processes on the ex-active servers shows that they are
> mostly waiting for futexes to become available. we also found that the
> issue gets alleviated a bit by raising mds_cache_memory_limit from its
> default of 1 GB to 32 GB - in this case, the failover has a higher
> chance of succeeding.
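>
> (to be concrete, we bumped the limit roughly like this - the value is in
> bytes, 34359738368 being 32 GiB:
>
>   ceph tell mds.* injectargs '--mds_cache_memory_limit 34359738368'
>
> plus the corresponding entry in the [mds] section of ceph.conf to make
> the change persistent.)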
>
> below are some logs from a successful failover - juju-d0f708-9-lxd-1 and
> juju-d0f708-10-lxd-1 were the active mds's, and juju-d0f708-3-lxd-1 and
> juju-d0f708-5-lxd-1 the standbys. sorry for being very verbose - i don't
> want to withhold any information that might be needed to debug this
> issue … if it helps, i can also provide the logs for
> juju-d0f708-10-lxd-1 and juju-d0f708-5-lxd-1, as well as the output of
> ceph daemon perf dump on all mds's before and after the issue occurs.
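>
> (we collect the perf dumps with something along the lines of
>
>   ceph daemon mds.$(hostname) perf dump > /tmp/perf-dump-$(date +%s).json
>
> on each mds node, i.e. against the local admin socket.)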
>
> thank you very much & with kind regards,
> t.
>
> --- logs ---
>
> ceph-mgr.log on a mon:
>
> 2019-08-20 09:18:23.642 7f5679639700  0 ms_deliver_dispatch: unhandled
> message 0x5614f2184000 mgrreport(mds.juju-d0f708-9-lxd-1 +0-0 packed
> 1374) v6 from mds.0 172.28.9.20:6800/2237168008
> 2019-08-20 09:18:23.646 7f5684835700  1 mgr finish mon failed to return
> metadata for mds.juju-d0f708-9-lxd-1: (22) Invalid argument
> 2019-08-20 09:18:55.781 7f5679639700  0 ms_deliver_dispatch: unhandled
> message 0x5614f256e700 mgrreport(mds.juju-d0f708-10-lxd-1 +0-0 packed
> 1374) v6 from mds.1 172.28.9.21:6800/2008779502
> 2019-08-20 09:18:55.781 7f5684835700  1 mgr finish mon failed to return
> metadata for mds.juju-d0f708-10-lxd-1: (22) Invalid argument
> 2019-08-20 09:21:26.562 7f5679639700  0 ms_deliver_dispatch: unhandled
> message 0x5614f8780a00 mgrreport(mds.juju-d0f708-10-lxd-1 +0-0 packed 6)
> v6 from mds.? 172.28.9.21:6800/1319885328
> 2019-08-20 09:21:26.562 7f5684835700  1 mgr finish mon failed to return
> metadata for mds.juju-d0f708-10-lxd-1: (22) Invalid argument
> 2019-08-20 09:21:27.558 7f5679639700  0 ms_deliver_dispatch: unhandled
> message 0x5614f5666a00 mgrreport(mds.juju-d0f708-10-lxd-1 +0-0 packed 6)
> v6 from mds.? 172.28.9.21:6800/1319885328
> 2019-08-20 09:21:27.562 7f5684835700  1 mgr finish mon failed to return
> metadata for mds.juju-d0f708-10-lxd-1: (22) Invalid argument
> 2019-08-20 09:21:28.558 7f5679639700  0 ms_deliver_dispatch: unhandled
> message 0x5614f2e3ee00 mgrreport(mds.juju-d0f708-10-lxd-1 +0-0 packed 6)
> v6 from mds.? 172.28.9.21:6800/1319885328
> 2019-08-20 09:21:28.562 7f5684835700  1 mgr finish mon failed to return
> metadata for mds.juju-d0f708-10-lxd-1: (22) Invalid argument
> 2019-08-20 09:21:29.558 7f5679639700  0 ms_deliver_dispatch: unhandled
> message 0x5614efa71880 mgrreport(mds.juju-d0f708-10-lxd-1 +0-0 packed 6)
> v6 from mds.? 172.28.9.21:6800/1319885328
> [… more of these]
>
> ceph-mds.log on juju-d0f708-9-lxd-1 (ex active rank 0):
>
> 2019-08-20 09:17:44.824 7f7d3a138700  5 mds.beacon.juju-d0f708-9-lxd-1
> Sending beacon up:active seq 59
> 2019-08-20 09:17:44.824 7f7d3fb2b700  5 mds.beacon.juju-d0f708-9-lxd-1
> received beacon reply up:active seq 59 rtt 0
> 2019-08-20 09:17:45.020 7f7d3db27700  4 mds.0.server
> handle_client_request client_request(client.89193:898963 getattr
> pAsLsXsFs #0x1000000221e 2019-08-20 09:17:45
> .021386 caller_uid=0, caller_gid=0{}) v2
> [… lots of these …]
> 2019-08-20 09:17:45.092 7f7d36130700  5 mds.0.log _submit_thread
> 109124009784~1190 : EUpdate cap update [metablob 0x1000000004c, 1 dirs]
> 2019-08-20 09:17:45.092 7f7d3db27700  4 mds.0.server
> handle_client_request client_request(client.49306:901053 getattr
> pAsLsXsFs #0x1000000221f 2019-08-20 09:17:45
> .093085 caller_uid=0, caller_gid=0{}) v2
> [… lots of these …]
> 2019-08-20 09:17:45.260 7f7d36130700  5 mds.0.log _submit_thread
> 109124010994~107 : ETableServer snaptable prepare reqid 2 mds.0 tid 98
> version 98 mutation=43 bytes
> 2019-08-20 09:17:45.264 7f7d36130700  5 mds.0.log _submit_thread
> 109124011121~11940 : EUpdate mksnap [metablob 0x1, 1 dirs table_tids=^A,98]
> 2019-08-20 09:17:45.272 7f7d36130700  5 mds.0.log _submit_thread
> 109124023081~64 : ETableServer snaptable commit tid 98 version 99
> 2019-08-20 09:17:45.272 7f7d3db27700  3 mds.0.server
> handle_client_session client_session(request_renewcaps seq 4723) from
> client.89196
> [… lots of these]
> 2019-08-20 09:17:47.556 7f7d3db27700  4 mds.0.server
> handle_client_request client_request(client.49300:968447 getattr
> pAsLsXsFs #0x10000002153 2019-08-20 09:17:47.557632 caller_uid=0,
> caller_gid=0{}) v2
> 2019-08-20 09:17:47.564 7f7d3db27700  4 mds.0.server
> handle_client_request client_request(client.12139:3732538 getattr
> pAsLsXsFs #0x10000002b59 2019-08-20 09:17:47.563937 caller_uid=0,
> caller_gid=0{}) v2
> 2019-08-20 09:17:47.564 7f7d36130700  5 mds.0.log _submit_thread
> 109124038898~1228 : EUpdate cap update [metablob 0x1000000000a, 1 dirs]
> 2019-08-20 09:17:48.824 7f7d3a138700  5 mds.beacon.juju-d0f708-9-lxd-1
> Sending beacon up:active seq 60
> 2019-08-20 09:17:48.824 7f7d3fb2b700  5 mds.beacon.juju-d0f708-9-lxd-1
> received beacon reply up:active seq 60 rtt 0
> 2019-08-20 09:17:52.824 7f7d3a138700  5 mds.beacon.juju-d0f708-9-lxd-1
> Sending beacon up:active seq 61
> 2019-08-20 09:17:52.824 7f7d3fb2b700  5 mds.beacon.juju-d0f708-9-lxd-1
> received beacon reply up:active seq 61 rtt 0
> 2019-08-20 09:17:56.824 7f7d3a138700  5 mds.beacon.juju-d0f708-9-lxd-1
> Sending beacon up:active seq 62
> 2019-08-20 09:17:56.824 7f7d3fb2b700  5 mds.beacon.juju-d0f708-9-lxd-1
> received beacon reply up:active seq 62 rtt 0
> 2019-08-20 09:18:00.824 7f7d3a138700  5 mds.beacon.juju-d0f708-9-lxd-1
> Sending beacon up:active seq 63
> 2019-08-20 09:18:00.824 7f7d3fb2b700  5 mds.beacon.juju-d0f708-9-lxd-1
> received beacon reply up:active seq 63 rtt 0
> 2019-08-20 09:18:04.824 7f7d3a138700  1 heartbeat_map is_healthy
> 'MDSRank' had timed out after 15
> 2019-08-20 09:18:04.824 7f7d3a138700  0 mds.beacon.juju-d0f708-9-lxd-1
> Skipping beacon heartbeat to monitors (last acked 4s ago); MDS internal
> heartbeat is not healthy!
> [… more of these]
>
> ceph-mds.log on juju-d0f708-3-lxd-1 (ex standby rank 0):
>
> 2019-08-20 09:17:57.416 7f6c70964700  5 mds.beacon.juju-d0f708-3-lxd-1
> Sending beacon up:standby-replay seq 34
> 2019-08-20 09:17:57.420 7f6c76357700  5 mds.beacon.juju-d0f708-3-lxd-1
> received beacon reply up:standby-replay seq 34 rtt 0.00399997
> 2019-08-20 09:17:58.168 7f6c71165700  5 mds.0.0 Restarting replay as
> standby-replay
> 2019-08-20 09:17:58.172 7f6c6d95e700  2 mds.0.0 boot_start 2: replaying
> mds log
> 2019-08-20 09:17:58.172 7f6c6d95e700  5 mds.0.0 Finished replaying
> journal as standby-replay
> 2019-08-20 09:17:59.172 7f6c71165700  5 mds.0.0 Restarting replay as
> standby-replay
> [… more of these …]
> 2019-08-20 09:18:17.420 7f6c70964700  5 mds.beacon.juju-d0f708-3-lxd-1
> Sending beacon up:standby-replay seq 39
> 2019-08-20 09:18:17.420 7f6c76357700  5 mds.beacon.juju-d0f708-3-lxd-1
> received beacon reply up:standby-replay seq 39 rtt 0
> 2019-08-20 09:18:18.216 7f6c71165700  5 mds.0.0 Restarting replay as
> standby-replay
> 2019-08-20 09:18:18.220 7f6c6d95e700  2 mds.0.0 boot_start 2: replaying
> mds log
> 2019-08-20 09:18:18.220 7f6c6d95e700  5 mds.0.0 Finished replaying
> journal as standby-replay
> 2019-08-20 09:18:18.756 7f6c74353700  4 mds.0.0 handle_osd_map epoch
> 7445, 0 new blacklist entries
> 2019-08-20 09:18:18.776 7f6c74353700  1 mds.juju-d0f708-3-lxd-1 Updating
> MDS map to version 6084 from mon.0
> 2019-08-20 09:18:18.776 7f6c74353700  1 mds.0.6084 handle_mds_map i am
> now mds.0.6084
> 2019-08-20 09:18:18.776 7f6c74353700  1 mds.0.6084 handle_mds_map state
> change up:standby-replay --> up:replay
> 2019-08-20 09:18:18.776 7f6c74353700  5 mds.beacon.juju-d0f708-3-lxd-1
> set_want_state: up:standby-replay -> up:replay
> 2019-08-20 09:18:19.220 7f6c71165700  5 mds.0.6084 Restarting replay as
> standby-replay
> 2019-08-20 09:18:19.240 7f6c6d95e700  2 mds.0.6084 boot_start 2:
> replaying mds log
> 2019-08-20 09:18:19.240 7f6c6d95e700  5 mds.0.6084 Finished replaying
> journal as standby-replay
> 2019-08-20 09:18:19.240 7f6c6d95e700  1 mds.0.6084
> standby_replay_restart (final takeover pass)
> 2019-08-20 09:18:19.240 7f6c6d95e700  1 mds.0.6084  opening purge_queue
> (async)
> 2019-08-20 09:18:19.240 7f6c6d95e700  4 mds.0.purge_queue open: opening
> 2019-08-20 09:18:19.240 7f6c6d95e700  1 mds.0.6084  opening
> open_file_table (async)
> 2019-08-20 09:18:19.240 7f6c6d95e700  2 mds.0.6084 boot_start 2:
> replaying mds log
> 2019-08-20 09:18:19.240 7f6c6d95e700  2 mds.0.6084 boot_start 2: waiting
> for purge queue recovered
> 2019-08-20 09:18:19.252 7f6c6e960700  4 mds.0.purge_queue operator():
> open complete
> 2019-08-20 09:18:19.252 7f6c6d95e700  1 mds.0.6084 Finished replaying
> journal
> 2019-08-20 09:18:19.252 7f6c6d95e700  1 mds.0.6084 making mds journal
> writeable
> 2019-08-20 09:18:19.252 7f6c6d95e700  2 mds.0.6084 i am not alone,
> moving to state resolve
> 2019-08-20 09:18:19.252 7f6c6d95e700  3 mds.0.6084 request_state up:resolve
> 2019-08-20 09:18:19.252 7f6c6d95e700  5 mds.beacon.juju-d0f708-3-lxd-1
> set_want_state: up:replay -> up:resolve
> 2019-08-20 09:18:19.252 7f6c6d95e700  5 mds.beacon.juju-d0f708-3-lxd-1
> Sending beacon up:resolve seq 40
> 2019-08-20 09:18:19.784 7f6c74353700  1 mds.juju-d0f708-3-lxd-1 Updating
> MDS map to version 6085 from mon.0
> 2019-08-20 09:18:19.784 7f6c74353700  1 mds.0.6084 handle_mds_map i am
> now mds.0.6084
> 2019-08-20 09:18:19.784 7f6c74353700  1 mds.0.6084 handle_mds_map state
> change up:replay --> up:resolve
> 2019-08-20 09:18:19.784 7f6c74353700  1 mds.0.6084 resolve_start
> 2019-08-20 09:18:19.784 7f6c74353700  1 mds.0.6084 reopen_log
> 2019-08-20 09:18:19.784 7f6c74353700  1 mds.0.6084  recovery set is 1
> 2019-08-20 09:18:19.784 7f6c76357700  5 mds.beacon.juju-d0f708-3-lxd-1
> received beacon reply up:resolve seq 40 rtt 0.531996
> 2019-08-20 09:18:19.784 7f6c74353700  5 mds.juju-d0f708-3-lxd-1
> handle_mds_map old map epoch 6085 <= 6085, discarding
> 2019-08-20 09:18:19.788 7f6c74353700  1 mds.0.6084 resolve_done
> 2019-08-20 09:18:19.788 7f6c74353700  3 mds.0.6084 request_state
> up:reconnect
> 2019-08-20 09:18:19.788 7f6c74353700  5 mds.beacon.juju-d0f708-3-lxd-1
> set_want_state: up:resolve -> up:reconnect
> 2019-08-20 09:18:19.788 7f6c74353700  5 mds.beacon.juju-d0f708-3-lxd-1
> Sending beacon up:reconnect seq 41
> 2019-08-20 09:18:20.804 7f6c74353700  1 mds.juju-d0f708-3-lxd-1 Updating
> MDS map to version 6086 from mon.0
> 2019-08-20 09:18:20.804 7f6c74353700  1 mds.0.6084 handle_mds_map i am
> now mds.0.6084
> 2019-08-20 09:18:20.804 7f6c74353700  1 mds.0.6084 handle_mds_map state
> change up:resolve --> up:reconnect
> 2019-08-20 09:18:20.804 7f6c74353700  1 mds.0.6084 reconnect_start
> 2019-08-20 09:18:20.804 7f6c74353700  4 mds.0.6084 reconnect_start:
> killed 0 blacklisted sessions (38 blacklist entries, 70)
> 2019-08-20 09:18:20.804 7f6c74353700  1 mds.0.server reconnect_clients
> -- 70 sessions
> 2019-08-20 09:18:20.804 7f6c76357700  5 mds.beacon.juju-d0f708-3-lxd-1
> received beacon reply up:reconnect seq 41 rtt 1.01599
> 2019-08-20 09:18:20.804 7f6c74353700  3 mds.0.server not active yet, waiting
> 2019-08-20 09:18:20.804 7f6c74353700  0 log_channel(cluster) log [DBG] :
> reconnect by client.89616 130.149.2.137:0/3205297448 after 0
> 2019-08-20 09:18:20.804 7f6c74353700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:20.812 7f6c74353700  0 log_channel(cluster) log [DBG] :
> reconnect by client.12124 172.28.9.23:0/346444993 after 0.00799994
> 2019-08-20 09:18:20.812 7f6c74353700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:20.812 7f6c74353700  3 mds.0.server not active yet, waiting
> 2019-08-20 09:18:20.812 7f6c74353700  0 log_channel(cluster) log [DBG] :
> reconnect by client.48511 172.28.9.30:0/3829735889 after 0.00799994
> 2019-08-20 09:18:20.812 7f6c74353700  0 log_channel(cluster) do_log log
> to syslog
> [… more of these …]
> 2019-08-20 09:18:20.832 7f6c74353700  3 mds.0.server not active yet, waiting
> 2019-08-20 09:18:20.832 7f6c74353700  0 log_channel(cluster) log [DBG] :
> reconnect by client.13108 10.175.4.19:0/375032773 after 0.0279998
> 2019-08-20 09:18:20.832 7f6c74353700  0 log_channel(cluster) do_log log
> to syslog
> [… more of these …]
> 2019-08-20 09:18:20.836 7f6c74353700  3 mds.0.server not active yet, waiting
> 2019-08-20 09:18:20.836 7f6c74353700  3 mds.0.server not active yet, waiting
> 2019-08-20 09:18:20.836 7f6c74353700  0 log_channel(cluster) log [DBG] :
> reconnect by client.12139 10.175.4.16:0/1904855023 after 0.0319998
> 2019-08-20 09:18:20.836 7f6c74353700  0 log_channel(cluster) do_log log
> to syslog
> [… more of these …]
> 2019-08-20 09:18:20.896 7f6c74353700  1 mds.0.6084 reconnect_done
> 2019-08-20 09:18:20.896 7f6c74353700  3 mds.0.6084 request_state up:rejoin
> 2019-08-20 09:18:20.896 7f6c74353700  5 mds.beacon.juju-d0f708-3-lxd-1
> set_want_state: up:reconnect -> up:rejoin
> 2019-08-20 09:18:20.896 7f6c74353700  5 mds.beacon.juju-d0f708-3-lxd-1
> Sending beacon up:rejoin seq 42
> 2019-08-20 09:18:21.880 7f6c74353700  1 mds.juju-d0f708-3-lxd-1 Updating
> MDS map to version 6087 from mon.0
> 2019-08-20 09:18:21.880 7f6c74353700  1 mds.0.6084 handle_mds_map i am
> now mds.0.6084
> 2019-08-20 09:18:21.880 7f6c74353700  1 mds.0.6084 handle_mds_map state
> change up:reconnect --> up:rejoin
> 2019-08-20 09:18:21.880 7f6c74353700  1 mds.0.6084 rejoin_start
> 2019-08-20 09:18:21.880 7f6c74353700  1 mds.0.6084 rejoin_joint_start
> 2019-08-20 09:18:21.880 7f6c74353700  5 mds.juju-d0f708-3-lxd-1
> handle_mds_map old map epoch 6087 <= 6087, discarding
> 2019-08-20 09:18:21.880 7f6c76357700  5 mds.beacon.juju-d0f708-3-lxd-1
> received beacon reply up:rejoin seq 42 rtt 0.983993
> 2019-08-20 09:18:21.888 7f6c6c95c700  5 mds.0.log _submit_thread
> 109124040146~3684 : ESessions 70 opens cmapv 968060
> 2019-08-20 09:18:21.956 7f6c74353700  5 mds.0.cache open_snaprealms has
> unconnected snaprealm:
> 2019-08-20 09:18:21.956 7f6c74353700  5 mds.0.cache  0x10000000045
> {client.79271/21}
> 2019-08-20 09:18:21.956 7f6c74353700  5 mds.0.cache  0x10000003f05
> {client.79283/27}
> 2019-08-20 09:18:21.956 7f6c74353700  1 mds.0.6084 rejoin_done
> 2019-08-20 09:18:21.956 7f6c74353700  3 mds.0.6084 request_state up:active
> 2019-08-20 09:18:21.956 7f6c74353700  5 mds.beacon.juju-d0f708-3-lxd-1
> set_want_state: up:rejoin -> up:active
> 2019-08-20 09:18:21.956 7f6c74353700  5 mds.beacon.juju-d0f708-3-lxd-1
> Sending beacon up:active seq 43
> 2019-08-20 09:18:22.516 7f6c74353700  3 mds.0.server
> handle_client_session client_session(request_renewcaps seq 77763) from
> client.12127
> 2019-08-20 09:18:22.916 7f6c74353700  1 mds.juju-d0f708-3-lxd-1 Updating
> MDS map to version 6088 from mon.0
> 2019-08-20 09:18:22.916 7f6c74353700  1 mds.0.6084 handle_mds_map i am
> now mds.0.6084
> 2019-08-20 09:18:22.916 7f6c74353700  1 mds.0.6084 handle_mds_map state
> change up:rejoin --> up:active
> 2019-08-20 09:18:22.916 7f6c74353700  1 mds.0.6084 recovery_done --
> successful recovery!
> 2019-08-20 09:18:22.916 7f6c74353700  1 mds.0.6084 active_start
> 2019-08-20 09:18:22.916 7f6c76357700  5 mds.beacon.juju-d0f708-3-lxd-1
> received beacon reply up:active seq 43 rtt 0.959993
> 2019-08-20 09:18:22.916 7f6c74353700  4 mds.0.6084
> set_osd_epoch_barrier: epoch=7444
> 2019-08-20 09:18:22.920 7f6c74353700  4 mds.0.server
> handle_client_request client_request(client.89616:241 lssnap
> #0x10000000000 2019-08-20 09:17:49.274589 RETRY=1 caller_uid=0,
> caller_gid=0{}) v2
> 2019-08-20 09:18:22.920 7f6c74353700  5 mds.0.server waiting for root
> 2019-08-20 09:18:22.920 7f6c74353700  4 mds.0.server
> handle_client_request client_request(client.48511:30590 getattr
> pAsLsXsFs #0x10000007b25 2019-08-20 09:17:54.991938 RETRY=1
> caller_uid=0, caller_gid=0{}) v2
> 2019-08-20 09:18:22.920 7f6c74353700  5 mds.0.server waiting for root
> 2019-08-20 09:18:22.920 7f6c74353700  4 mds.0.server
> handle_client_request client_request(client.13108:3485788 getattr Fs
> #0x10000002b59 2019-08-20 09:17:47.771282 RETRY=1 caller_uid=1000,
> caller_gid=1000{}) v2
> 2019-08-20 09:18:22.920 7f6c74353700  5 mds.0.server waiting for root
> 2019-08-20 09:18:22.920 7f6c74353700  4 mds.0.server
> handle_client_request client_request(client.12139:3732538 getattr
> pAsLsXsFs #0x10000002b59 2019-08-20 09:17:47.563937 RETRY=1
> caller_uid=0, caller_gid=0{}) v2
> [etc. …]
>
> ceph.log on a mon:
>
>
> 2019-08-20 09:16:42.867 7f5a76512700  0 log_channel(cluster) log [DBG] :
> fsmap ceph-fs-2/2/2 up
> {0=juju-d0f708-9-lxd-1=up:active,1=juju-d0f708-10-lxd-1=up:active}, 2
> up:standby-replay
> 2019-08-20 09:16:42.867 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:17:25.599 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6083 new map
> 2019-08-20 09:17:25.599 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6083 print_map
> e6083
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
>
> Filesystem 'ceph-fs' (1)
> fs_name ceph-fs
> epoch   6083
> flags   12
> created 2019-08-05 12:21:23.208718
> modified        2019-08-20 09:17:25.565361
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  7444
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 2
> in      0,1
> up      {0=89667,1=89673}
> failed
> damaged
> stopped
> data_pools      [2,3,4,5]
> metadata_pool   1
> inline_data     disabled
> balancer
> standby_count_wanted    1
> 89667:  172.28.9.20:6800/2237168008 'juju-d0f708-9-lxd-1' mds.0.6070
> up:active seq 28 export_targets=1
> 89679:  172.28.9.19:6800/1509831355 'juju-d0f708-3-lxd-1' mds.0.0
> up:standby-replay seq 1
> 89673:  172.28.9.21:6800/2008779502 'juju-d0f708-10-lxd-1' mds.1.6073
> up:active seq 5 export_targets=0
> 89685:  172.28.9.18:6800/1458048941 'juju-d0f708-5-lxd-1' mds.1.0
> up:standby-replay seq 1
>
>
>
> 2019-08-20 09:17:25.599 7f5a76512700  0 log_channel(cluster) log [DBG] :
> fsmap ceph-fs-2/2/2 up
> {0=juju-d0f708-9-lxd-1=up:active,1=juju-d0f708-10-lxd-1=up:active}, 2
> up:standby-replay
> 2019-08-20 09:17:25.599 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:06.518 7f5a80580700  0 log_channel(audit) log [DBG] :
> from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
> 2019-08-20 09:18:06.518 7f5a80580700  0 log_channel(audit) do_log log to
> syslog
> 2019-08-20 09:18:06.518 7f5a80580700  0 log_channel(audit) log [DBG] :
> from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
> 2019-08-20 09:18:06.518 7f5a80580700  0 log_channel(audit) do_log log to
> syslog
> 2019-08-20 09:18:18.734 7f5a7cd1f700  0 log_channel(cluster) log [WRN] :
> daemon mds.juju-d0f708-9-lxd-1 is not responding, replacing it as rank 0
> with standby daemon mds.juju-d0f708-3-lxd-1
> 2019-08-20 09:18:18.734 7f5a7cd1f700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:18.738 7f5a7cd1f700  0 log_channel(cluster) log [WRN] :
> Health check failed: 1 filesystem is degraded (FS_DEGRADED)
> 2019-08-20 09:18:18.738 7f5a7cd1f700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:18.758 7f5a76512700  0 log_channel(cluster) log [DBG] :
> osdmap e7445: 389 total, 389 up, 389 in
> 2019-08-20 09:18:18.758 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:18.774 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6084 new map
> 2019-08-20 09:18:18.774 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6084 print_map
> e6084
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
>
> Filesystem 'ceph-fs' (1)
> fs_name ceph-fs
> epoch   6084
> flags   12
> created 2019-08-05 12:21:23.208718
> modified        2019-08-20 09:18:18.742755
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  7445
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 2
> in      0,1
> up      {0=89679,1=89673}
> failed
> damaged
> stopped
> data_pools      [2,3,4,5]
> metadata_pool   1
> inline_data     disabled
> balancer
> standby_count_wanted    1
> 89679:  172.28.9.19:6800/1509831355 'juju-d0f708-3-lxd-1' mds.0.6084
> up:replay seq 1
> 89673:  172.28.9.21:6800/2008779502 'juju-d0f708-10-lxd-1' mds.1.6073
> up:active seq 5 export_targets=0
> 89685:  172.28.9.18:6800/1458048941 'juju-d0f708-5-lxd-1' mds.1.0
> up:standby-replay seq 1
>
>
>
> 2019-08-20 09:18:18.774 7f5a76512700  0 log_channel(cluster) log [DBG] :
> fsmap ceph-fs-2/2/2 up
> {0=juju-d0f708-3-lxd-1=up:replay,1=juju-d0f708-10-lxd-1=up:active}, 1
> up:standby-replay
> 2019-08-20 09:18:18.774 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:19.782 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6085 new map
> 2019-08-20 09:18:19.782 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6085 print_map
> e6085
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
>
> Filesystem 'ceph-fs' (1)
> fs_name ceph-fs
> epoch   6085
> flags   12
> created 2019-08-05 12:21:23.208718
> modified        2019-08-20 09:18:19.776171
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  7445
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 2
> in      0,1
> up      {0=89679,1=89673}
> failed
> damaged
> stopped
> data_pools      [2,3,4,5]
> metadata_pool   1
> inline_data     disabled
> balancer
> standby_count_wanted    1
> 89679:  172.28.9.19:6800/1509831355 'juju-d0f708-3-lxd-1' mds.0.6084
> up:resolve seq 40
> 89673:  172.28.9.21:6800/2008779502 'juju-d0f708-10-lxd-1' mds.1.6073
> up:active seq 5 export_targets=0
> 89685:  172.28.9.18:6800/1458048941 'juju-d0f708-5-lxd-1' mds.1.0
> up:standby-replay seq 1
>
>
>
> 2019-08-20 09:18:19.782 7f5a76512700  0 log_channel(cluster) log [DBG] :
> mds.0 172.28.9.19:6800/1509831355 up:resolve
> 2019-08-20 09:18:19.782 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:19.782 7f5a76512700  0 log_channel(cluster) log [DBG] :
> fsmap ceph-fs-2/2/2 up
> {0=juju-d0f708-3-lxd-1=up:resolve,1=juju-d0f708-10-lxd-1=up:active}, 1
> up:standby-replay
> 2019-08-20 09:18:19.782 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:20.802 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6086 new map
> 2019-08-20 09:18:20.802 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6086 print_map
> e6086
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
>
> Filesystem 'ceph-fs' (1)
> fs_name ceph-fs
> epoch   6086
> flags   12
> created 2019-08-05 12:21:23.208718
> modified        2019-08-20 09:18:20.785775
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  7445
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 2
> in      0,1
> up      {0=89679,1=89673}
> failed
> damaged
> stopped
> data_pools      [2,3,4,5]
> metadata_pool   1
> inline_data     disabled
> balancer
> standby_count_wanted    1
> 89679:  172.28.9.19:6800/1509831355 'juju-d0f708-3-lxd-1' mds.0.6084
> up:reconnect seq 41
> 89673:  172.28.9.21:6800/2008779502 'juju-d0f708-10-lxd-1' mds.1.6073
> up:active seq 5 export_targets=0
> 89685:  172.28.9.18:6800/1458048941 'juju-d0f708-5-lxd-1' mds.1.0
> up:standby-replay seq 1
>
>
>
> 2019-08-20 09:18:20.802 7f5a76512700  0 log_channel(cluster) log [DBG] :
> mds.0 172.28.9.19:6800/1509831355 up:reconnect
> 2019-08-20 09:18:20.802 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:20.802 7f5a76512700  0 log_channel(cluster) log [DBG] :
> fsmap ceph-fs-2/2/2 up
> {0=juju-d0f708-3-lxd-1=up:reconnect,1=juju-d0f708-10-lxd-1=up:active}, 1
> up:standby-replay
> 2019-08-20 09:18:20.802 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:21.834 7f5a7cd1f700  0 log_channel(cluster) log [WRN] :
> Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUEST)
> 2019-08-20 09:18:21.834 7f5a7cd1f700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:21.878 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6087 new map
> 2019-08-20 09:18:21.878 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6087 print_map
> e6087
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
>
> Filesystem 'ceph-fs' (1)
> fs_name ceph-fs
> epoch   6087
> flags   12
> created 2019-08-05 12:21:23.208718
> modified        2019-08-20 09:18:21.837992
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  7445
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 2
> in      0,1
> up      {0=89679,1=89673}
> failed
> damaged
> stopped
> data_pools      [2,3,4,5]
> metadata_pool   1
> inline_data     disabled
> balancer
> standby_count_wanted    1
> 89679:  172.28.9.19:6800/1509831355 'juju-d0f708-3-lxd-1' mds.0.6084
> up:rejoin seq 42
> 89673:  172.28.9.21:6800/2008779502 'juju-d0f708-10-lxd-1' mds.1.6073
> up:active seq 45 export_targets=0
> 89685:  172.28.9.18:6800/1458048941 'juju-d0f708-5-lxd-1' mds.1.0
> up:standby-replay seq 1
>
>
>
> 2019-08-20 09:18:21.882 7f5a76512700  0 log_channel(cluster) log [DBG] :
> mds.1 172.28.9.21:6800/2008779502 up:active
> 2019-08-20 09:18:21.882 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:21.882 7f5a76512700  0 log_channel(cluster) log [DBG] :
> mds.0 172.28.9.19:6800/1509831355 up:rejoin
> 2019-08-20 09:18:21.882 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:21.882 7f5a76512700  0 log_channel(cluster) log [DBG] :
> fsmap ceph-fs-2/2/2 up
> {0=juju-d0f708-3-lxd-1=up:rejoin,1=juju-d0f708-10-lxd-1=up:active}, 1
> up:standby-replay
> 2019-08-20 09:18:21.882 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:21.958 7f5a7a51a700  0 log_channel(cluster) log [INF] :
> daemon mds.juju-d0f708-3-lxd-1 is now active in filesystem ceph-fs as rank 0
> 2019-08-20 09:18:21.958 7f5a7a51a700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:22.878 7f5a7cd1f700  0 log_channel(cluster) log [INF] :
> Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
> 2019-08-20 09:18:22.878 7f5a7cd1f700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:22.914 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6088 new map
> 2019-08-20 09:18:22.914 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6088 print_map
> e6088
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
>
> Filesystem 'ceph-fs' (1)
> fs_name ceph-fs
> epoch   6088
> flags   12
> created 2019-08-05 12:21:23.208718
> modified        2019-08-20 09:18:22.882183
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  7445
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 2
> in      0,1
> up      {0=89679,1=89673}
> failed
> damaged
> stopped
> data_pools      [2,3,4,5]
> metadata_pool   1
> inline_data     disabled
> balancer
> standby_count_wanted    1
> 89679:  172.28.9.19:6800/1509831355 'juju-d0f708-3-lxd-1' mds.0.6084
> up:active seq 43
> 89673:  172.28.9.21:6800/2008779502 'juju-d0f708-10-lxd-1' mds.1.6073
> up:active seq 45 export_targets=0
> 89685:  172.28.9.18:6800/1458048941 'juju-d0f708-5-lxd-1' mds.1.0
> up:standby-replay seq 1
>
>
>
> 2019-08-20 09:18:22.914 7f5a76512700  0 log_channel(cluster) log [DBG] :
> mds.0 172.28.9.19:6800/1509831355 up:active
> 2019-08-20 09:18:22.914 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:22.914 7f5a76512700  0 log_channel(cluster) log [DBG] :
> fsmap ceph-fs-2/2/2 up
> {0=juju-d0f708-3-lxd-1=up:active,1=juju-d0f708-10-lxd-1=up:active}, 1
> up:standby-replay
> 2019-08-20 09:18:22.914 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:23.646 7f5a7a51a700  0 mon.ceph-mon-01@0(leader) e2
> handle_command mon_command({"prefix": "mds metadata", "who":
> "juju-d0f708-9-lxd-1"} v 0) v1
> 2019-08-20 09:18:23.646 7f5a7a51a700  0 log_channel(audit) log [DBG] :
> from='mgr.86070 172.28.9.11:0/1868533' entity='mgr.ceph-mon-01'
> cmd=[{"prefix": "mds metadata", "who": "juju-d0f708-9-lxd-1"}]: dispatch
> 2019-08-20 09:18:23.646 7f5a7a51a700  0 log_channel(audit) do_log log to
> syslog
> 2019-08-20 09:18:53.745 7f5a7cd1f700  0 log_channel(cluster) log [WRN] :
> daemon mds.juju-d0f708-10-lxd-1 is not responding, replacing it as rank
> 1 with standby daemon mds.juju-d0f708-5-lxd-1
> 2019-08-20 09:18:53.745 7f5a7cd1f700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:53.773 7f5a7cd1f700  0 log_channel(cluster) log [WRN] :
> Health check failed: 1 filesystem is degraded (FS_DEGRADED)
> 2019-08-20 09:18:53.773 7f5a7cd1f700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:53.773 7f5a7cd1f700  0 log_channel(cluster) log [WRN] :
> Health check failed: insufficient standby MDS daemons available
> (MDS_INSUFFICIENT_STANDBY)
> 2019-08-20 09:18:53.773 7f5a7cd1f700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:53.773 7f5a7cd1f700  0 log_channel(cluster) log [INF] :
> Health check cleared: MDS_SLOW_REQUEST (was: 1 MDSs report slow requests)
> 2019-08-20 09:18:53.773 7f5a7cd1f700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:53.805 7f5a76512700  0 log_channel(cluster) log [DBG] :
> osdmap e7446: 389 total, 389 up, 389 in
> 2019-08-20 09:18:53.805 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:53.817 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6089 new map
> 2019-08-20 09:18:53.817 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6089 print_map
> e6089
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
>
> Filesystem 'ceph-fs' (1)
> fs_name ceph-fs
> epoch   6089
> flags   12
> created 2019-08-05 12:21:23.208718
> modified        2019-08-20 09:18:53.778337
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  7446
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 2
> in      0,1
> up      {0=89679,1=89685}
> failed
> damaged
> stopped
> data_pools      [2,3,4,5]
> metadata_pool   1
> inline_data     disabled
> balancer
> standby_count_wanted    1
> 89679:  172.28.9.19:6800/1509831355 'juju-d0f708-3-lxd-1' mds.0.6084
> up:active seq 43
> 89685:  172.28.9.18:6800/1458048941 'juju-d0f708-5-lxd-1' mds.1.6089
> up:replay seq 1
>
>
>
> 2019-08-20 09:18:53.821 7f5a76512700  0 log_channel(cluster) log [DBG] :
> fsmap ceph-fs-2/2/2 up
> {0=juju-d0f708-3-lxd-1=up:active,1=juju-d0f708-5-lxd-1=up:replay}
> 2019-08-20 09:18:53.821 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:54.865 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6090 new map
> 2019-08-20 09:18:54.865 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6090 print_map
> e6090
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
>
> Filesystem 'ceph-fs' (1)
> fs_name ceph-fs
> epoch   6090
> flags   12
> created 2019-08-05 12:21:23.208718
> modified        2019-08-20 09:18:54.823816
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  7446
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 2
> in      0,1
> up      {0=89679,1=89685}
> failed
> damaged
> stopped
> data_pools      [2,3,4,5]
> metadata_pool   1
> inline_data     disabled
> balancer
> standby_count_wanted    1
> 89679:  172.28.9.19:6800/1509831355 'juju-d0f708-3-lxd-1' mds.0.6084
> up:active seq 43
> 89685:  172.28.9.18:6800/1458048941 'juju-d0f708-5-lxd-1' mds.1.6089
> up:resolve seq 34
>
>
>
> 2019-08-20 09:18:54.865 7f5a76512700  0 log_channel(cluster) log [DBG] :
> mds.1 172.28.9.18:6800/1458048941 up:resolve
> 2019-08-20 09:18:54.865 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:54.865 7f5a76512700  0 log_channel(cluster) log [DBG] :
> fsmap ceph-fs-2/2/2 up
> {0=juju-d0f708-3-lxd-1=up:active,1=juju-d0f708-5-lxd-1=up:resolve}
> 2019-08-20 09:18:54.865 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:55.781 7f5a7a51a700  0 mon.ceph-mon-01@0(leader) e2
> handle_command mon_command({"prefix": "mds metadata", "who":
> "juju-d0f708-10-lxd-1"} v 0) v1
> 2019-08-20 09:18:55.781 7f5a7a51a700  0 log_channel(audit) log [DBG] :
> from='mgr.86070 172.28.9.11:0/1868533' entity='mgr.ceph-mon-01'
> cmd=[{"prefix": "mds metadata", "who": "juju-d0f708-10-lxd-1"}]: dispatch
> 2019-08-20 09:18:55.781 7f5a7a51a700  0 log_channel(audit) do_log log to
> syslog
> 2019-08-20 09:18:55.901 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6091 new map
> 2019-08-20 09:18:55.901 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6091 print_map
> e6091
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
>
> Filesystem 'ceph-fs' (1)
> fs_name ceph-fs
> epoch   6091
> flags   12
> created 2019-08-05 12:21:23.208718
> modified        2019-08-20 09:18:55.868990
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  7446
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 2
> in      0,1
> up      {0=89679,1=89685}
> failed
> damaged
> stopped
> data_pools      [2,3,4,5]
> metadata_pool   1
> inline_data     disabled
> balancer
> standby_count_wanted    1
> 89679:  172.28.9.19:6800/1509831355 'juju-d0f708-3-lxd-1' mds.0.6084
> up:active seq 43
> 89685:  172.28.9.18:6800/1458048941 'juju-d0f708-5-lxd-1' mds.1.6089
> up:reconnect seq 35
>
>
>
> 2019-08-20 09:18:55.901 7f5a76512700  0 log_channel(cluster) log [DBG] :
> mds.1 172.28.9.18:6800/1458048941 up:reconnect
> 2019-08-20 09:18:55.901 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:55.901 7f5a76512700  0 log_channel(cluster) log [DBG] :
> fsmap ceph-fs-2/2/2 up
> {0=juju-d0f708-3-lxd-1=up:active,1=juju-d0f708-5-lxd-1=up:reconnect}
> 2019-08-20 09:18:55.901 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:56.977 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6092 new map
> 2019-08-20 09:18:56.977 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6092 print_map
> e6092
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
>
> Filesystem 'ceph-fs' (1)
> fs_name ceph-fs
> epoch   6092
> flags   12
> created 2019-08-05 12:21:23.208718
> modified        2019-08-20 09:18:56.937720
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  7446
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 2
> in      0,1
> up      {0=89679,1=89685}
> failed
> damaged
> stopped
> data_pools      [2,3,4,5]
> metadata_pool   1
> inline_data     disabled
> balancer
> standby_count_wanted    1
> 89679:  172.28.9.19:6800/1509831355 'juju-d0f708-3-lxd-1' mds.0.6084
> up:active seq 43
> 89685:  172.28.9.18:6800/1458048941 'juju-d0f708-5-lxd-1' mds.1.6089
> up:rejoin seq 36
>
>
>
> 2019-08-20 09:18:56.977 7f5a76512700  0 log_channel(cluster) log [DBG] :
> mds.1 172.28.9.18:6800/1458048941 up:rejoin
> 2019-08-20 09:18:56.977 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:56.977 7f5a76512700  0 log_channel(cluster) log [DBG] :
> fsmap ceph-fs-2/2/2 up
> {0=juju-d0f708-3-lxd-1=up:active,1=juju-d0f708-5-lxd-1=up:rejoin}
> 2019-08-20 09:18:56.977 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:57.077 7f5a7a51a700  0 log_channel(cluster) log [INF] :
> daemon mds.juju-d0f708-5-lxd-1 is now active in filesystem ceph-fs as rank 1
> 2019-08-20 09:18:57.077 7f5a7a51a700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:57.993 7f5a7cd1f700  0 log_channel(cluster) log [WRN] :
> Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUEST)
> 2019-08-20 09:18:57.993 7f5a7cd1f700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:57.993 7f5a7cd1f700  0 log_channel(cluster) log [INF] :
> Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
> 2019-08-20 09:18:57.993 7f5a7cd1f700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:58.037 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6093 new map
> 2019-08-20 09:18:58.037 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6093 print_map
> e6093
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
>
> Filesystem 'ceph-fs' (1)
> fs_name ceph-fs
> epoch   6093
> flags   12
> created 2019-08-05 12:21:23.208718
> modified        2019-08-20 09:18:57.998584
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  7446
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 2
> in      0,1
> up      {0=89679,1=89685}
> failed
> damaged
> stopped
> data_pools      [2,3,4,5]
> metadata_pool   1
> inline_data     disabled
> balancer
> standby_count_wanted    1
> 89679:  172.28.9.19:6800/1509831355 'juju-d0f708-3-lxd-1' mds.0.6084
> up:active seq 52
> 89685:  172.28.9.18:6800/1458048941 'juju-d0f708-5-lxd-1' mds.1.6089
> up:active seq 37
>
>
>
> 2019-08-20 09:18:58.037 7f5a76512700  0 log_channel(cluster) log [DBG] :
> mds.1 172.28.9.18:6800/1458048941 up:active
> 2019-08-20 09:18:58.037 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:58.037 7f5a76512700  0 log_channel(cluster) log [DBG] :
> mds.0 172.28.9.19:6800/1509831355 up:active
> 2019-08-20 09:18:58.037 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:18:58.037 7f5a76512700  0 log_channel(cluster) log [DBG] :
> fsmap ceph-fs-2/2/2 up
> {0=juju-d0f708-3-lxd-1=up:active,1=juju-d0f708-5-lxd-1=up:active}
> 2019-08-20 09:18:58.037 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:19:01.957 7f5a7a51a700  0 log_channel(cluster) log [INF] :
> MDS health message cleared (mds.0): 4 slow requests are blocked > 30 secs
> 2019-08-20 09:19:01.957 7f5a7a51a700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:19:02.293 7f5a7cd1f700  0 log_channel(cluster) log [INF] :
> Health check cleared: MDS_SLOW_REQUEST (was: 1 MDSs report slow requests)
> 2019-08-20 09:19:02.293 7f5a7cd1f700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:19:02.329 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6094 new map
> 2019-08-20 09:19:02.329 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6094 print_map
> e6094
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
>
> Filesystem 'ceph-fs' (1)
> fs_name ceph-fs
> epoch   6094
> flags   12
> created 2019-08-05 12:21:23.208718
> modified        2019-08-20 09:19:02.297056
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  7446
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 2
> in      0,1
> up      {0=89679,1=89685}
> failed
> damaged
> stopped
> data_pools      [2,3,4,5]
> metadata_pool   1
> inline_data     disabled
> balancer
> standby_count_wanted    1
> 89679:  172.28.9.19:6800/1509831355 'juju-d0f708-3-lxd-1' mds.0.6084
> up:active seq 53
> 89685:  172.28.9.18:6800/1458048941 'juju-d0f708-5-lxd-1' mds.1.6089
> up:active seq 37
>
>
>
> 2019-08-20 09:19:02.329 7f5a76512700  0 log_channel(cluster) log [DBG] :
> mds.0 172.28.9.19:6800/1509831355 up:active
> 2019-08-20 09:19:02.329 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:19:02.329 7f5a76512700  0 log_channel(cluster) log [DBG] :
> fsmap ceph-fs-2/2/2 up
> {0=juju-d0f708-3-lxd-1=up:active,1=juju-d0f708-5-lxd-1=up:active}
> 2019-08-20 09:19:02.329 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:19:07.013 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6095 new map
> 2019-08-20 09:19:07.013 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6095 print_map
> e6095
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
>
> Filesystem 'ceph-fs' (1)
> fs_name ceph-fs
> epoch   6095
> flags   12
> created 2019-08-05 12:21:23.208718
> modified        2019-08-20 09:19:06.975109
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  7446
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 2
> in      0,1
> up      {0=89679,1=89685}
> failed
> damaged
> stopped
> data_pools      [2,3,4,5]
> metadata_pool   1
> inline_data     disabled
> balancer
> standby_count_wanted    1
> 89679:  172.28.9.19:6800/1509831355 'juju-d0f708-3-lxd-1' mds.0.6084
> up:active seq 53
> 89685:  172.28.9.18:6800/1458048941 'juju-d0f708-5-lxd-1' mds.1.6089
> up:active seq 37 export_targets=0
>
>
>
> 2019-08-20 09:19:07.013 7f5a76512700  0 log_channel(cluster) log [DBG] :
> fsmap ceph-fs-2/2/2 up
> {0=juju-d0f708-3-lxd-1=up:active,1=juju-d0f708-5-lxd-1=up:active}
> 2019-08-20 09:19:07.013 7f5a76512700  0 log_channel(cluster) do_log log
> to syslog
> 2019-08-20 09:19:35.696 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6096 new map
> 2019-08-20 09:19:35.696 7f5a76512700  0 mon.ceph-mon-01@0(leader).mds
> e6096 print_map
> e6096
> enable_multiple, ever_enabled_multiple: 0,0
> compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> legacy client fscid: 1
>
> Filesystem 'ceph-fs' (1)
> fs_name ceph-fs
> epoch   6096
> flags   12
> created 2019-08-05 12:21:23.208718
> modified        2019-08-20 09:19:35.657067
> tableserver     0
> root    0
> session_timeout 60
> session_autoclose       300
> max_file_size   1099511627776
> min_compat_client       -1 (unspecified)
> last_failure    0
> last_failure_osd_epoch  7446
> compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable
> ranges,3=default file layouts on dirs,4=dir inode in separate
> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
> anchor table,9=file layout v2,10=snaprealm v2}
> max_mds 2
> in      0,1
> up      {0=89679,1=89685}
> failed
> damaged
> stopped
> data_pools      [2,3,4,5]
> metadata_pool   1
> inline_data     disabled
> balancer
> standby_count_wanted    1
> 89679:  172.28.9.19:6800/1509831355 'juju-d0f708-3-lxd-1' mds.0.6084
> up:active seq 53 export_targets=1
> 89685:  172.28.9.18:6800/1458048941 'juju-d0f708-5-lxd-1' mds.1.6089
> up:active seq 37 export_targets=0
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com