Re: cephfs causing high load on vm, taking down 15 min later another cephfs vm


 



I got this again today. I cannot unmount the filesystem, and it looks 
like some OSDs are at 100% CPU utilization.


-----Original Message-----
From: Marc Roos 
Sent: Monday, 20 May 2019 12:42
To: ceph-users
Subject:  cephfs causing high load on vm, taking down 15 min 
later another cephfs vm



I just hit my first problem with cephfs in a production environment. Is 
it possible to deduce from these log files what happened?

svr1 is connected to the ceph client network via a switch.
The svr2 VM is colocated on node c01.
c01 hosts OSDs and mon.a.

svr1 was the first to report errors, at 03:38:44. None of the ceph 
nodes reported any network connection problem, and there is nothing in 
dmesg on c01.
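For cross-referencing excerpts like the ones below, a minimal Python 
sketch can tally the libceph events per host. This is only an 
illustration: it assumes the standard syslog line format shown in [0] 
and [1], with each libceph message on a single line (the wrapping in 
this mail is an artifact).

```python
import re
from collections import Counter

# Matches syslog lines like:
#   May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 io error
# Assumes each libceph message sits on a single line (no mail wrapping).
LINE_RE = re.compile(
    r"^(?P<month>\w+) +(?P<day>\d+) (?P<time>[\d:]+) (?P<host>\S+) "
    r"kernel: libceph: (?P<entity>\S+) (?P<addr>\S+) (?P<event>.+)$"
)

def summarize(lines):
    """Count libceph events per (host, entity, event) triple."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            counts[(m["host"], m["entity"], m["event"].strip())] += 1
    return counts

sample = [
    "May 20 03:38:44 svr1 kernel: libceph: osd0 192.168.x.111:6814 io error",
    "May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 io error",
    "May 20 03:38:45 svr1 kernel: libceph: mon1 192.168.x.112:6789 session established",
]

for key, n in sorted(summarize(sample).items()):
    print(key, n)
```

Running this over the complete /var/log/messages of both VMs would show 
whether the io errors clustered around a single mon or OSD.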

[@c01 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[@c01 ~]# uname -a
Linux c01 3.10.0-957.10.1.el7.x86_64 #1 SMP Mon Mar 18 15:06:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[@c01 ~]# ceph versions
{
    "mon": {
        "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 3
    },
    "mgr": {
        "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 3
    },
    "osd": {
        "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 32
    },
    "mds": {
        "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 2
    },
    "rgw": {
        "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 2
    },
    "overall": {
        "ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)": 42
    }
}




[0] svr1 messages 
May 20 03:36:01 svr1 systemd: Started Session 308978 of user root.
May 20 03:36:01 svr1 systemd: Started Session 308979 of user root.
May 20 03:36:01 svr1 systemd: Started Session 308979 of user root.
May 20 03:36:01 svr1 systemd: Started Session 308980 of user root.
May 20 03:36:01 svr1 systemd: Started Session 308980 of user root.
May 20 03:38:01 svr1 systemd: Started Session 308981 of user root.
May 20 03:38:01 svr1 systemd: Started Session 308981 of user root.
May 20 03:38:01 svr1 systemd: Started Session 308982 of user root.
May 20 03:38:01 svr1 systemd: Started Session 308982 of user root.
May 20 03:38:01 svr1 systemd: Started Session 308983 of user root.
May 20 03:38:01 svr1 systemd: Started Session 308983 of user root.
May 20 03:38:44 svr1 kernel: libceph: osd0 192.168.x.111:6814 io error
May 20 03:38:44 svr1 kernel: libceph: osd0 192.168.x.111:6814 io error
May 20 03:38:45 svr1 kernel: last message repeated 5 times
May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 io error
May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 session 
lost, hunting for new mon
May 20 03:38:45 svr1 kernel: last message repeated 5 times
May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 io error
May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 session 
lost, hunting for new mon
May 20 03:38:45 svr1 kernel: libceph: mon1 192.168.x.112:6789 session 
established
May 20 03:38:45 svr1 kernel: libceph: mon1 192.168.x.112:6789 session 
established
May 20 03:38:45 svr1 kernel: libceph: osd0 192.168.x.111:6814 io error
May 20 03:38:45 svr1 kernel: libceph: osd0 192.168.x.111:6814 io error
May 20 03:38:45 svr1 kernel: libceph: mon1 192.168.x.112:6789 io error
May 20 03:38:45 svr1 kernel: libceph: mon1 192.168.x.112:6789 session 
lost, hunting for new mon
May 20 03:38:45 svr1 kernel: libceph: mon1 192.168.x.112:6789 io error
May 20 03:38:45 svr1 kernel: libceph: mon1 192.168.x.112:6789 session 
lost, hunting for new mon
May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 session 
established
May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 session 
established
May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 io error
May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 session 
lost, hunting for new mon
May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 io error
May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 session 
lost, hunting for new mon
May 20 03:38:45 svr1 kernel: libceph: mon2 192.168.x.113:6789 session 
established
May 20 03:38:45 svr1 kernel: libceph: mon2 192.168.x.113:6789 session 
established
May 20 03:38:45 svr1 kernel: libceph: osd0 192.168.x.111:6814 io error
May 20 03:38:45 svr1 kernel: libceph: osd0 192.168.x.111:6814 io error
May 20 03:38:45 svr1 kernel: libceph: mon2 192.168.x.113:6789 io error
May 20 03:38:45 svr1 kernel: libceph: mon2 192.168.x.113:6789 session 
lost, hunting for new mon
May 20 03:38:45 svr1 kernel: libceph: mon2 192.168.x.113:6789 io error
May 20 03:38:45 svr1 kernel: libceph: mon2 192.168.x.113:6789 session 
lost, hunting for new mon
May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 session 
established
May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 io error
May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 session 
lost, hunting for new mon
May 20 03:38:45 svr1 kernel: libceph: mon0 192.168.x.111:6789 session 
established


[1] svr2 messages 
May 20 03:40:01 svr2 systemd: Stopping User Slice of root.
May 20 03:40:01 svr2 systemd: Removed slice User Slice of root.
May 20 03:40:01 svr2 systemd: Stopping User Slice of root.
May 20 03:50:01 svr2 systemd: Created slice User Slice of root.
May 20 03:50:01 svr2 systemd: Created slice User Slice of root.
May 20 03:50:01 svr2 systemd: Starting User Slice of root.
May 20 03:50:01 svr2 systemd: Starting User Slice of root.
May 20 03:50:01 svr2 systemd: Started Session 9442 of user root.
May 20 03:50:01 svr2 systemd: Started Session 9442 of user root.
May 20 03:50:01 svr2 systemd: Starting Session 9442 of user root.
May 20 03:50:01 svr2 systemd: Starting Session 9442 of user root.
May 20 03:50:01 svr2 systemd: Removed slice User Slice of root.
May 20 03:50:01 svr2 systemd: Removed slice User Slice of root.
May 20 03:50:01 svr2 systemd: Stopping User Slice of root.
May 20 03:50:01 svr2 systemd: Stopping User Slice of root.
May 20 03:53:37 svr2 kernel: libceph: osd9 192.168.x.112:6812 io error
May 20 03:53:37 svr2 kernel: libceph: osd9 192.168.x.112:6812 io error
May 20 03:53:37 svr2 kernel: last message repeated 3 times
May 20 03:53:37 svr2 kernel: libceph: mon1 192.168.x.112:6789 io error
May 20 03:53:37 svr2 kernel: last message repeated 3 times
May 20 03:53:37 svr2 kernel: libceph: mon1 192.168.x.112:6789 io error
May 20 03:53:37 svr2 kernel: libceph: mon1 192.168.x.112:6789 session 
lost, hunting for new mon
May 20 03:53:37 svr2 kernel: libceph: mon1 192.168.x.112:6789 session 
lost, hunting for new mon
May 20 03:53:37 svr2 kernel: libceph: mon2 192.168.x.113:6789 session 
established
May 20 03:53:37 svr2 kernel: libceph: mon2 192.168.x.113:6789 session 
established
May 20 03:53:37 svr2 kernel: libceph: mon2 192.168.x.113:6789 io error
May 20 03:53:37 svr2 kernel: libceph: mon2 192.168.x.113:6789 session 
lost, hunting for new mon
May 20 03:53:37 svr2 kernel: libceph: mon2 192.168.x.113:6789 io error
May 20 03:53:37 svr2 kernel: libceph: mon2 192.168.x.113:6789 session 
lost, hunting for new mon
May 20 03:53:37 svr2 kernel: libceph: osd9 192.168.x.112:6812 io error
May 20 03:53:37 svr2 kernel: libceph: osd9 192.168.x.112:6812 io error
May 20 03:53:37 svr2 kernel: libceph: mon1 192.168.x.112:6789 session 
established
May 20 03:53:37 svr2 kernel: libceph: mon1 192.168.x.112:6789 session 
established
May 20 03:53:37 svr2 kernel: libceph: mon1 192.168.x.112:6789 io error
May 20 03:53:37 svr2 kernel: libceph: mon1 192.168.x.112:6789 session 
lost, hunting for new mon
May 20 03:53:37 svr2 kernel: libceph: mon1 192.168.x.112:6789 io error
May 20 03:53:37 svr2 kernel: libceph: mon1 192.168.x.112:6789 session 
lost, hunting for new mon
May 20 03:53:37 svr2 kernel: libceph: mon0 192.168.x.111:6789 session 
established
May 20 03:53:37 svr2 kernel: libceph: mon0 192.168.x.111:6789 session 
established
May 20 03:53:37 svr2 kernel: libceph: mon0 192.168.x.111:6789 io error
May 20 03:53:37 svr2 kernel: libceph: mon0 192.168.x.111:6789 session 
lost, hunting for new mon
May 20 03:53:37 svr2 kernel: libceph: mon0 192.168.x.111:6789 io error
May 20 03:53:37 svr2 kernel: libceph: mon0 192.168.x.111:6789 session 
lost, hunting for new mon
May 20 03:53:37 svr2 kernel: libceph: mon2 192.168.x.113:6789 session 
established
May 20 03:53:37 svr2 kernel: libceph: mon2 192.168.x.113:6789 session 
established
May 20 03:53:37 svr2 kernel: libceph: mon2 192.168.x.113:6789 io error
May 20 03:53:37 svr2 kernel: libceph: mon2 192.168.x.113:6789 session 
lost, hunting for new mon
May 20 03:53:37 svr2 kernel: libceph: mon2 192.168.x.113:6789 io error
May 20 03:53:37 svr2 kernel: libceph: mon2 192.168.x.113:6789 session 
lost, hunting for new mon

[2] osd.0 log
2019-05-20 03:38:46.358270 7f9208669700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x55780c19e000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:38:56.155141 7f9208669700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x55776afb6000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:38:56.476312 7f9208e6a700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x557797300800 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:39:35.050674 7f9208e6a700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x55784c099000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:41:46.605523 7f9208669700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x55778dba8000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:42:05.201417 7f9207e68700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.112:0/2834749548 conn(0x5578179cf800 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:42:18.275703 7f9207e68700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x557773ccf800 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:42:18.493838 7f9208669700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x557898a90000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:42:18.728962 7f9207e68700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x55776afba800 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:42:19.242145 7f9208e6a700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x557762c8f000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:43:41.492125 7f9207e68700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x5577aa28d800 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:45:40.006405 7f9208e6a700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x557778d1d800 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:45:40.736819 7f9208e6a700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x5577a4224800 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:47:08.368138 7f9207e68700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x557778d1d800 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:48:24.848331 7f9208669700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x5577f0819800 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:50:44.442386 7f9208e6a700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x55777ef3c000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:51:11.352119 7f9208669700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x55779e445000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:51:18.615690 7f9207e68700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x55779e445000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:51:57.887069 7f9207e68700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x55777ef3c000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:51:58.109173 7f9208669700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x557769206000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:51:58.364811 7f9207e68700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x5577aa28d800 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:51:59.323286 7f9208e6a700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x557773cce000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:57:22.060831 7f9208669700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x55786cf37800 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 03:57:48.793125 7f9207e68700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.114:0/4006640003 conn(0x5577972ff000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 04:07:07.135252 7f9207e68700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x55779b319000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
2019-05-20 04:07:07.394734 7f9208669700  0 -- 192.168.x.111:6814/3478915 
>> 192.168.x.43:0/1827964483 conn(0x55779529e000 :6814 
s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 
l=1).handle_connect_msg accept replacing existing (lossy) channel (new 
one lossy=1)
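All of the osd.0 entries above follow the same pattern: a kernel client 
repeatedly forcing the OSD to replace an existing (lossy) connection. A 
minimal sketch in the same vein, again assuming each msgr message 
occupies a single line, tallies those events per peer, which would make 
a reconnect storm from one client (here 192.168.x.43) easy to spot:

```python
import re
from collections import Counter

# Matches msgr log lines of the single-line form shown above:
#   <ts> <tid>  0 -- <local>/<nonce> >> <peer>/<nonce> conn(...)
#   ... accept replacing existing (lossy) channel (new one lossy=1)
PEER_RE = re.compile(
    r" -- \S+ >> (?P<peer>\S+?)/\d+ .*replacing existing \(lossy\) channel"
)

def count_replaced_channels(lines):
    """Tally channel-replacement events per peer address."""
    counts = Counter()
    for line in lines:
        m = PEER_RE.search(line)
        if m:
            counts[m["peer"]] += 1
    return counts

sample = [
    "2019-05-20 03:38:46.358270 7f9208669700  0 -- 192.168.x.111:6814/3478915 "
    ">> 192.168.x.43:0/1827964483 conn(0x55780c19e000 :6814 "
    "s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 "
    "l=1).handle_connect_msg accept replacing existing (lossy) channel (new one lossy=1)",
    "2019-05-20 03:38:56.155141 7f9208669700  0 -- 192.168.x.111:6814/3478915 "
    ">> 192.168.x.43:0/1827964483 conn(0x55776afb6000 :6814 "
    "s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 "
    "l=1).handle_connect_msg accept replacing existing (lossy) channel (new one lossy=1)",
    "2019-05-20 03:42:05.201417 7f9207e68700  0 -- 192.168.x.111:6814/3478915 "
    ">> 192.168.x.112:0/2834749548 conn(0x5578179cf800 :6814 "
    "s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 "
    "l=1).handle_connect_msg accept replacing existing (lossy) channel (new one lossy=1)",
]

for peer, n in count_replaced_channels(sample).most_common():
    print(peer, n)
```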


[3] mon.a log
2019-05-20 03:36:12.418662 7f1b08e2b700  4 rocksdb: EVENT_LOG_v1 
{"time_micros": 1558316172418658, "job": 57455, "event": 
"table_file_deletion", "file_number": 1788761}
2019-05-20 03:36:12.431694 7f1b08e2b700  4 rocksdb: EVENT_LOG_v1 
{"time_micros": 1558316172431690, "job": 57455, "event": 
"table_file_deletion", "file_number": 1788760}
2019-05-20 03:36:12.444811 7f1b08e2b700  4 rocksdb: EVENT_LOG_v1 
{"time_micros": 1558316172444808, "job": 57455, "event": 
"table_file_deletion", "file_number": 1788759}
2019-05-20 03:36:12.458658 7f1b08e2b700  4 rocksdb: EVENT_LOG_v1 
{"time_micros": 1558316172458654, "job": 57455, "event": 
"table_file_deletion", "file_number": 1788758}
2019-05-20 03:36:12.472801 7f1b08e2b700  4 rocksdb: EVENT_LOG_v1 
{"time_micros": 1558316172472797, "job": 57455, "event": 
"table_file_deletion", "file_number": 1788757}
2019-05-20 03:36:12.487007 7f1b08e2b700  4 rocksdb: EVENT_LOG_v1 
{"time_micros": 1558316172486995, "job": 57455, "event": 
"table_file_deletion", "file_number": 1788756}
2019-05-20 03:36:12.487096 7f1b08e2b700  4 rocksdb: (Original Log Time 
2019/05/20-03:36:12.487089) 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/db_impl_compaction_flu
sh.cc:1456] Compaction nothing to do
2019-05-20 03:36:15.272950 7f1b0a62e700  1 mon.a@0(leader).osd e64430 
e64430: 32 total, 32 up, 32 in
2019-05-20 03:36:15.298861 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64430: 32 total, 32 up, 32 in
2019-05-20 03:36:17.013839 7f1b1163c700  0 
mon.a@0(leader).data_health(7108) update_stats avail 91% total 50.0GiB, 
used 4.05GiB, avail 45.9GiB
2019-05-20 03:36:19.738495 7f1b0a62e700  1 mon.a@0(leader).osd e64431 
e64431: 32 total, 32 up, 32 in
2019-05-20 03:36:19.765029 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64431: 32 total, 32 up, 32 in
2019-05-20 03:36:25.302195 7f1b0a62e700  1 mon.a@0(leader).osd e64432 
e64432: 32 total, 32 up, 32 in
2019-05-20 03:36:25.328639 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64432: 32 total, 32 up, 32 in
2019-05-20 03:36:29.741140 7f1b0a62e700  1 mon.a@0(leader).osd e64433 
e64433: 32 total, 32 up, 32 in
2019-05-20 03:36:29.768034 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64433: 32 total, 32 up, 32 in
2019-05-20 03:36:35.343530 7f1b0a62e700  1 mon.a@0(leader).osd e64434 
e64434: 32 total, 32 up, 32 in
2019-05-20 03:36:35.370266 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64434: 32 total, 32 up, 32 in
2019-05-20 03:36:39.738502 7f1b0a62e700  1 mon.a@0(leader).osd e64435 
e64435: 32 total, 32 up, 32 in
2019-05-20 03:36:39.765465 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64435: 32 total, 32 up, 32 in
2019-05-20 03:36:45.371359 7f1b0a62e700  1 mon.a@0(leader).osd e64436 
e64436: 32 total, 32 up, 32 in
2019-05-20 03:36:45.398034 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64436: 32 total, 32 up, 32 in
2019-05-20 03:36:49.738038 7f1b0a62e700  1 mon.a@0(leader).osd e64437 
e64437: 32 total, 32 up, 32 in
2019-05-20 03:36:49.765428 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64437: 32 total, 32 up, 32 in
2019-05-20 03:36:55.418010 7f1b0a62e700  1 mon.a@0(leader).osd e64438 
e64438: 32 total, 32 up, 32 in
2019-05-20 03:36:55.446785 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64438: 32 total, 32 up, 32 in
2019-05-20 03:36:59.738000 7f1b0a62e700  1 mon.a@0(leader).osd e64439 
e64439: 32 total, 32 up, 32 in
2019-05-20 03:36:59.765632 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64439: 32 total, 32 up, 32 in
2019-05-20 03:37:05.423528 7f1b0a62e700  1 mon.a@0(leader).osd e64440 
e64440: 32 total, 32 up, 32 in
2019-05-20 03:37:05.449968 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64440: 32 total, 32 up, 32 in
2019-05-20 03:37:06.513317 7f1b0a62e700  1 mon.a@0(leader).osd e64441 
e64441: 32 total, 32 up, 32 in
2019-05-20 03:37:06.539643 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64441: 32 total, 32 up, 32 in
2019-05-20 03:37:09.851351 7f1b0a62e700  1 mon.a@0(leader).osd e64442 
e64442: 32 total, 32 up, 32 in
2019-05-20 03:37:09.877349 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64442: 32 total, 32 up, 32 in
2019-05-20 03:37:15.446739 7f1b0a62e700  1 mon.a@0(leader).osd e64443 
e64443: 32 total, 32 up, 32 in
2019-05-20 03:37:15.447538 7f1b0a62e700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/db_impl_write.cc:725] 
[default] New memtable created with log file: #1788785. Immutable 
memtables: 0.
2019-05-20 03:37:15.470599 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:37:15.447607) 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/db_impl_compaction_flu
sh.cc:1158] Calling FlushMemTableToOutputFile with column family 
[default], flush slots available 1, compaction slots allowed 1, 
compaction slots scheduled 1
2019-05-20 03:37:15.470613 7f1b0962c700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/flush_job.cc:264] 
[default] [JOB 57457] Flushing memtable with next logfile: 1788785
2019-05-20 03:37:15.470629 7f1b0962c700  4 rocksdb: EVENT_LOG_v1 
{"time_micros": 1558316235470621, "job": 57457, "event": 
"flush_started", "num_memtables": 1, "num_entries": 550, "num_deletes": 
0, "memory_usage": 32932624}
2019-05-20 03:37:15.470632 7f1b0962c700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/flush_job.cc:293] 
[default] [JOB 57457] Level-0 flush table #1788786: started
2019-05-20 03:37:15.474323 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64443: 32 total, 32 up, 32 in
2019-05-20 03:37:15.622750 7f1b0962c700  4 rocksdb: EVENT_LOG_v1 
{"time_micros": 1558316235622730, "cf_name": "default", "job": 57457, 
"event": "table_file_creation", "file_number": 1788786, "file_size": 
29021225, "table_properties": {"data_size": 28956685, "index_size": 
5597, "filter_size": 58018, "raw_key_size": 4239, 
"raw_average_key_size": 22, "raw_value_size": 28949846, 
"raw_average_value_size": 155644, "num_data_blocks": 158, "num_entries": 
186, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys": 
"0", "kMergeOperands": "0"}}
2019-05-20 03:37:15.622780 7f1b0962c700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/flush_job.cc:319] 
[default] [JOB 57457] Level-0 flush table #1788786: 29021225 bytes OK
2019-05-20 03:37:15.645271 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:37:15.622795) 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/memtable_list.cc:360] 
[default] Level-0 commit table #1788786 started
2019-05-20 03:37:15.645286 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:37:15.645176) 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/memtable_list.cc:383] 
[default] Level-0 commit table #1788786: memtable #1 done
2019-05-20 03:37:15.645297 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:37:15.645212) EVENT_LOG_v1 {"time_micros": 
1558316235645199, "job": 57457, "event": "flush_finished", "lsm_state": 
[1, 0, 0, 0, 0, 0, 19], "immutable_memtables": 0}
2019-05-20 03:37:15.645302 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:37:15.645244) 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/db_impl_compaction_flu
sh.cc:132] [default] Level summary: base level 5 max bytes base 69194285 
files[1 0 0 0 0 0 19] max score 0.25
2019-05-20 03:37:15.645318 7f1b0962c700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/db_impl_files.cc:388] 
[JOB 57457] Try to delete WAL files size 28975051, prev total WAL file 
size 28975092, number of live WAL files 2.
2019-05-20 03:37:16.596003 7f1b0a62e700  1 mon.a@0(leader).osd e64444 
e64444: 32 total, 32 up, 32 in
2019-05-20 03:37:16.621741 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64444: 32 total, 32 up, 32 in
2019-05-20 03:37:17.014200 7f1b1163c700  0 
mon.a@0(leader).data_health(7108) update_stats avail 91% total 50.0GiB, 
used 4.08GiB, avail 45.9GiB
2019-05-20 03:37:19.822194 7f1b0a62e700  1 mon.a@0(leader).osd e64445 
e64445: 32 total, 32 up, 32 in
2019-05-20 03:37:19.848514 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64445: 32 total, 32 up, 32 in
2019-05-20 03:37:25.473280 7f1b0a62e700  1 mon.a@0(leader).osd e64446 
e64446: 32 total, 32 up, 32 in
2019-05-20 03:37:25.499555 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64446: 32 total, 32 up, 32 in
2019-05-20 03:37:26.630462 7f1b0a62e700  1 mon.a@0(leader).osd e64447 
e64447: 32 total, 32 up, 32 in
2019-05-20 03:37:26.656497 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64447: 32 total, 32 up, 32 in
2019-05-20 03:37:29.856077 7f1b0a62e700  1 mon.a@0(leader).osd e64448 
e64448: 32 total, 32 up, 32 in
2019-05-20 03:37:29.882033 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64448: 32 total, 32 up, 32 in
2019-05-20 03:37:35.503666 7f1b0a62e700  1 mon.a@0(leader).osd e64449 
e64449: 32 total, 32 up, 32 in
2019-05-20 03:37:35.535606 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64449: 32 total, 32 up, 32 in
2019-05-20 03:37:36.662389 7f1b0a62e700  1 mon.a@0(leader).osd e64450 
e64450: 32 total, 32 up, 32 in
2019-05-20 03:37:36.688901 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64450: 32 total, 32 up, 32 in
2019-05-20 03:37:39.889782 7f1b0a62e700  1 mon.a@0(leader).osd e64451 
e64451: 32 total, 32 up, 32 in
2019-05-20 03:37:39.916350 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64451: 32 total, 32 up, 32 in
2019-05-20 03:37:45.534370 7f1b0a62e700  1 mon.a@0(leader).osd e64452 
e64452: 32 total, 32 up, 32 in
2019-05-20 03:37:45.561000 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64452: 32 total, 32 up, 32 in
2019-05-20 03:37:49.755239 7f1b0a62e700  1 mon.a@0(leader).osd e64453 
e64453: 32 total, 32 up, 32 in
2019-05-20 03:37:49.782078 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64453: 32 total, 32 up, 32 in
2019-05-20 03:37:50.904926 7f1b0a62e700  1 mon.a@0(leader).osd e64454 
e64454: 32 total, 32 up, 32 in
2019-05-20 03:37:50.930940 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64454: 32 total, 32 up, 32 in
2019-05-20 03:37:55.567119 7f1b0a62e700  1 mon.a@0(leader).osd e64455 
e64455: 32 total, 32 up, 32 in
2019-05-20 03:37:55.594617 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64455: 32 total, 32 up, 32 in
2019-05-20 03:37:59.736903 7f1b0a62e700  1 mon.a@0(leader).osd e64456 
e64456: 32 total, 32 up, 32 in
2019-05-20 03:37:59.763752 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64456: 32 total, 32 up, 32 in
2019-05-20 03:38:05.584960 7f1b0a62e700  1 mon.a@0(leader).osd e64457 
e64457: 32 total, 32 up, 32 in
2019-05-20 03:38:05.616558 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64457: 32 total, 32 up, 32 in
2019-05-20 03:38:09.736670 7f1b0a62e700  1 mon.a@0(leader).osd e64458 
e64458: 32 total, 32 up, 32 in
2019-05-20 03:38:09.762625 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64458: 32 total, 32 up, 32 in
2019-05-20 03:38:15.617648 7f1b0a62e700  1 mon.a@0(leader).osd e64459 
e64459: 32 total, 32 up, 32 in
2019-05-20 03:38:15.618422 7f1b0a62e700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/db_impl_write.cc:725] 
[default] New memtable created with log file: #1788787. Immutable 
memtables: 0.
2019-05-20 03:38:15.641025 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:38:15.618497) 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/db_impl_compaction_flu
sh.cc:1158] Calling FlushMemTableToOutputFile with column family 
[default], flush slots available 1, compaction slots allowed 1, 
compaction slots scheduled 1
2019-05-20 03:38:15.641035 7f1b0962c700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/flush_job.cc:264] 
[default] [JOB 57458] Flushing memtable with next logfile: 1788787
2019-05-20 03:38:15.641050 7f1b0962c700  4 rocksdb: EVENT_LOG_v1 
{"time_micros": 1558316295641042, "job": 57458, "event": 
"flush_started", "num_memtables": 1, "num_entries": 522, "num_deletes": 
0, "memory_usage": 32931384}
2019-05-20 03:38:15.641054 7f1b0962c700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/flush_job.cc:293] 
[default] [JOB 57458] Level-0 flush table #1788788: started
2019-05-20 03:38:15.644891 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64459: 32 total, 32 up, 32 in
2019-05-20 03:38:15.786077 7f1b0962c700  4 rocksdb: EVENT_LOG_v1 
{"time_micros": 1558316295786051, "cf_name": "default", "job": 57458, 
"event": "table_file_creation", "file_number": 1788788, "file_size": 
28905705, "table_properties": {"data_size": 28841776, "index_size": 
5290, "filter_size": 57714, "raw_key_size": 4057, 
"raw_average_key_size": 22, "raw_value_size": 28835274, 
"raw_average_value_size": 161995, "num_data_blocks": 149, "num_entries": 
178, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys": 
"0", "kMergeOperands": "0"}}
2019-05-20 03:38:15.786102 7f1b0962c700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/flush_job.cc:319] 
[default] [JOB 57458] Level-0 flush table #1788788: 28905705 bytes OK
2019-05-20 03:38:15.808704 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:38:15.786117) 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/memtable_list.cc:360] 
[default] Level-0 commit table #1788788 started
2019-05-20 03:38:15.808721 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:38:15.808602) 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/memtable_list.cc:383] 
[default] Level-0 commit table #1788788: memtable #1 done
2019-05-20 03:38:15.808726 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:38:15.808642) EVENT_LOG_v1 {"time_micros": 
1558316295808628, "job": 57458, "event": "flush_finished", "lsm_state": 
[2, 0, 0, 0, 0, 0, 19], "immutable_memtables": 0}
2019-05-20 03:38:15.808861 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:38:15.808676) 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/db_impl_compaction_flu
sh.cc:132] [default] Level summary: base level 5 max bytes base 69194285 
files[2 0 0 0 0 0 19] max score 0.50
2019-05-20 03:38:15.808894 7f1b0962c700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/db_impl_files.cc:388] 
[JOB 57458] Try to delete WAL files size 28859118, prev total WAL file 
size 28859159, number of live WAL files 2.
2019-05-20 03:38:17.014616 7f1b1163c700  0 
mon.a@0(leader).data_health(7108) update_stats avail 91% total 50.0GiB, 
used 4.11GiB, avail 45.9GiB
2019-05-20 03:38:19.736714 7f1b0a62e700  1 mon.a@0(leader).osd e64460 
e64460: 32 total, 32 up, 32 in
2019-05-20 03:38:19.762553 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64460: 32 total, 32 up, 32 in
2019-05-20 03:38:25.640142 7f1b0a62e700  1 mon.a@0(leader).osd e64461 
e64461: 32 total, 32 up, 32 in
2019-05-20 03:38:25.666382 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64461: 32 total, 32 up, 32 in
2019-05-20 03:38:26.728128 7f1b0a62e700  1 mon.a@0(leader).osd e64462 
e64462: 32 total, 32 up, 32 in
2019-05-20 03:38:26.753754 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64462: 32 total, 32 up, 32 in
2019-05-20 03:38:27.930914 7f1b0a62e700  1 mon.a@0(leader).osd e64463 
e64463: 32 total, 32 up, 32 in
2019-05-20 03:38:27.956420 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64463: 32 total, 32 up, 32 in
2019-05-20 03:38:30.271955 7f1b0a62e700  1 mon.a@0(leader).osd e64464 
e64464: 32 total, 32 up, 32 in
2019-05-20 03:38:30.297712 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64464: 32 total, 32 up, 32 in
2019-05-20 03:38:35.711055 7f1b0a62e700  1 mon.a@0(leader).osd e64465 
e64465: 32 total, 32 up, 32 in
2019-05-20 03:38:35.737482 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64465: 32 total, 32 up, 32 in
2019-05-20 03:38:36.863809 7f1b0a62e700  1 mon.a@0(leader).osd e64466 
e64466: 32 total, 32 up, 32 in
2019-05-20 03:38:36.890569 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64466: 32 total, 32 up, 32 in
2019-05-20 03:38:40.090679 7f1b0a62e700  1 mon.a@0(leader).osd e64467 
e64467: 32 total, 32 up, 32 in
2019-05-20 03:38:40.116896 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64467: 32 total, 32 up, 32 in
2019-05-20 03:38:45.739907 7f1b0a62e700  1 mon.a@0(leader).osd e64468 
e64468: 32 total, 32 up, 32 in
2019-05-20 03:38:45.767072 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64468: 32 total, 32 up, 32 in
2019-05-20 03:38:46.890384 7f1b0a62e700  1 mon.a@0(leader).osd e64469 
e64469: 32 total, 32 up, 32 in
2019-05-20 03:38:46.916813 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64469: 32 total, 32 up, 32 in
2019-05-20 03:38:50.117718 7f1b0a62e700  1 mon.a@0(leader).osd e64470 
e64470: 32 total, 32 up, 32 in
2019-05-20 03:38:50.144237 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64470: 32 total, 32 up, 32 in
2019-05-20 03:38:55.729962 7f1b0a62e700  1 mon.a@0(leader).osd e64471 
e64471: 32 total, 32 up, 32 in
2019-05-20 03:38:55.756512 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64471: 32 total, 32 up, 32 in
2019-05-20 03:38:56.882171 7f1b0a62e700  1 mon.a@0(leader).osd e64472 
e64472: 32 total, 32 up, 32 in
2019-05-20 03:38:56.908626 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64472: 32 total, 32 up, 32 in
2019-05-20 03:39:00.138309 7f1b0a62e700  1 mon.a@0(leader).osd e64473 
e64473: 32 total, 32 up, 32 in
2019-05-20 03:39:00.164934 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64473: 32 total, 32 up, 32 in
2019-05-20 03:39:04.761609 7f1b0a62e700  1 mon.a@0(leader).osd e64474 
e64474: 32 total, 32 up, 32 in
2019-05-20 03:39:04.788323 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64474: 32 total, 32 up, 32 in
2019-05-20 03:39:09.769143 7f1b0a62e700  1 mon.a@0(leader).osd e64475 
e64475: 32 total, 32 up, 32 in
2019-05-20 03:39:09.769992 7f1b0a62e700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/db_impl_write.cc:725] 
[default] New memtable created with log file: #1788789. Immutable 
memtables: 0.
2019-05-20 03:39:09.792308 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:39:09.770033) 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/db_impl_compaction_flu
sh.cc:1158] Calling FlushMemTableToOutputFile with column family 
[default], flush slots available 1, compaction slots allowed 1, 
compaction slots scheduled 1
2019-05-20 03:39:09.792329 7f1b0962c700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/flush_job.cc:264] 
[default] [JOB 57459] Flushing memtable with next log file: 1788789
2019-05-20 03:39:09.792421 7f1b0962c700  4 rocksdb: EVENT_LOG_v1 
{"time_micros": 1558316349792351, "job": 57459, "event": 
"flush_started", "num_memtables": 1, "num_entries": 501, "num_deletes": 
0, "memory_usage": 32873232}
2019-05-20 03:39:09.792427 7f1b0962c700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/flush_job.cc:293] 
[default] [JOB 57459] Level-0 flush table #1788790: started
2019-05-20 03:39:09.796258 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64475: 32 total, 32 up, 32 in
2019-05-20 03:39:09.954310 7f1b0962c700  4 rocksdb: EVENT_LOG_v1 
{"time_micros": 1558316349954290, "cf_name": "default", "job": 57459, 
"event": "table_file_creation", "file_number": 1788790, "file_size": 
28814681, "table_properties": {"data_size": 28751196, "index_size": 
5076, "filter_size": 57484, "raw_key_size": 3919, 
"raw_average_key_size": 22, "raw_value_size": 28744932, 
"raw_average_value_size": 167121, "num_data_blocks": 143, "num_entries": 
172, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys": 
"0", "kMergeOperands": "0"}}
2019-05-20 03:39:09.954338 7f1b0962c700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/flush_job.cc:319] 
[default] [JOB 57459] Level-0 flush table #1788790: 28814681 bytes OK
2019-05-20 03:39:09.976850 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:39:09.954351) 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/memtable_list.cc:360] 
[default] Level-0 commit table #1788790 started
2019-05-20 03:39:09.976866 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:39:09.976744) 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/memtable_list.cc:383] 
[default] Level-0 commit table #1788790: memtable #1 done
2019-05-20 03:39:09.976872 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:39:09.976783) EVENT_LOG_v1 {"time_micros": 
1558316349976770, "job": 57459, "event": "flush_finished", "lsm_state": 
[3, 0, 0, 0, 0, 0, 19], "immutable_memtables": 0}
2019-05-20 03:39:09.976877 7f1b0962c700  4 rocksdb: (Original Log Time 
2019/05/20-03:39:09.976815) 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/db_impl_compaction_flu
sh.cc:132] [default] Level summary: base level 5 max bytes base 69194285 
files[3 0 0 0 0 0 19] max score 0.75
2019-05-20 03:39:09.976894 7f1b0962c700  4 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_AR
CH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/
12.2.12/rpm/el7/BUILD/ceph-12.2.12/src/rocksdb/db/db_impl_files.cc:388] 
[JOB 57459] Try to delete WAL files size 28767628, prev total WAL file 
size 28767669, number of live WAL files 2.
2019-05-20 03:39:10.921941 7f1b0a62e700  1 mon.a@0(leader).osd e64476 
e64476: 32 total, 32 up, 32 in
2019-05-20 03:39:10.948342 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64476: 32 total, 32 up, 32 in
2019-05-20 03:39:14.782433 7f1b0a62e700  1 mon.a@0(leader).osd e64477 
e64477: 32 total, 32 up, 32 in
2019-05-20 03:39:14.809139 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64477: 32 total, 32 up, 32 in
2019-05-20 03:39:17.014951 7f1b1163c700  0 
mon.a@0(leader).data_health(7108) update_stats avail 91% total 50.0GiB, 
used 4.13GiB, avail 45.9GiB
2019-05-20 03:39:19.801742 7f1b0a62e700  1 mon.a@0(leader).osd e64478 
e64478: 32 total, 32 up, 32 in
2019-05-20 03:39:19.828970 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64478: 32 total, 32 up, 32 in
2019-05-20 03:39:24.747871 7f1b0a62e700  1 mon.a@0(leader).osd e64479 
e64479: 32 total, 32 up, 32 in
2019-05-20 03:39:24.775203 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64479: 32 total, 32 up, 32 in
2019-05-20 03:39:29.831760 7f1b0a62e700  1 mon.a@0(leader).osd e64480 
e64480: 32 total, 32 up, 32 in
2019-05-20 03:39:29.859184 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64480: 32 total, 32 up, 32 in
2019-05-20 03:39:34.747989 7f1b0a62e700  1 mon.a@0(leader).osd e64481 
e64481: 32 total, 32 up, 32 in
2019-05-20 03:39:34.775166 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64481: 32 total, 32 up, 32 in
2019-05-20 03:39:39.863809 7f1b0a62e700  1 mon.a@0(leader).osd e64482 
e64482: 32 total, 32 up, 32 in
2019-05-20 03:39:39.891227 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64482: 32 total, 32 up, 32 in
2019-05-20 03:39:44.750833 7f1b0a62e700  1 mon.a@0(leader).osd e64483 
e64483: 32 total, 32 up, 32 in
2019-05-20 03:39:44.778742 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64483: 32 total, 32 up, 32 in
2019-05-20 03:39:49.900624 7f1b0a62e700  1 mon.a@0(leader).osd e64484 
e64484: 32 total, 32 up, 32 in
2019-05-20 03:39:49.928412 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64484: 32 total, 32 up, 32 in
2019-05-20 03:39:54.743844 7f1b0a62e700  1 mon.a@0(leader).osd e64485 
e64485: 32 total, 32 up, 32 in
2019-05-20 03:39:54.773021 7f1b0a62e700  0 log_channel(cluster) log 
[DBG] : osdmap e64485: 32 total, 32 up, 32 in
2019-05-20 03:39:59.937640 7f1b0a62e700  1 mon.a@0(leader).osd e64486 
e64486: 32 total, 32 up, 32 in
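The mon log above shows the osdmap epoch climbing steadily (e64459 through e64486 in roughly two minutes, around a dozen new epochs in the 03:38 minute alone). A quick way to quantify that churn and correlate it with the client hang on svr1 is to count new-epoch entries per minute. This is a hypothetical helper, not part of the thread: the log path is an assumption for a default mon.a install, and it assumes each log entry sits on one line in the actual file (unlike the wrapped lines quoted above).

```shell
# Count new osdmap epochs per minute in a mon log.
# Path is an assumption -- adjust to your cluster's mon log location.
# Each "osdmap eNNNNN:" DBG entry marks one new epoch; we key on
# date ($1) plus the HH:MM prefix of the timestamp ($2).
grep 'osdmap e[0-9]*:' /var/log/ceph/ceph-mon.a.log \
  | awk '{ print $1, substr($2, 1, 5) }' \
  | sort | uniq -c
```

A sustained high per-minute count here, with all 32 OSDs staying up/in, would point at something repeatedly triggering new maps (e.g. flapping pg states or config churn) rather than OSDs actually going down.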

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




