Re: How to reset Log Levels

Is this still debug output, or is this "normal"?:

Nov 04 10:19:39 ceph01 bash[2648]: audit
2020-11-04T09:19:38.577088+0000 mon.ceph03 (mon.0) 7738 : audit [DBG]
from='mgr.42824785 10.10.2.103:0/3293316818' entity='mgr.ceph03'
cmd=[{"prefix": "mds metadata", "who": "cephfs.ceph04.hrcvab"}]:
dispatch
Nov 04 10:19:40 ceph01 bash[2648]: cluster
2020-11-04T09:19:38.997145+0000 mgr.ceph03 (mgr.42824785) 212 :
cluster [DBG] pgmap v214: 2113 pgs: 1 active+clean+scrubbing, 37
active+remapped+backfill_wait, 9 active+remapped+backfilling, 2066
active+clean; 36 TiB data, 112 TiB used, 59 TiB / 172 TiB avail; 66
MiB/s rd, 64 MiB/s wr, 738 op/s; 202190/34023663 objects misplaced
(0.594%)
Nov 04 10:19:40 ceph01 bash[2648]: audit
2020-11-04T09:19:39.578221+0000 mon.ceph03 (mon.0) 7739 : audit [DBG]
from='mgr.42824785 10.10.2.103:0/3293316818' entity='mgr.ceph03'
cmd=[{"prefix": "mds metadata", "who": "cephfs.ceph04.hrcvab"}]:
dispatch
Nov 04 10:19:41 ceph01 bash[2648]: audit
2020-11-04T09:19:40.578383+0000 mon.ceph03 (mon.0) 7740 : audit [DBG]
from='mgr.42824785 10.10.2.103:0/3293316818' entity='mgr.ceph03'
cmd=[{"prefix": "mds metadata", "who": "cephfs.ceph04.hrcvab"}]:
dispatch
Nov 04 10:19:42 ceph01 bash[2648]: cluster
2020-11-04T09:19:41.003992+0000 mgr.ceph03 (mgr.42824785) 213 :
cluster [DBG] pgmap v215: 2113 pgs: 1 active+clean+scrubbing, 37
active+remapped+backfill_wait, 8 active+remapped+backfilling, 2067
active+clean; 36 TiB data, 112 TiB used, 59 TiB / 172 TiB avail; 56
MiB/s rd, 53 MiB/s wr, 639 op/s; 202029/34023711 objects misplaced
(0.594%)
Nov 04 10:19:42 ceph01 bash[2648]: audit
2020-11-04T09:19:41.577839+0000 mon.ceph03 (mon.0) 7741 : audit [DBG]
from='mgr.42824785 10.10.2.103:0/3293316818' entity='mgr.ceph03'
cmd=[{"prefix": "mds metadata", "who": "cephfs.ceph04.hrcvab"}]:
dispatch
Nov 04 10:19:43 ceph01 bash[2648]: debug 2020-11-04T09:19:43.139+0000
7f173724d700  1 mon.ceph01@1(peon).osd e638679 _set_new_cache_sizes
cache_size:1020054731 inc_alloc: 146800640 full_alloc: 163577856
kv_alloc: 704643072
Nov 04 10:19:43 ceph01 bash[2648]: audit
2020-11-04T09:19:42.578270+0000 mon.ceph03 (mon.0) 7742 : audit [DBG]
from='mgr.42824785 10.10.2.103:0/3293316818' entity='mgr.ceph03'
cmd=[{"prefix": "mds metadata", "who": "cephfs.ceph04.hrcvab"}]:
dispatch
Nov 04 10:19:44 ceph01 bash[2648]: cluster
2020-11-04T09:19:43.008288+0000 mgr.ceph03 (mgr.42824785) 214 :
cluster [DBG] pgmap v216: 2113 pgs: 1 active+clean+scrubbing, 37
active+remapped+backfill_wait, 8 active+remapped+backfilling, 2067
active+clean; 36 TiB data, 112 TiB used, 59 TiB / 172 TiB avail; 37
MiB/s rd, 24 MiB/s wr, 416 op/s; 202029/34023735 objects misplaced
(0.594%); 132 MiB/s, 34 objects/s recovering
Nov 04 10:19:44 ceph01 bash[2648]: audit
2020-11-04T09:19:43.578476+0000 mon.ceph03 (mon.0) 7743 : audit [DBG]
from='mgr.42824785 10.10.2.103:0/3293316818' entity='mgr.ceph03'
cmd=[{"prefix": "mds metadata", "who": "cephfs.ceph04.hrcvab"}]:
dispatch
Nov 04 10:19:45 ceph01 bash[2648]: audit
2020-11-04T09:19:44.578161+0000 mon.ceph03 (mon.0) 7744 : audit [DBG]
from='mgr.42824785 10.10.2.103:0/3293316818' entity='mgr.ceph03'
cmd=[{"prefix": "mds metadata", "who": "cephfs.ceph04.hrcvab"}]:
dispatch
Nov 04 10:19:45 ceph01 bash[2648]: cluster
2020-11-04T09:19:45.022173+0000 mgr.ceph03 (mgr.42824785) 215 :
cluster [DBG] pgmap v217: 2113 pgs: 1 active+clean+scrubbing, 37
active+remapped+backfill_wait, 8 active+remapped+backfilling, 2067
active+clean; 36 TiB data, 112 TiB used, 59 TiB / 172 TiB avail; 71
MiB/s rd, 20 MiB/s wr, 754 op/s; 201814/34023918 objects misplaced
(0.593%); 211 MiB/s, 55 objects/s recovering
Nov 04 10:19:46 ceph01 bash[2648]: audit
2020-11-04T09:19:45.579026+0000 mon.ceph03 (mon.0) 7745 : audit [DBG]
from='mgr.42824785 10.10.2.103:0/3293316818' entity='mgr.ceph03'
cmd=[{"prefix": "mds metadata", "who": "cephfs.ceph04.hrcvab"}]:
dispatch
Nov 04 10:19:47 ceph01 bash[2648]: audit
2020-11-04T09:19:46.579195+0000 mon.ceph03 (mon.0) 7746 : audit [DBG]
from='mgr.42824785 10.10.2.103:0/3293316818' entity='mgr.ceph03'
cmd=[{"prefix": "mds metadata", "who": "cephfs.ceph04.hrcvab"}]:
dispatch
Nov 04 10:19:47 ceph01 bash[2648]: cluster
2020-11-04T09:19:47.026027+0000 mgr.ceph03 (mgr.42824785) 216 :
cluster [DBG] pgmap v218: 2113 pgs: 1 active+clean+scrubbing, 37
active+remapped+backfill_wait, 8 active+remapped+backfilling, 2067
active+clean; 36 TiB data, 112 TiB used, 59 TiB / 172 TiB avail; 63
MiB/s rd, 17 MiB/s wr, 695 op/s; 201787/34024164 objects misplaced
(0.593%); 186 MiB/s, 48 objects/s recovering
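For context, this is roughly how I have been checking where these [DBG] lines come from and how one could reset the related settings. The option names below (mon_cluster_log_to_stderr, mon_cluster_log_file_level) are my assumption for what controls the cluster/audit log that cephadm forwards to journald, not something confirmed yet:

# list any log-related overrides stored in the cluster configuration database
ceph config dump | grep -i log

# show the effective logging options on one of the mons
ceph config show mon.ceph01 | grep -i log

# assumed knobs: stop forwarding the cluster log to stderr, or raise its level
ceph config set mon mon_cluster_log_to_stderr false
ceph config set mon mon_cluster_log_file_level info

# "reset" means removing the override so the compiled-in default applies again
ceph config rm mon mon_cluster_log_to_stderr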
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx