Re: Active+clean PGs reported many times in log

On Tue, Nov 14, 2017 at 1:09 AM Matteo Dacrema <mdacrema@xxxxxxxx> wrote:
Hi,
I noticed that sometimes the monitors start to log active+clean PGs multiple times on the same line. For example, I have 18432 PGs and the log shows "2136 active+clean, 28 active+clean, 2 active+clean+scrubbing+deep, 16266 active+clean;".
After a minute the monitor starts to log correctly again.

Is this normal?

That definitely looks weird to me, but I can imagine a few ways for it to occur. What version of Ceph are you running? Can you extract the pgmap and post the binary somewhere?
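If it helps, a minimal sketch of how to dump it (the output path here is just an example; exact behaviour may vary a bit by release):

    ceph pg getmap -o /tmp/pgmap.bin    # writes the current binary pgmap to /tmp/pgmap.bin

Then compress /tmp/pgmap.bin and post it somewhere reachable, or attach it if the list allows.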
 

2017-11-13 11:05:08.876724 7fb35d17d700  0 log_channel(cluster) log [INF] : pgmap v99797105: 18432 pgs: 3 active+clean+scrubbing+deep, 18429 active+clean; 59520 GB data, 129 TB used, 110 TB / 239 TB avail; 40596 kB/s rd, 89723 kB/s wr, 4899 op/s
2017-11-13 11:05:09.911266 7fb35d17d700  0 log_channel(cluster) log [INF] : pgmap v99797106: 18432 pgs: 2 active+clean+scrubbing+deep, 18430 active+clean; 59520 GB data, 129 TB used, 110 TB / 239 TB avail; 45931 kB/s rd, 114 MB/s wr, 6179 op/s
2017-11-13 11:05:10.751378 7fb359cfb700  0 mon.controller001@0(leader) e1 handle_command mon_command({"prefix": "osd pool stats", "format": "json"} v 0) v1
2017-11-13 11:05:10.751599 7fb359cfb700  0 log_channel(audit) log [DBG] : from='client.? 10.16.24.127:0/547552484' entity='client.telegraf' cmd=[{"prefix": "osd pool stats", "format": "json"}]: dispatch
2017-11-13 11:05:10.926839 7fb35d17d700  0 log_channel(cluster) log [INF] : pgmap v99797107: 18432 pgs: 3 active+clean+scrubbing+deep, 18429 active+clean; 59520 GB data, 129 TB used, 110 TB / 239 TB avail; 47617 kB/s rd, 134 MB/s wr, 7414 op/s
2017-11-13 11:05:11.921115 7fb35d17d700  1 mon.controller001@0(leader).osd e120942 e120942: 216 osds: 216 up, 216 in
2017-11-13 11:05:11.926818 7fb35d17d700  0 log_channel(cluster) log [INF] : osdmap e120942: 216 osds: 216 up, 216 in
2017-11-13 11:05:11.984732 7fb35d17d700  0 log_channel(cluster) log [INF] : pgmap v99797109: 18432 pgs: 3 active+clean+scrubbing+deep, 18429 active+clean; 59520 GB data, 129 TB used, 110 TB / 239 TB avail; 54110 kB/s rd, 115 MB/s wr, 7827 op/s
2017-11-13 11:05:13.085799 7fb35d17d700  0 log_channel(cluster) log [INF] : pgmap v99797110: 18432 pgs: 973 active+clean, 12 active+clean, 3 active+clean+scrubbing+deep, 17444 active+clean; 59520 GB data, 129 TB used, 110 TB / 239 TB avail; 115 MB/s rd, 90498 kB/s wr, 8490 op/s
2017-11-13 11:05:14.181219 7fb35d17d700  0 log_channel(cluster) log [INF] : pgmap v99797111: 18432 pgs: 2136 active+clean, 28 active+clean, 2 active+clean+scrubbing+deep, 16266 active+clean; 59520 GB data, 129 TB used, 110 TB / 239 TB avail; 136 MB/s rd, 94461 kB/s wr, 10237 op/s
2017-11-13 11:05:15.324630 7fb35d17d700  0 log_channel(cluster) log [INF] : pgmap v99797112: 18432 pgs: 3179 active+clean, 44 active+clean, 2 active+clean+scrubbing+deep, 15207 active+clean; 59519 GB data, 129 TB used, 110 TB / 239 TB avail; 184 MB/s rd, 81743 kB/s wr, 13786 op/s
2017-11-13 11:05:16.381452 7fb35d17d700  0 log_channel(cluster) log [INF] : pgmap v99797113: 18432 pgs: 3600 active+clean, 52 active+clean, 2 active+clean+scrubbing+deep, 14778 active+clean; 59518 GB data, 129 TB used, 110 TB / 239 TB avail; 208 MB/s rd, 77342 kB/s wr, 14382 op/s
2017-11-13 11:05:17.272757 7fb3570f2700  1 leveldb: Level-0 table #26314650: started
2017-11-13 11:05:17.390808 7fb3570f2700  1 leveldb: Level-0 table #26314650: 18281928 bytes OK
2017-11-13 11:05:17.392636 7fb3570f2700  1 leveldb: Delete type=0 #26314647

2017-11-13 11:05:17.397516 7fb3570f2700  1 leveldb: Manual compaction at level-0 from 'pgmap\x0099796362' @ 72057594037927935 : 1 .. 'pgmap\x0099796613' @ 0 : 0; will stop at 'pgmap_pg\x006.ff' @ 29468156273 : 1


Thank you
Matteo

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
