Re: Ceph 16.2.14: pgmap updated every few seconds for no apparent reason

Hi,

this setting is not as harmless as I assumed; there seem to be more ticks/periods/health checks involved. When I choose a mgr_tick_period value > 30 seconds, the two MGRs keep respawning. 30 seconds is the highest value that still seemed to work without MGR respawns, even with an increased mon_mgr_beacon_grace (default 30 sec.). So if you decide to increase mgr_tick_period, don't go over 30 unless you find out what else you need to change.
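
For illustration, an adjustment that stays within that limit would look something like the following. Raising mon_mgr_beacon_grace alongside it is optional, and the 90-second value is only an assumption on my part, not something I have verified:

$ ceph config set mgr mgr_tick_period 30          # highest value that worked for me
$ ceph config set mon mon_mgr_beacon_grace 90     # assumed value, untested
$ ceph config get mgr mgr_tick_period
30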

Regards,
Eugen


Quoting Eugen Block <eblock@xxxxxx>:

Hi,

you can change the report interval with this config option (default 2 seconds):

$ ceph config get mgr mgr_tick_period
2

$ ceph config set mgr mgr_tick_period 10
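
To verify the change and see how often the pgmap lines actually come in afterwards, you can query the option again and watch the cluster log at debug level (assuming your ceph CLI version supports the --watch-debug flag), e.g.:

$ ceph config get mgr mgr_tick_period
10
$ ceph --watch-debug | grep 'pgmap v'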

Regards,
Eugen

Quoting Chris Palmer <chris.palmer@xxxxxxxxx>:

I have just checked two Quincy 17.2.6 clusters, and I see exactly the same: the pgmap version is bumping every two seconds (which ties in with the frequency you observed). Both clusters are healthy, with nothing apart from client IO happening.

On 13/10/2023 12:09, Zakhar Kirpichenko wrote:
Hi,

I am investigating excessive mon writes in our cluster and wondering
whether excessive pgmap updates could be the culprit. Basically, the pgmap is
updated every few seconds, sometimes over ten times per minute, in a
healthy cluster with no OSD and/or PG changes:

Oct 13 11:03:03 ceph03 bash[4019]: cluster 2023-10-13T11:03:01.515438+0000
mgr.ceph01.vankui (mgr.336635131) 838252 : cluster [DBG] pgmap v606575:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 60 MiB/s rd, 109 MiB/s wr, 5.65k op/s
Oct 13 11:03:04 ceph03 bash[4019]: cluster 2023-10-13T11:03:03.520953+0000
mgr.ceph01.vankui (mgr.336635131) 838253 : cluster [DBG] pgmap v606576:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 64 MiB/s rd, 128 MiB/s wr, 5.76k op/s
Oct 13 11:03:06 ceph03 bash[4019]: cluster 2023-10-13T11:03:05.524474+0000
mgr.ceph01.vankui (mgr.336635131) 838255 : cluster [DBG] pgmap v606577:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 64 MiB/s rd, 122 MiB/s wr, 5.57k op/s
Oct 13 11:03:08 ceph03 bash[4019]: cluster 2023-10-13T11:03:07.530484+0000
mgr.ceph01.vankui (mgr.336635131) 838256 : cluster [DBG] pgmap v606578:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 79 MiB/s rd, 127 MiB/s wr, 6.62k op/s
Oct 13 11:03:10 ceph03 bash[4019]: cluster 2023-10-13T11:03:09.533337+0000
mgr.ceph01.vankui (mgr.336635131) 838258 : cluster [DBG] pgmap v606579:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 66 MiB/s rd, 104 MiB/s wr, 5.38k op/s
Oct 13 11:03:12 ceph03 bash[4019]: cluster 2023-10-13T11:03:11.537908+0000
mgr.ceph01.vankui (mgr.336635131) 838259 : cluster [DBG] pgmap v606580:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 85 MiB/s rd, 121 MiB/s wr, 6.43k op/s
Oct 13 11:03:13 ceph03 bash[4019]: cluster 2023-10-13T11:03:13.543490+0000
mgr.ceph01.vankui (mgr.336635131) 838260 : cluster [DBG] pgmap v606581:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 78 MiB/s rd, 127 MiB/s wr, 6.54k op/s
Oct 13 11:03:16 ceph03 bash[4019]: cluster 2023-10-13T11:03:15.547122+0000
mgr.ceph01.vankui (mgr.336635131) 838262 : cluster [DBG] pgmap v606582:
2400 pgs: 5 active+clean+scrubbing+deep, 2395 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 71 MiB/s rd, 122 MiB/s wr, 6.08k op/s
Oct 13 11:03:18 ceph03 bash[4019]: cluster 2023-10-13T11:03:17.553180+0000
mgr.ceph01.vankui (mgr.336635131) 838263 : cluster [DBG] pgmap v606583:
2400 pgs: 1 active+clean+scrubbing, 5 active+clean+scrubbing+deep, 2394
active+clean; 16 TiB data, 61 TiB used, 716 TiB / 777 TiB avail; 75 MiB/s
rd, 176 MiB/s wr, 6.83k op/s
Oct 13 11:03:20 ceph03 bash[4019]: cluster 2023-10-13T11:03:19.555960+0000
mgr.ceph01.vankui (mgr.336635131) 838264 : cluster [DBG] pgmap v606584:
2400 pgs: 1 active+clean+scrubbing, 5 active+clean+scrubbing+deep, 2394
active+clean; 16 TiB data, 61 TiB used, 716 TiB / 777 TiB avail; 58 MiB/s
rd, 161 MiB/s wr, 5.55k op/s
Oct 13 11:03:22 ceph03 bash[4019]: cluster 2023-10-13T11:03:21.560597+0000
mgr.ceph01.vankui (mgr.336635131) 838266 : cluster [DBG] pgmap v606585:
2400 pgs: 1 active+clean+scrubbing, 5 active+clean+scrubbing+deep, 2394
active+clean; 16 TiB data, 61 TiB used, 716 TiB / 777 TiB avail; 62 MiB/s
rd, 221 MiB/s wr, 6.19k op/s
Oct 13 11:03:24 ceph03 bash[4019]: cluster 2023-10-13T11:03:23.565974+0000
mgr.ceph01.vankui (mgr.336635131) 838267 : cluster [DBG] pgmap v606586:
2400 pgs: 1 active+clean+scrubbing, 5 active+clean+scrubbing+deep, 2394
active+clean; 16 TiB data, 61 TiB used, 716 TiB / 777 TiB avail; 50 MiB/s
rd, 246 MiB/s wr, 5.93k op/s
Oct 13 11:03:26 ceph03 bash[4019]: cluster 2023-10-13T11:03:25.569471+0000
mgr.ceph01.vankui (mgr.336635131) 838269 : cluster [DBG] pgmap v606587:
2400 pgs: 1 active+clean+scrubbing, 5 active+clean+scrubbing+deep, 2394
active+clean; 16 TiB data, 61 TiB used, 716 TiB / 777 TiB avail; 41 MiB/s
rd, 240 MiB/s wr, 4.99k op/s
Oct 13 11:03:28 ceph03 bash[4019]: cluster 2023-10-13T11:03:27.575618+0000
mgr.ceph01.vankui (mgr.336635131) 838270 : cluster [DBG] pgmap v606588:
2400 pgs: 4 active+clean+scrubbing+deep, 2396 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 44 MiB/s rd, 259 MiB/s wr, 5.38k op/s
Oct 13 11:03:30 ceph03 bash[4019]: cluster 2023-10-13T11:03:29.578262+0000
mgr.ceph01.vankui (mgr.336635131) 838271 : cluster [DBG] pgmap v606589:
2400 pgs: 4 active+clean+scrubbing+deep, 2396 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 31 MiB/s rd, 195 MiB/s wr, 4.06k op/s
Oct 13 11:03:32 ceph03 bash[4019]: cluster 2023-10-13T11:03:31.582849+0000
mgr.ceph01.vankui (mgr.336635131) 838272 : cluster [DBG] pgmap v606590:
2400 pgs: 4 active+clean+scrubbing+deep, 2396 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 35 MiB/s rd, 197 MiB/s wr, 4.43k op/s
Oct 13 11:03:34 ceph03 bash[4019]: cluster 2023-10-13T11:03:33.588249+0000
mgr.ceph01.vankui (mgr.336635131) 838274 : cluster [DBG] pgmap v606591:
2400 pgs: 4 active+clean+scrubbing+deep, 2396 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 39 MiB/s rd, 138 MiB/s wr, 4.50k op/s
Oct 13 11:03:36 ceph03 bash[4019]: cluster 2023-10-13T11:03:35.591837+0000
mgr.ceph01.vankui (mgr.336635131) 838276 : cluster [DBG] pgmap v606592:
2400 pgs: 4 active+clean+scrubbing+deep, 2396 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 35 MiB/s rd, 92 MiB/s wr, 3.90k op/s
Oct 13 11:03:37 ceph03 bash[4019]: cluster 2023-10-13T11:03:37.597899+0000
mgr.ceph01.vankui (mgr.336635131) 838277 : cluster [DBG] pgmap v606593:
2400 pgs: 4 active+clean+scrubbing+deep, 2396 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 42 MiB/s rd, 77 MiB/s wr, 4.47k op/s
Oct 13 11:03:39 ceph03 bash[4019]: cluster 2023-10-13T11:03:39.600591+0000
mgr.ceph01.vankui (mgr.336635131) 838278 : cluster [DBG] pgmap v606594:
2400 pgs: 4 active+clean+scrubbing+deep, 2396 active+clean; 16 TiB data, 61
TiB used, 716 TiB / 777 TiB avail; 34 MiB/s rd, 39 MiB/s wr, 3.61k op/s
Oct 13 11:03:41 ceph03 bash[4019]: cluster 2023-10-13T11:03:41.605546+0000
mgr.ceph01.vankui (mgr.336635131) 838279 : cluster [DBG] pgmap v606595:
2400 pgs: 1 active+clean+scrubbing, 4 active+clean+scrubbing+deep, 2395
active+clean; 16 TiB data, 61 TiB used, 716 TiB / 777 TiB avail; 41 MiB/s
rd, 42 MiB/s wr, 4.21k op/s
Oct 13 11:03:43 ceph03 bash[4019]: cluster 2023-10-13T11:03:43.610694+0000
mgr.ceph01.vankui (mgr.336635131) 838281 : cluster [DBG] pgmap v606596:
2400 pgs: 1 active+clean+scrubbing, 4 active+clean+scrubbing+deep, 2395
active+clean; 16 TiB data, 61 TiB used, 716 TiB / 777 TiB avail; 49 MiB/s
rd, 49 MiB/s wr, 4.59k op/s
Oct 13 11:03:45 ceph03 bash[4019]: cluster 2023-10-13T11:03:45.614721+0000
mgr.ceph01.vankui (mgr.336635131) 838283 : cluster [DBG] pgmap v606597:
2400 pgs: 1 active+clean+scrubbing, 4 active+clean+scrubbing+deep, 2395
active+clean; 16 TiB data, 61 TiB used, 716 TiB / 777 TiB avail; 44 MiB/s
rd, 49 MiB/s wr, 4.15k op/s

Within a 14-hour window today, the pgmap was updated 19,960 times for no
apparent reason. The cluster is healthy and nothing is going on apart from
normal I/O.
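
A count like that can be reproduced from the journal on a node that receives the cluster log; something along these lines should give a similar number (the time window and grep pattern are assumptions based on the log excerpt above):

$ journalctl --since "today" | grep -c 'pgmap v'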

Are frequent pgmap updates expected behavior? What may be causing these
updates?

I would very much appreciate your comments and suggestions.

Best regards,
Zakhar


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


