Re: pgmap version increasing like every second ok or excessive?


 



Hi,

I suppose it's normal; I haven't looked too deeply into it, but you can change the report interval with this config option (default: 2 seconds):

$ ceph config get mgr mgr_tick_period
2

$ ceph config set mgr mgr_tick_period 10
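
To double-check that the running MGR actually picked up the new value, you should also be able to query the daemon's runtime config (mgr.node1.joznex is just the active MGR name from your log, adjust it to yours):

$ ceph config show mgr.node1.joznex mgr_tick_period
10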

Also be aware of my warning from an earlier thread [0]:

When I chose a mgr_tick_period value > 30 seconds, the two MGRs kept respawning. 30 seconds was the highest value that still seemed to work without MGR respawns, even with an increased mon_mgr_beacon_grace (default 30 seconds). So if you decide to increase mgr_tick_period, don't go over 30 unless you find out what else you need to change.
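
And if you do experiment and end up with respawning MGRs, dropping the override again should put you back on the default (this just removes the setting from the config database):

$ ceph config rm mgr mgr_tick_period
$ ceph config get mgr mgr_tick_period
2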

Regards,
Eugen

[0] https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/AHXJV74QM73CR4EQPV7QHEBRLJBSUJSE/#M5C5JV3N72KZSDX7XDRCAYEVIH3QJFEV

Quoting Andreas Elvers <andreas.elvers+lists.ceph.io@xxxxxxx>:

Hello,

I see this in my logs:

2025-01-22T09:14:43.063966+0000 mgr.node1.joznex (mgr.584732) 337151 : cluster [DBG] pgmap v300985: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:45.066685+0000 mgr.node1.joznex (mgr.584732) 337154 : cluster [DBG] pgmap v300986: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:47.070458+0000 mgr.node1.joznex (mgr.584732) 337155 : cluster [DBG] pgmap v300987: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:49.074664+0000 mgr.node1.joznex (mgr.584732) 337158 : cluster [DBG] pgmap v300988: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:51.079225+0000 mgr.node1.joznex (mgr.584732) 337159 : cluster [DBG] pgmap v300989: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:53.081633+0000 mgr.node1.joznex (mgr.584732) 337160 : cluster [DBG] pgmap v300990: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:55.084216+0000 mgr.node1.joznex (mgr.584732) 337163 : cluster [DBG] pgmap v300991: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:57.087873+0000 mgr.node1.joznex (mgr.584732) 337164 : cluster [DBG] pgmap v300992: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:59.092225+0000 mgr.node1.joznex (mgr.584732) 337175 : cluster [DBG] pgmap v300993: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:15:01.096803+0000 mgr.node1.joznex (mgr.584732) 337176 : cluster [DBG] pgmap v300994: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail

The cluster is healthy: 3 nodes with 2 HDD OSDs each. The pgmap version increases roughly every second or two. Is this excessive, or is it considered normal?

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


