Hello, I see this in my logs:

2025-01-22T09:14:43.063966+0000 mgr.node1.joznex (mgr.584732) 337151 : cluster [DBG] pgmap v300985: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:45.066685+0000 mgr.node1.joznex (mgr.584732) 337154 : cluster [DBG] pgmap v300986: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:47.070458+0000 mgr.node1.joznex (mgr.584732) 337155 : cluster [DBG] pgmap v300987: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:49.074664+0000 mgr.node1.joznex (mgr.584732) 337158 : cluster [DBG] pgmap v300988: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:51.079225+0000 mgr.node1.joznex (mgr.584732) 337159 : cluster [DBG] pgmap v300989: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:53.081633+0000 mgr.node1.joznex (mgr.584732) 337160 : cluster [DBG] pgmap v300990: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:55.084216+0000 mgr.node1.joznex (mgr.584732) 337163 : cluster [DBG] pgmap v300991: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:57.087873+0000 mgr.node1.joznex (mgr.584732) 337164 : cluster [DBG] pgmap v300992: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:59.092225+0000 mgr.node1.joznex (mgr.584732) 337175 : cluster [DBG] pgmap v300993: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:15:01.096803+0000 mgr.node1.joznex (mgr.584732) 337176 : cluster [DBG] pgmap v300994: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail

The cluster is healthy: 3 nodes with 2 HDD OSDs each. The pgmap version is increasing every two seconds or so. Is this excessive, or is it considered normal?
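
For reference, a minimal Python sketch (names like "lines", "stamps" and "deltas" are just for illustration; only the first three log entries are pasted, the rest follow the same pattern) to compute the interval between consecutive pgmap updates from the timestamps above:

from datetime import datetime

# Timestamps copied from the first three log lines above.
lines = [
    "2025-01-22T09:14:43.063966+0000 mgr.node1.joznex (mgr.584732) 337151 : cluster [DBG] pgmap v300985: ...",
    "2025-01-22T09:14:45.066685+0000 mgr.node1.joznex (mgr.584732) 337154 : cluster [DBG] pgmap v300986: ...",
    "2025-01-22T09:14:47.070458+0000 mgr.node1.joznex (mgr.584732) 337155 : cluster [DBG] pgmap v300987: ...",
]

# Parse the leading timestamp of each entry.
stamps = [datetime.strptime(l.split()[0], "%Y-%m-%dT%H:%M:%S.%f%z") for l in lines]

# Seconds between consecutive pgmap updates.
deltas = [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]
print(deltas)  # -> [2.002719, 2.003773], i.e. one pgmap update roughly every 2 seconds

Running this over the full set of lines gives the same result: one new pgmap version about every 2 seconds.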