snap_schedule works after 1 hour of scheduling

Hi Team, Milind,

*Ceph version:* Quincy, Reef
*OS:* AlmaLinux 8

*Issue:* snap_schedule only starts taking snapshots 1 hour after the scheduled start time

*Description:*

We are currently working with a 3-node Ceph cluster.
We are exploring the scheduled snapshot capability of the ceph-mgr
snap_schedule module.
To enable and configure scheduled snapshots, we followed this documentation:



https://docs.ceph.com/en/quincy/cephfs/snap-schedule/
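
For reference, the schedules were created per that documentation. A
representative example (not our exact commands) of adding and checking a 1h
schedule for one of the subvolume paths shown later would be:

```
# Representative example only -- actual paths and start times are in the
# status output further below.
ceph fs snap-schedule add /volumes/subvolgrp/test3 1h 2023-10-04T07:20:00
ceph fs snap-schedule status /volumes/subvolgrp/test3
```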



We were able to create snap schedules for the subvolumes as suggested.
However, we have observed some very strange behaviour:
1. The snap_schedules only work after we restart the ceph-mgr service on the
active mgr node: after restarting the mgr service, snapshots only started
getting created 1 hour later. I am attaching the mgr log file captured after
the restart. This behaviour looks abnormal.
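
For completeness, the restart of the active mgr was done roughly as follows
(representative commands; the exact systemd unit name depends on how the
cluster was deployed, and this assumes storagenode-1 is the active mgr):

```
# Restart the mgr daemon on the active mgr node (unit name varies by deployment):
systemctl restart ceph-mgr@storagenode-1.service
# Alternatively, force a failover to a standby mgr:
ceph mgr fail
```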

So, for example, consider the output below:
```
[root@storagenode-1 ~]# ceph fs snap-schedule status
/volumes/subvolgrp/test3
{"fs": "cephfs", "subvol": null, "path": "/volumes/subvolgrp/test3",
"rel_path": "/volumes/subvolgrp/test3", "schedule": "1h", "retention": {},
"start": "2023-10-04T07:20:00", "created": "2023-10-04T07:18:41", "first":
"2023-10-04T08:20:00", "last": "2023-10-04T09:20:00", "last_pruned": null,
"created_count": 2, "pruned_count": 0, "active": true}
[root@storagenode-1 ~]#
```
As we can see in the above output, we created the schedule at
2023-10-04T07:18:41. The schedule was supposed to start at
2023-10-04T07:20:00, but the first snapshot was only taken at 2023-10-04T08:20:00.
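
For clarity, the gap between the configured start and the first actual
snapshot is exactly one full schedule interval. A quick check with GNU date,
using the timestamps from the status output above:

```
# Offset between the configured "start" and the actual "first" snapshot
# (GNU date, all times UTC):
start="2023-10-04 07:20:00 UTC"
first="2023-10-04 08:20:00 UTC"
echo "delay: $(( $(date -u -d "$first" +%s) - $(date -u -d "$start" +%s) )) seconds"
# -> delay: 3600 seconds, i.e. exactly one 1h interval after "start"
```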

Any input regarding this would be of great help.

Thanks and Regards
Kushagra Gupta
2023-10-03T03:50:15.857+0000 7f6971c0e700 -1 mgr handle_mgr_signal  *** Got signal Terminated ***
2023-10-03T03:50:16.659+0000 7f9d9da97200  0 set uid:gid to 167:167 (ceph:ceph)
2023-10-03T03:50:16.659+0000 7f9d9da97200  0 ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable), process ceph-mgr, pid 781047
2023-10-03T03:50:16.660+0000 7f9d9da97200  0 pidfile_write: ignore empty --pid-file
2023-10-03T03:50:16.726+0000 7f9d9da97200  1 mgr[py] Loading python module 'alerts'
2023-10-03T03:50:16.975+0000 7f9d9da97200 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2023-10-03T03:50:16.975+0000 7f9d9da97200  1 mgr[py] Loading python module 'balancer'
2023-10-03T03:50:17.126+0000 7f9d9da97200 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2023-10-03T03:50:17.126+0000 7f9d9da97200  1 mgr[py] Loading python module 'crash'
2023-10-03T03:50:17.291+0000 7f9d9da97200 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2023-10-03T03:50:17.291+0000 7f9d9da97200  1 mgr[py] Loading python module 'devicehealth'
2023-10-03T03:50:17.435+0000 7f9d9da97200 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2023-10-03T03:50:17.435+0000 7f9d9da97200  1 mgr[py] Loading python module 'influx'
2023-10-03T03:50:17.580+0000 7f9d9da97200 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2023-10-03T03:50:17.580+0000 7f9d9da97200  1 mgr[py] Loading python module 'insights'
2023-10-03T03:50:17.697+0000 7f9d9da97200  1 mgr[py] Loading python module 'iostat'
2023-10-03T03:50:17.830+0000 7f9d9da97200 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2023-10-03T03:50:17.830+0000 7f9d9da97200  1 mgr[py] Loading python module 'localpool'
2023-10-03T03:50:17.933+0000 7f9d9da97200  1 mgr[py] Loading python module 'mds_autoscaler'
2023-10-03T03:50:18.118+0000 7f9d9da97200  1 mgr[py] Loading python module 'mirroring'
2023-10-03T03:50:18.287+0000 7f9d9da97200  1 mgr[py] Loading python module 'nfs'
2023-10-03T03:50:18.491+0000 7f9d9da97200 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2023-10-03T03:50:18.491+0000 7f9d9da97200  1 mgr[py] Loading python module 'orchestrator'
2023-10-03T03:50:18.673+0000 7f9d9da97200 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2023-10-03T03:50:18.673+0000 7f9d9da97200  1 mgr[py] Loading python module 'osd_perf_query'
2023-10-03T03:50:18.859+0000 7f9d9da97200 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2023-10-03T03:50:18.859+0000 7f9d9da97200  1 mgr[py] Loading python module 'osd_support'
2023-10-03T03:50:18.964+0000 7f9d9da97200 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2023-10-03T03:50:18.964+0000 7f9d9da97200  1 mgr[py] Loading python module 'pg_autoscaler'
2023-10-03T03:50:19.083+0000 7f9d9da97200 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2023-10-03T03:50:19.083+0000 7f9d9da97200  1 mgr[py] Loading python module 'progress'
2023-10-03T03:50:19.285+0000 7f9d9da97200 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2023-10-03T03:50:19.285+0000 7f9d9da97200  1 mgr[py] Loading python module 'prometheus'
2023-10-03T03:50:19.680+0000 7f9d9da97200 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2023-10-03T03:50:19.680+0000 7f9d9da97200  1 mgr[py] Loading python module 'rbd_support'
2023-10-03T03:50:19.821+0000 7f9d9da97200 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2023-10-03T03:50:19.821+0000 7f9d9da97200  1 mgr[py] Loading python module 'restful'
2023-10-03T03:50:20.184+0000 7f9d9da97200  1 mgr[py] Loading python module 'selftest'
2023-10-03T03:50:20.304+0000 7f9d9da97200 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2023-10-03T03:50:20.304+0000 7f9d9da97200  1 mgr[py] Loading python module 'snap_schedule'
2023-10-03T03:50:20.425+0000 7f9d9da97200 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2023-10-03T03:50:20.425+0000 7f9d9da97200  1 mgr[py] Loading python module 'stats'
2023-10-03T03:50:20.535+0000 7f9d9da97200  1 mgr[py] Loading python module 'status'
2023-10-03T03:50:20.655+0000 7f9d9da97200 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2023-10-03T03:50:20.655+0000 7f9d9da97200  1 mgr[py] Loading python module 'telegraf'
2023-10-03T03:50:20.939+0000 7f9d9da97200 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2023-10-03T03:50:20.939+0000 7f9d9da97200  1 mgr[py] Loading python module 'telemetry'
2023-10-03T03:50:21.130+0000 7f9d9da97200 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2023-10-03T03:50:21.130+0000 7f9d9da97200  1 mgr[py] Loading python module 'test_orchestrator'
2023-10-03T03:50:21.322+0000 7f9d9da97200 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2023-10-03T03:50:21.322+0000 7f9d9da97200  1 mgr[py] Loading python module 'volumes'
2023-10-03T03:50:21.559+0000 7f9d9da97200 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2023-10-03T03:50:21.559+0000 7f9d9da97200  1 mgr[py] Loading python module 'zabbix'
2023-10-03T03:50:21.673+0000 7f9d9da97200 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2023-10-03T03:50:21.673+0000 7f9d9da97200  1 mgr[py] Loading python module 'dashboard'
2023-10-03T03:50:22.522+0000 7f9d9da97200  1 mgr[py] Loading python module 'rgw'
2023-10-03T03:50:22.748+0000 7f9d9da97200 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2023-10-03T03:50:22.767+0000 7f9d8fc5d700  0 ms_deliver_dispatch: unhandled message 0x55e8f70e6f20 mon_map magic: 0 v1 from mon.0 v2:[abcd:abcd:abcd::34]:3300/0
2023-10-03T03:50:22.817+0000 7f9d8fc5d700  1 mgr handle_mgr_map Activating!
2023-10-03T03:50:22.819+0000 7f9d8fc5d700  1 mgr handle_mgr_map I am now activating
2023-10-03T03:50:22.866+0000 7f9d6bbc7700  0 [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
2023-10-03T03:50:22.867+0000 7f9d6bbc7700  1 mgr load Constructed class from module: balancer
2023-10-03T03:50:22.867+0000 7f9d65bbb700  0 [balancer INFO root] Starting
2023-10-03T03:50:22.869+0000 7f9d6bbc7700  0 [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
2023-10-03T03:50:22.869+0000 7f9d6bbc7700  1 mgr load Constructed class from module: crash
2023-10-03T03:50:22.876+0000 7f9d6bbc7700  0 [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
2023-10-03T03:50:22.876+0000 7f9d6bbc7700  1 mgr load Constructed class from module: devicehealth
2023-10-03T03:50:22.876+0000 7f9d65bbb700  0 [balancer INFO root] Optimize plan auto_2023-10-03_03:50:22
2023-10-03T03:50:22.876+0000 7f9d65bbb700  0 [balancer INFO root] Mode upmap, max misplaced 0.050000
2023-10-03T03:50:22.877+0000 7f9d65bbb700  0 [balancer INFO root] Some PGs (1.000000) are unknown; try again later
2023-10-03T03:50:22.879+0000 7f9d63bb7700  0 [devicehealth INFO root] Starting
2023-10-03T03:50:22.887+0000 7f9d6bbc7700  0 [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
2023-10-03T03:50:22.887+0000 7f9d6bbc7700  1 mgr load Constructed class from module: iostat
2023-10-03T03:50:22.889+0000 7f9d6bbc7700  0 [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
2023-10-03T03:50:22.889+0000 7f9d6bbc7700  1 mgr load Constructed class from module: nfs
2023-10-03T03:50:22.891+0000 7f9d6bbc7700  0 [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
2023-10-03T03:50:22.891+0000 7f9d6bbc7700  1 mgr load Constructed class from module: orchestrator
2023-10-03T03:50:22.892+0000 7f9d6bbc7700  0 [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
2023-10-03T03:50:22.892+0000 7f9d6bbc7700  1 mgr load Constructed class from module: pg_autoscaler
2023-10-03T03:50:22.893+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] _maybe_adjust
2023-10-03T03:50:22.906+0000 7f9d6bbc7700  0 [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
2023-10-03T03:50:22.907+0000 7f9d6bbc7700  1 mgr load Constructed class from module: progress
2023-10-03T03:50:22.917+0000 7f9d5ebad700  0 [progress INFO root] Loading...
2023-10-03T03:50:22.918+0000 7f9d5ebad700  0 [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f9d6bdb0208>, <progress.module.GhostEvent object at 0x7f9d6bdb0240>, <progress.module.GhostEvent object at 0x7f9d6bdb0278>, <progress.module.GhostEvent object at 0x7f9d6bdb02b0>, <progress.module.GhostEvent object at 0x7f9d6bdb02e8>, <progress.module.GhostEvent object at 0x7f9d6bdb0320>, <progress.module.GhostEvent object at 0x7f9d6bdb0358>, <progress.module.GhostEvent object at 0x7f9d6bdb0390>, <progress.module.GhostEvent object at 0x7f9d6bdb03c8>, <progress.module.GhostEvent object at 0x7f9d6bdb0400>, <progress.module.GhostEvent object at 0x7f9d6bdb0438>, <progress.module.GhostEvent object at 0x7f9d6bdb0470>, <progress.module.GhostEvent object at 0x7f9d6bdb04a8>, <progress.module.GhostEvent object at 0x7f9d6bdb04e0>, <progress.module.GhostEvent object at 0x7f9d6bdb0518>] historic events
2023-10-03T03:50:22.930+0000 7f9d5ebad700  0 [progress INFO root] Loaded OSDMap, ready.
2023-10-03T03:50:22.931+0000 7f9d6bbc7700  0 [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
2023-10-03T03:50:22.932+0000 7f9d6bbc7700  1 mgr load Constructed class from module: prometheus
2023-10-03T03:50:22.933+0000 7f9d5dbab700  0 [prometheus INFO root] server_addr: :: server_port: 9283
2023-10-03T03:50:22.933+0000 7f9d5dbab700  0 [prometheus INFO root] Cache enabled
2023-10-03T03:50:22.935+0000 7f9d6bbc7700  0 [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
2023-10-03T03:50:22.935+0000 7f9d5cba9700  0 [prometheus INFO root] starting metric collection thread
2023-10-03T03:50:22.939+0000 7f9d5dbab700  0 [prometheus INFO root] Starting engine...
2023-10-03T03:50:22.939+0000 7f9d5dbab700  0 [prometheus INFO cherrypy.error] [03/Oct/2023:03:50:22] ENGINE Bus STARTING
2023-10-03T03:50:22.947+0000 7f9d583a0700  0 [rbd_support INFO root] recovery thread starting
2023-10-03T03:50:22.947+0000 7f9d583a0700  0 [rbd_support INFO root] starting setup
2023-10-03T03:50:22.957+0000 7f9d6bbc7700  1 mgr load Constructed class from module: rbd_support
2023-10-03T03:50:22.959+0000 7f9d6bbc7700  0 [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
2023-10-03T03:50:22.959+0000 7f9d6bbc7700  1 mgr load Constructed class from module: restful
2023-10-03T03:50:22.960+0000 7f9d56b5d700  0 [restful INFO root] server_addr: :: server_port: 8003
2023-10-03T03:50:22.961+0000 7f9d583a0700  0 [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
2023-10-03T03:50:22.969+0000 7f9d55b1b700  0 [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
2023-10-03T03:50:22.971+0000 7f9d56b5d700  0 [restful WARNING root] server not running: no certificate configured
2023-10-03T03:50:22.981+0000 7f9d6bbc7700  0 [snap_schedule DEBUG root] setting log level: DEBUG
2023-10-03T03:50:22.983+0000 7f9d54b19700  0 [rbd_support INFO root] PerfHandler: starting
2023-10-03T03:50:22.984+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] using uri file:///*3:/snap_db_v0.db?vfs=ceph
2023-10-03T03:50:22.991+0000 7f9d54318700  0 [rbd_support INFO root] TaskHandler: starting
2023-10-03T03:50:22.998+0000 7f9d583a0700  0 [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
2023-10-03T03:50:23.051+0000 7f9d531d6700  0 [rbd_support INFO root] TrashPurgeScheduleHandler: starting
2023-10-03T03:50:23.055+0000 7f9d583a0700  0 [rbd_support INFO root] setup complete
2023-10-03T03:50:23.097+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] No legacy schedule DB found in cephfs
2023-10-03T03:50:23.098+0000 7f9d5dbab700  0 [prometheus INFO cherrypy.error] [03/Oct/2023:03:50:23] ENGINE Serving on http://:::9283
2023-10-03T03:50:23.098+0000 7f9d5dbab700  0 [prometheus INFO cherrypy.error] [03/Oct/2023:03:50:23] ENGINE Bus STARTED
2023-10-03T03:50:23.098+0000 7f9d5dbab700  0 [prometheus INFO root] Engine started.
2023-10-03T03:50:23.126+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] locking db connection for cephfs
2023-10-03T03:50:23.126+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] locked db connection for cephfs
2023-10-03T03:50:23.142+0000 7f9d63bb7700  0 [devicehealth INFO root] Check health
2023-10-03T03:50:23.169+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] SnapDB on cephfs changed for /volumes/subvolgrp/test_snap, updating next Timer
2023-10-03T03:50:23.188+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] Creating new snapshot timer for /volumes/subvolgrp/test_snap
2023-10-03T03:50:23.188+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] Will snapshot /volumes/subvolgrp/test_snap in fs cephfs in 1177s
2023-10-03T03:50:23.188+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] SnapDB on cephfs changed for /volumes/subvolgrp/test, updating next Timer
2023-10-03T03:50:23.204+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] Creating new snapshot timer for /volumes/subvolgrp/test
2023-10-03T03:50:23.204+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] Will snapshot /volumes/subvolgrp/test in fs cephfs in 1777s
2023-10-03T03:50:23.204+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] SnapDB on cephfs changed for /volumes/subvolgrp/test_snap_18_2, updating next Timer
2023-10-03T03:50:23.219+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] Creating new snapshot timer for /volumes/subvolgrp/test_snap_18_2
2023-10-03T03:50:23.219+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] Will snapshot /volumes/subvolgrp/test_snap_18_2 in fs cephfs in 1477s
2023-10-03T03:50:23.219+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] SnapDB on cephfs changed for /volumes/subvolgrp/test_snap_2, updating next Timer
2023-10-03T03:50:23.234+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] Creating new snapshot timer for /volumes/subvolgrp/test_snap_2
2023-10-03T03:50:23.235+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] Will snapshot /volumes/subvolgrp/test_snap_2 in fs cephfs in 3277s
2023-10-03T03:50:23.235+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] unlocking db connection for cephfs
2023-10-03T03:50:23.235+0000 7f9d6bbc7700  0 [snap_schedule DEBUG snap_schedule.fs.schedule_client] unlocked db connection for cephfs
2023-10-03T03:50:23.235+0000 7f9d6bbc7700  1 mgr load Constructed class from module: snap_schedule
2023-10-03T03:50:23.236+0000 7f9d6bbc7700  0 [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
2023-10-03T03:50:23.236+0000 7f9d6bbc7700  1 mgr load Constructed class from module: status
2023-10-03T03:50:23.238+0000 7f9d6bbc7700  0 [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
2023-10-03T03:50:23.239+0000 7f9d6bbc7700  1 mgr load Constructed class from module: telemetry
2023-10-03T03:50:23.241+0000 7f9d6bbc7700  0 [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
2023-10-03T03:50:23.263+0000 7f9d6bbc7700  0 [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
2023-10-03T03:50:23.263+0000 7f9d6bbc7700  0 [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
2023-10-03T03:50:23.263+0000 7f9d6bbc7700  1 mgr load Constructed class from module: volumes
2023-10-03T03:50:23.275+0000 7f9d399a3700 -1 client.0 error registering admin socket command: (17) File exists
2023-10-03T03:50:23.275+0000 7f9d399a3700 -1 client.0 error registering admin socket command: (17) File exists
2023-10-03T03:50:23.275+0000 7f9d399a3700 -1 client.0 error registering admin socket command: (17) File exists
2023-10-03T03:50:23.275+0000 7f9d399a3700 -1 client.0 error registering admin socket command: (17) File exists
2023-10-03T03:50:23.275+0000 7f9d399a3700 -1 client.0 error registering admin socket command: (17) File exists
2023-10-03T03:50:23.276+0000 7f9d3c9a9700 -1 client.0 error registering admin socket command: (17) File exists
2023-10-03T03:50:23.276+0000 7f9d3c9a9700 -1 client.0 error registering admin socket command: (17) File exists
2023-10-03T03:50:23.276+0000 7f9d3c9a9700 -1 client.0 error registering admin socket command: (17) File exists
2023-10-03T03:50:23.276+0000 7f9d3c9a9700 -1 client.0 error registering admin socket command: (17) File exists
2023-10-03T03:50:23.276+0000 7f9d3c9a9700 -1 client.0 error registering admin socket command: (17) File exists
2023-10-03T03:50:23.879+0000 7f9d6b3c6700  0 log_channel(cluster) log [DBG] : pgmap v3: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:50:24.835+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v4: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:50:26.836+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v5: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:50:28.837+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v6: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 206 B/s wr, 0 op/s
2023-10-03T03:50:30.839+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v7: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 147 B/s wr, 0 op/s
2023-10-03T03:50:32.838+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v8: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 114 B/s wr, 0 op/s
2023-10-03T03:50:34.840+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v9: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 93 B/s wr, 0 op/s
2023-10-03T03:50:36.841+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v10: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 85 B/s wr, 0 op/s
2023-10-03T03:50:38.842+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v11: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 85 B/s wr, 0 op/s
2023-10-03T03:50:40.844+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v12: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:50:42.845+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v13: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:50:44.847+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v14: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:50:46.848+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v15: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:50:48.848+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v16: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:50:50.850+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v17: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:50:52.851+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v18: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:50:53.002+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] scanning for idle connections..
2023-10-03T03:50:53.002+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] cleaning up connections: []
2023-10-03T03:50:53.241+0000 7f9d411b2700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:50:53.242+0000 7f9d411b2700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:50:53.254+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:50:53.254+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:50:53.258+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:50:53.258+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:50:54.853+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v19: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:50:56.853+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v20: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:50:58.854+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v21: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:51:00.857+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v22: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:51:02.858+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v23: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:51:04.860+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v24: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
2023-10-03T03:51:06.860+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v25: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
2023-10-03T03:51:08.861+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v26: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
2023-10-03T03:51:10.863+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v27: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
2023-10-03T03:51:12.864+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v28: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
2023-10-03T03:51:14.866+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v29: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
2023-10-03T03:51:16.866+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v30: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 9.7 KiB/s rd, 0 B/s wr, 16 op/s
2023-10-03T03:51:18.868+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v31: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
2023-10-03T03:51:20.869+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v32: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
2023-10-03T03:51:22.869+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v33: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
2023-10-03T03:51:22.884+0000 7f9d65bbb700  0 [balancer INFO root] Optimize plan auto_2023-10-03_03:51:22
2023-10-03T03:51:22.884+0000 7f9d65bbb700  0 [balancer INFO root] Mode upmap, max misplaced 0.050000
2023-10-03T03:51:22.885+0000 7f9d65bbb700  0 [balancer INFO root] do_upmap
2023-10-03T03:51:22.885+0000 7f9d65bbb700  0 [balancer INFO root] pools ['cephfs_data', 'default.rgw.meta', '.mgr', '.rgw.root', 'default.rgw.log', 'cephfs_metadata', 'default.rgw.control']
2023-10-03T03:51:22.887+0000 7f9d65bbb700  0 [balancer INFO root] prepared 0/10 changes
2023-10-03T03:51:22.930+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] _maybe_adjust
2023-10-03T03:51:22.960+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:51:22.960+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.5944563604427403e-07 of space, bias 1.0, pg target 4.783369081328221e-05 quantized to 1 (current 1)
2023-10-03T03:51:22.961+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:51:22.961+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'cephfs_data' root_id -1 using 0.6541124264663055 of space, bias 1.0, pg target 196.23372793989165 quantized to 256 (current 128)
2023-10-03T03:51:22.962+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:51:22.962+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'cephfs_metadata' root_id -1 using 0.0001589434267277695 of space, bias 4.0, pg target 0.06612046551875211 quantized to 16 (current 16)
2023-10-03T03:51:22.963+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:51:22.963+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0668360032312621e-09 of space, bias 1.0, pg target 1.1095094433605126e-07 quantized to 32 (current 32)
2023-10-03T03:51:22.963+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:51:22.963+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 1.285202370309838e-09 of space, bias 1.0, pg target 1.3366104651222314e-07 quantized to 32 (current 32)
2023-10-03T03:51:22.964+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:51:22.964+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
2023-10-03T03:51:22.964+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:51:22.964+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
2023-10-03T03:51:22.974+0000 7f9d55b1b700  0 [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
2023-10-03T03:51:23.041+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] scanning for idle connections..
2023-10-03T03:51:23.042+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] cleaning up connections: []
2023-10-03T03:51:23.050+0000 7f9d531d6700  0 [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
2023-10-03T03:51:23.242+0000 7f9d411b2700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:51:23.242+0000 7f9d411b2700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:51:23.255+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:51:23.255+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:51:23.258+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:51:23.259+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:51:24.871+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v34: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
2023-10-03T03:51:26.871+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v35: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
2023-10-03T03:51:28.873+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v36: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
2023-10-03T03:51:30.874+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v37: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
2023-10-03T03:51:32.874+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v38: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:51:34.877+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v39: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
2023-10-03T03:51:36.877+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v40: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
2023-10-03T03:51:38.879+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v41: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
2023-10-03T03:51:40.880+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v42: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
2023-10-03T03:51:42.881+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v43: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
2023-10-03T03:51:44.883+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v44: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
2023-10-03T03:51:46.884+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v45: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:51:48.885+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v46: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:51:50.887+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v47: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:51:52.888+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v48: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:51:53.043+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] scanning for idle connections..
2023-10-03T03:51:53.043+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] cleaning up connections: []
2023-10-03T03:51:53.244+0000 7f9d411b2700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:51:53.244+0000 7f9d411b2700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:51:53.256+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:51:53.256+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f9d57b88a58>)]
2023-10-03T03:51:53.257+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
2023-10-03T03:51:53.260+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:51:53.260+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f9d57b88860>)]
2023-10-03T03:51:53.261+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
2023-10-03T03:51:54.890+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v49: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 255 B/s wr, 0 op/s
2023-10-03T03:51:56.890+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v50: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 255 B/s wr, 0 op/s
2023-10-03T03:51:58.892+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v51: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 341 B/s wr, 0 op/s
2023-10-03T03:52:00.893+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v52: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 341 B/s wr, 0 op/s
2023-10-03T03:52:02.894+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v53: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 341 B/s wr, 0 op/s
2023-10-03T03:52:04.896+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v54: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 341 B/s wr, 0 op/s
2023-10-03T03:52:06.896+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v55: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 85 B/s wr, 0 op/s
2023-10-03T03:52:08.898+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v56: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 13 KiB/s rd, 85 B/s wr, 22 op/s
2023-10-03T03:52:10.899+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v57: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
2023-10-03T03:52:12.899+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v58: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
2023-10-03T03:52:14.901+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v59: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
2023-10-03T03:52:16.901+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v60: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
2023-10-03T03:52:18.903+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v61: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
2023-10-03T03:52:20.904+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v62: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
2023-10-03T03:52:22.895+0000 7f9d65bbb700  0 [balancer INFO root] Optimize plan auto_2023-10-03_03:52:22
2023-10-03T03:52:22.895+0000 7f9d65bbb700  0 [balancer INFO root] Mode upmap, max misplaced 0.050000
2023-10-03T03:52:22.895+0000 7f9d65bbb700  0 [balancer INFO root] do_upmap
2023-10-03T03:52:22.896+0000 7f9d65bbb700  0 [balancer INFO root] pools ['cephfs_data', '.mgr', 'default.rgw.control', 'cephfs_metadata', 'default.rgw.log', 'default.rgw.meta', '.rgw.root']
2023-10-03T03:52:22.898+0000 7f9d65bbb700  0 [balancer INFO root] prepared 0/10 changes
2023-10-03T03:52:22.904+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v63: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
2023-10-03T03:52:23.005+0000 7f9d55b1b700  0 [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
2023-10-03T03:52:23.014+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] _maybe_adjust
2023-10-03T03:52:23.043+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:52:23.043+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.5944563604427403e-07 of space, bias 1.0, pg target 4.783369081328221e-05 quantized to 1 (current 1)
2023-10-03T03:52:23.044+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:52:23.044+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'cephfs_data' root_id -1 using 0.6541124264663055 of space, bias 1.0, pg target 196.23372793989165 quantized to 256 (current 128)
2023-10-03T03:52:23.044+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:52:23.044+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'cephfs_metadata' root_id -1 using 0.00015894352671109814 of space, bias 4.0, pg target 0.06612050711181683 quantized to 16 (current 16)
2023-10-03T03:52:23.045+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:52:23.045+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0668360032312621e-09 of space, bias 1.0, pg target 1.1095094433605126e-07 quantized to 32 (current 32)
2023-10-03T03:52:23.045+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:52:23.045+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 1.285202370309838e-09 of space, bias 1.0, pg target 1.3366104651222314e-07 quantized to 32 (current 32)
2023-10-03T03:52:23.046+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:52:23.046+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
2023-10-03T03:52:23.046+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:52:23.046+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
2023-10-03T03:52:23.053+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] scanning for idle connections..
2023-10-03T03:52:23.053+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] cleaning up connections: []
2023-10-03T03:52:23.058+0000 7f9d531d6700  0 [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
2023-10-03T03:52:23.244+0000 7f9d411b2700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:52:23.244+0000 7f9d411b2700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:52:23.269+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:52:23.269+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:52:23.269+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:52:23.269+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:52:24.906+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v64: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
2023-10-03T03:52:26.907+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v65: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
2023-10-03T03:52:28.909+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v66: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
2023-10-03T03:52:30.910+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v67: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
2023-10-03T03:52:32.911+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v68: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
2023-10-03T03:52:34.912+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v69: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
2023-10-03T03:52:36.913+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v70: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:52:38.915+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v71: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:52:40.916+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v72: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:52:42.917+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v73: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:52:44.919+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v74: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:52:46.920+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v75: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:52:48.921+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v76: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:52:50.922+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v77: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:52:52.923+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v78: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:52:53.060+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] scanning for idle connections..
2023-10-03T03:52:53.060+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] cleaning up connections: []
2023-10-03T03:52:53.245+0000 7f9d411b2700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:52:53.245+0000 7f9d411b2700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:52:53.270+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:52:53.270+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:52:53.271+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:52:53.271+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:52:54.925+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v79: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:52:56.926+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v80: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:52:58.927+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v81: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:00.928+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v82: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:02.928+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v83: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:04.930+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v84: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:06.930+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v85: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:08.932+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v86: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:10.933+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v87: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:12.934+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v88: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:14.936+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v89: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:16.936+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v90: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:18.938+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v91: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:20.939+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v92: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:22.906+0000 7f9d65bbb700  0 [balancer INFO root] Optimize plan auto_2023-10-03_03:53:22
2023-10-03T03:53:22.906+0000 7f9d65bbb700  0 [balancer INFO root] Mode upmap, max misplaced 0.050000
2023-10-03T03:53:22.907+0000 7f9d65bbb700  0 [balancer INFO root] do_upmap
2023-10-03T03:53:22.907+0000 7f9d65bbb700  0 [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'cephfs_data', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs_metadata']
2023-10-03T03:53:22.909+0000 7f9d65bbb700  0 [balancer INFO root] prepared 0/10 changes
2023-10-03T03:53:22.940+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v93: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:23.016+0000 7f9d55b1b700  0 [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
2023-10-03T03:53:23.054+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] _maybe_adjust
2023-10-03T03:53:23.061+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] scanning for idle connections..
2023-10-03T03:53:23.061+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] cleaning up connections: []
2023-10-03T03:53:23.068+0000 7f9d531d6700  0 [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
2023-10-03T03:53:23.090+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:53:23.090+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.5944563604427403e-07 of space, bias 1.0, pg target 4.783369081328221e-05 quantized to 1 (current 1)
2023-10-03T03:53:23.090+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:53:23.090+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'cephfs_data' root_id -1 using 0.6541124264663055 of space, bias 1.0, pg target 196.23372793989165 quantized to 256 (current 128)
2023-10-03T03:53:23.091+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:53:23.091+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'cephfs_metadata' root_id -1 using 0.00015894352671109814 of space, bias 4.0, pg target 0.06612050711181683 quantized to 16 (current 16)
2023-10-03T03:53:23.092+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:53:23.092+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0668360032312621e-09 of space, bias 1.0, pg target 1.1095094433605126e-07 quantized to 32 (current 32)
2023-10-03T03:53:23.092+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:53:23.092+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 1.285202370309838e-09 of space, bias 1.0, pg target 1.3366104651222314e-07 quantized to 32 (current 32)
2023-10-03T03:53:23.093+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:53:23.093+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
2023-10-03T03:53:23.093+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:53:23.093+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
2023-10-03T03:53:23.245+0000 7f9d411b2700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:53:23.245+0000 7f9d411b2700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:53:23.270+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:53:23.270+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:53:23.270+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:53:23.270+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:53:24.941+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v94: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:26.943+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v95: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:28.945+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v96: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:30.945+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v97: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:32.946+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v98: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:34.948+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v99: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:36.949+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v100: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:38.951+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v101: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:40.951+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v102: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:42.952+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v103: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:44.954+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v104: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:46.954+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v105: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:48.956+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v106: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:50.956+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v107: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:52.956+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v108: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:53.067+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] scanning for idle connections..
2023-10-03T03:53:53.067+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] cleaning up connections: []
2023-10-03T03:53:53.245+0000 7f9d411b2700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:53:53.245+0000 7f9d411b2700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:53:53.270+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:53:53.271+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:53:53.271+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:53:53.271+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:53:54.958+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v109: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:56.959+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v110: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:53:58.961+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v111: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:00.962+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v112: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:02.962+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v113: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:04.964+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v114: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:06.965+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v115: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:08.967+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v116: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:10.968+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v117: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:12.968+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v118: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:14.971+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v119: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:16.972+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v120: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:18.974+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v121: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:20.975+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v122: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:22.919+0000 7f9d65bbb700  0 [balancer INFO root] Optimize plan auto_2023-10-03_03:54:22
2023-10-03T03:54:22.919+0000 7f9d65bbb700  0 [balancer INFO root] Mode upmap, max misplaced 0.050000
2023-10-03T03:54:22.919+0000 7f9d65bbb700  0 [balancer INFO root] do_upmap
2023-10-03T03:54:22.919+0000 7f9d65bbb700  0 [balancer INFO root] pools ['cephfs_metadata', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.log', 'cephfs_data']
2023-10-03T03:54:22.921+0000 7f9d65bbb700  0 [balancer INFO root] prepared 0/10 changes
2023-10-03T03:54:23.003+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v123: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:23.024+0000 7f9d55b1b700  0 [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
2023-10-03T03:54:23.073+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] scanning for idle connections..
2023-10-03T03:54:23.073+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] cleaning up connections: []
2023-10-03T03:54:23.092+0000 7f9d531d6700  0 [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
2023-10-03T03:54:23.114+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] _maybe_adjust
2023-10-03T03:54:23.163+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:54:23.171+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.5944563604427403e-07 of space, bias 1.0, pg target 4.783369081328221e-05 quantized to 1 (current 1)
2023-10-03T03:54:23.171+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:54:23.171+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'cephfs_data' root_id -1 using 0.6541124264663055 of space, bias 1.0, pg target 196.23372793989165 quantized to 256 (current 128)
2023-10-03T03:54:23.172+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:54:23.172+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'cephfs_metadata' root_id -1 using 0.00015894352671109814 of space, bias 4.0, pg target 0.06612050711181683 quantized to 16 (current 16)
2023-10-03T03:54:23.172+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:54:23.172+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0668360032312621e-09 of space, bias 1.0, pg target 1.1095094433605126e-07 quantized to 32 (current 32)
2023-10-03T03:54:23.173+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:54:23.173+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 1.285202370309838e-09 of space, bias 1.0, pg target 1.3366104651222314e-07 quantized to 32 (current 32)
2023-10-03T03:54:23.173+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:54:23.173+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
2023-10-03T03:54:23.174+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:54:23.174+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
2023-10-03T03:54:23.247+0000 7f9d411b2700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:54:23.247+0000 7f9d411b2700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:54:23.272+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:54:23.272+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:54:23.272+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:54:23.273+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:54:25.005+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v124: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:27.006+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v125: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:29.008+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v126: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:31.008+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v127: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:33.009+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v128: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:35.011+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v129: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:37.011+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v130: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:39.013+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v131: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:41.013+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v132: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:43.014+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v133: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:45.015+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v134: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:47.016+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v135: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:49.018+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v136: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:51.018+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v137: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:53.019+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v138: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:53.081+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] scanning for idle connections..
2023-10-03T03:54:53.081+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] cleaning up connections: []
2023-10-03T03:54:53.247+0000 7f9d411b2700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:54:53.247+0000 7f9d411b2700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:54:53.272+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:54:53.272+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:54:53.272+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:54:53.272+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:54:55.021+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v139: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:57.022+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v140: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:54:59.024+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v141: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:01.024+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v142: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:03.025+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v143: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:05.027+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v144: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:07.027+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v145: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:09.030+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v146: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:11.031+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v147: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:13.031+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v148: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:15.033+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v149: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:17.034+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v150: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:19.036+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v151: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:21.037+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v152: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:22.930+0000 7f9d65bbb700  0 [balancer INFO root] Optimize plan auto_2023-10-03_03:55:22
2023-10-03T03:55:22.930+0000 7f9d65bbb700  0 [balancer INFO root] Mode upmap, max misplaced 0.050000
2023-10-03T03:55:22.930+0000 7f9d65bbb700  0 [balancer INFO root] do_upmap
2023-10-03T03:55:22.930+0000 7f9d65bbb700  0 [balancer INFO root] pools ['cephfs_metadata', 'default.rgw.control', 'default.rgw.log', 'cephfs_data', '.rgw.root', '.mgr', 'default.rgw.meta']
2023-10-03T03:55:22.933+0000 7f9d65bbb700  0 [balancer INFO root] prepared 0/10 changes
2023-10-03T03:55:23.032+0000 7f9d55b1b700  0 [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
2023-10-03T03:55:23.039+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v153: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:23.093+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] scanning for idle connections..
2023-10-03T03:55:23.093+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] cleaning up connections: []
2023-10-03T03:55:23.115+0000 7f9d531d6700  0 [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
2023-10-03T03:55:23.186+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] _maybe_adjust
2023-10-03T03:55:23.211+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:55:23.211+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.5944563604427403e-07 of space, bias 1.0, pg target 4.783369081328221e-05 quantized to 1 (current 1)
2023-10-03T03:55:23.211+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:55:23.211+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'cephfs_data' root_id -1 using 0.6541124264663055 of space, bias 1.0, pg target 196.23372793989165 quantized to 256 (current 128)
2023-10-03T03:55:23.212+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:55:23.212+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'cephfs_metadata' root_id -1 using 0.00015894352671109814 of space, bias 4.0, pg target 0.06612050711181683 quantized to 16 (current 16)
2023-10-03T03:55:23.212+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:55:23.212+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0668360032312621e-09 of space, bias 1.0, pg target 1.1095094433605126e-07 quantized to 32 (current 32)
2023-10-03T03:55:23.213+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:55:23.213+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 1.285202370309838e-09 of space, bias 1.0, pg target 1.3366104651222314e-07 quantized to 32 (current 32)
2023-10-03T03:55:23.213+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:55:23.213+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
2023-10-03T03:55:23.214+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 8641440645120
2023-10-03T03:55:23.214+0000 7f9d5fbaf700  0 [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
2023-10-03T03:55:23.248+0000 7f9d411b2700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:55:23.248+0000 7f9d411b2700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:55:23.273+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:55:23.273+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:55:23.274+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:55:23.274+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:55:25.041+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v154: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:27.042+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v155: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:29.044+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v156: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:31.043+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v157: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:33.044+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v158: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:35.046+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v159: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:37.046+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v160: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:39.048+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v161: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:41.049+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v162: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:43.050+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v163: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:45.052+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v164: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:47.052+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v165: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:49.054+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v166: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:51.055+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v167: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:53.055+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v168: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:53.098+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] scanning for idle connections..
2023-10-03T03:55:53.098+0000 7f9d5531a700  0 [snap_schedule INFO mgr_util] cleaning up connections: []
2023-10-03T03:55:53.247+0000 7f9d411b2700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:55:53.247+0000 7f9d411b2700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:55:53.273+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:55:53.273+0000 7f9d3d1aa700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:55:53.273+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] scanning for idle connections..
2023-10-03T03:55:53.273+0000 7f9d3a1a4700  0 [volumes INFO mgr_util] cleaning up connections: []
2023-10-03T03:55:55.057+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v169: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:57.059+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v170: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:55:59.061+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v171: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:56:01.062+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v172: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:56:03.062+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v173: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:56:05.064+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v174: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:56:07.065+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v175: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:56:09.067+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v176: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:56:11.068+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v177: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:56:13.068+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v178: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:56:15.070+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v179: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:56:17.071+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v180: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
2023-10-03T03:56:19.073+0000 7f9d6a3c4700  0 log_channel(cluster) log [DBG] : pgmap v181: 273 pgs: 273 active+clean; 1.7 TiB data, 5.2 TiB used, 2.7 TiB / 7.9 TiB avail
[root@storagenode-1 ~]#