384 active+clean; 19 TiB data, 45 TiB used, 76 TiB / 122 TiB avail; 3.4 KiB/s rd, 573 KiB/s wr, 20 op/s
Dec 23 11:58:25 c02 ceph-mgr: 2019-12-23 11:58:25.194 7f7d3a2f8700 0 log_channel(cluster) log [DBG] : pgmap v411196: 384 pgs: 384 active+clean; 19 TiB data, 45 TiB used, 76 TiB / 122 TiB avail; 3.3 KiB/s rd, 521 KiB/s wr, 20 op/s
Dec 23 11:58:27 c02 ceph-mgr: 2019-12-23 11:58:27.196 7f7d3a2f8700 0 log_channel(cluster) log [DBG] : pgmap v411197: 384 pgs: 384 active+clean; 19 TiB data, 45 TiB used, 76 TiB / 122 TiB avail; 3.4 KiB/s rd, 237 KiB/s wr, 19 op/s
Dec 23 11:58:29 c02 ceph-mgr: 2019-12-23 11:58:29.197 7f7d3a2f8700 0 log_channel(cluster) log [DBG] : pgmap v411198: 384 pgs: 384 active+clean; 19 TiB data, 45 TiB used, 76 TiB / 122 TiB avail; 3.2 KiB/s rd, 254 KiB/s wr, 17 op/s
Dec 23 11:58:31 c02 ceph-mgr: 2019-12-23 11:58:31.199 7f7d3a2f8700 0 log_channel(cluster) log [DBG] : pgmap v411199: 384 pgs: 384 active+clean; 19 TiB data, 45 TiB used, 76 TiB / 122 TiB avail; 2.9 KiB/s rd, 258 KiB/s wr, 17 op/s
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com