pool autoscale-status blank?

Hey y'all,

I've got a cephadm cluster on 17.2.5, a pretty basic setup. PG autoscaling is enabled for several pools and working fine, but while troubleshooting the autoscaler for one particular pool I noticed that 'ceph osd pool autoscale-status' now returns no output at all. I haven't looked at it in a while, but I believe it worked at some point during the Quincy release. Any suggestions? Let me know what info from the cluster would be helpful.
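In case it's easier to point me at something concrete, here's roughly what I was planning to poke at next (just a sketch; POOLNAME is a placeholder, and the mgr daemon name is the active one from the ceph -s below):

# the autoscaler is an always-on mgr module in quincy, but confirming it's listed
ceph mgr module ls | grep -i pg_autoscaler

# per-pool autoscale mode -- POOLNAME stands in for the pool I was troubleshooting
ceph osd pool get POOLNAME pg_autoscale_mode
ceph osd pool ls detail | grep -i autoscale

# bounce the active mgr and grep its log for autoscaler complaints
# (cephadm logs has to run on ceph-osd10, where the active mgr lives)
ceph mgr fail
cephadm logs --name mgr.ceph-osd10.zkwlba | grep -i autoscal

Happy to run any of these (or anything else) and post the output.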

For reference, though I think there's nothing useful here:
root@ceph-mon0:~# ceph -s
  cluster:
    id:     c146ea31-ef8b-42a7-8e89-fdef0b44d0a9
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum ceph-osd10,ceph-osd9,ceph-osd11,ceph-osd2,ceph-osd1 (age 22h)
    mgr: ceph-osd10.zkwlba(active, since 3m), standbys: ceph-mon0.zuwpfv, ceph-osd9.vbtmzi
    mds: 2/2 daemons up, 2 standby
    osd: 143 osds: 143 up (since 22h), 143 in (since 6w)
    rgw: 3 daemons active (3 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   30 pools, 5313 pgs
    objects: 34.40M objects, 149 TiB
    usage:   307 TiB used, 192 TiB / 500 TiB avail
    pgs:     5312 active+clean
             1    active+clean+scrubbing+deep

-Alex