Ceph OSD reweight command in Octopus


 



We have a Ceph Octopus cluster running 15.2.6, and it's indicating a near-full
OSD which I can see is not weighted equally with the rest of the OSDs.  I
tried the usual "ceph osd reweight osd.0 0.95" to force it down a
little, but unlike on our Nautilus clusters, I see no data movement when
issuing the command.  If I run "ceph osd tree", it shows the new reweight
value, but no data movement appears to be occurring.
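For reference, this is a sketch of what I'm running and how I'm checking for movement (osd.0 and 0.95 are from my case; the OSD id and weight will differ on your cluster):

```shell
# Lower the reweight (the 0..1 override, not the CRUSH weight)
ceph osd reweight osd.0 0.95

# Confirm the REWEIGHT column picked up the new value
ceph osd tree

# If data were moving, I'd expect PGs in backfill/recovery here
ceph -s

# And per-OSD utilization should start to even out over time
ceph osd df
```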

 

Is there some new thing in Octopus I am missing?  I looked through the
release notes for .7, .8, and .9 and didn't see any fixes that jumped out as
resolving a bug related to this.  The Octopus cluster was deployed using
ceph-ansible and upgraded to 15.2.6.  I plan to upgrade to 15.2.9 in the
coming month.
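One difference I'm wondering about, though I haven't confirmed it's the cause: the mgr balancer module (upmap mode) being active on the Octopus cluster but not on the Nautilus ones, since upmap entries could mask the effect of a manual reweight. A quick check would be:

```shell
# Is the balancer active, and in which mode?
ceph balancer status

# If upmap is in play, pg_upmap entries will show up in the osd dump
ceph osd dump | grep -i upmap

# Temporarily switching it off to test the reweight in isolation:
ceph balancer off
```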

 

Any thoughts?

 

Regards,

-Brent

 

Existing Clusters:

Test: Octopus 15.2.5 ( all virtual on nvme )

US Production(HDD): Nautilus 14.2.11 with 11 osd servers, 3 mons, 4
gateways, 2 iscsi gateways

UK Production(HDD): Nautilus 14.2.11 with 18 osd servers, 3 mons, 4
gateways, 2 iscsi gateways

US Production(SSD): Nautilus 14.2.11 with 6 osd servers, 3 mons, 4 gateways,
2 iscsi gateways

UK Production(SSD): Octopus 15.2.6 with 5 osd servers, 3 mons, 4 gateways

 

 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


