osd reweight vs osd crush reweight

Hi all,

we are running a 144-OSD Ceph cluster and a couple of OSDs are more than 80% full.

This is the general situation:

 osdmap e29344: 144 osds: 144 up, 144 in
      pgmap v48302229: 42064 pgs, 18 pools, 60132 GB data, 15483 kobjects
            173 TB used, 90238 GB / 261 TB avail

We are currently mitigating the problem with "osd reweight", but the more we read about this issue, the more doubts we have about whether we should be using "osd crush reweight" instead.
At the moment we have no plans to buy new hardware.
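
For reference, these are the two commands we are comparing (the OSD id and the weight values below are only examples, not our real ones):

 # temporary override in the 0-1 range, applied on top of the CRUSH weight
 ceph osd reweight 12 0.85

 # permanent change to the CRUSH weight itself (typically the disk size in TB)
 ceph osd crush reweight osd.12 1.64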

Our main question is: if a re-weighted OSD restarts and comes back with its original weight, will the data move back?
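
For context, we check both values in the output of "ceph osd tree", where the WEIGHT column is the CRUSH weight and the REWEIGHT column is the temporary override (output trimmed to one OSD, numbers illustrative):

 $ ceph osd tree
 ID  WEIGHT   TYPE NAME  UP/DOWN  REWEIGHT
 12  1.64000       osd.12      up  0.85000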

What is the correct way to handle this kind of situation?

Many thanks

Simone



--
Simone Spinelli <simone.spinelli@xxxxxxxx>
Università di Pisa
Settore Rete, Telecomunicazioni e Fonia - Serra
Direzione Edilizia e Telecomunicazioni