Re: Ceph Rebalance Issue

> On 3 July 2016 at 10:34, Roozbeh Shafiee <roozbeh.shafiee@xxxxxxxxx> wrote:
> 
> 
> Hi list,
> 
> A few days ago one of my OSDs failed and I removed it from the cluster, but the cluster has been in
> HEALTH_WARN ever since. After the OSD was taken out, the self-healing process started
> rebalancing data across the other OSDs.
> 
> My question is: the rebalancing gets close to the end but never completes, and I keep seeing
> this message at the end of the “ceph -s” output:
> 
> recovery io 1456 KB/s, 0 object/s
> 

Could you post the exact output of 'ceph -s'?

There is more information in that output which needs to be shown.

'ceph health detail' might also tell you more.
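
For reference, a quick way to gather both (a sketch, assuming the ceph CLI is available on a monitor node or any host with an admin keyring):

    # show overall cluster status, including PG states and recovery progress
    ceph -s

    # list the specific PGs/OSDs behind the HEALTH_WARN
    ceph health detail

Pasting the full, unedited output of both is usually the most helpful.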

Wido

> How can I get back to a HEALTH_OK state again?
> 
> My cluster details are:
> 
> - 27 OSDs
> - 3 MONs
> - 2048 pg/pgs
> - Each OSD has 4 TB of space
> - CentOS 7.2 with 3.10 linux kernel
> - Ceph Hammer version
> 
> Thank you,
> Roozbeh
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com