Is ceph osd reweight always safe to use?

Hello,

On Tue, 09 Sep 2014 01:25:17 -0400 JR wrote:

> Greetings
> 
> After running for a couple of hours, my attempt to re-balance a near full
> disk has stopped with a stuck unclean error:
> 
Which is exactly what I warned you about below and what you should have
also taken away from fully reading the "Uneven OSD usage" thread.

This should also hammer home my previous point about your current cluster
size/utilization. Even with a better (don't expect perfect) data
distribution, the loss of one node might well leave you with a full OSD
again.
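
As a general sanity check before (or while) reweighting a cluster this
full, it is worth seeing how close each OSD already is to the full and
backfill ratios, and changing weights in small steps rather than one big
jump. A rough sketch; the OSD id, weight and data path below are
placeholders / typical defaults, and output details vary between Ceph
releases:

  # current CRUSH weights, reweight values and any near-full warnings
  ceph osd tree
  ceph health detail

  # per-OSD filesystem usage on each node (default data path assumed)
  df -h /var/lib/ceph/osd/ceph-*

  # nudge the fullest OSD down in small increments, letting backfill
  # settle (watch ceph -s) before the next step; "2" and "0.95" are
  # placeholder values, not taken from your cluster
  ceph osd reweight 2 0.95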

> root@osd45:~# ceph -s
>   cluster c8122868-27af-11e4-b570-52540004010f
>    health HEALTH_WARN 6 pgs backfilling; 6 pgs stuck unclean; recovery
> 13086/1158268 degraded (1.130%)
>    monmap e1: 3 mons at
> {osd42=10.7.7.142:6789/0,osd43=10.7.7.143:6789/0,osd45=10.7.7.145:6789/0},
> election epoch 80, quorum 0,1,2 osd42,osd43,osd45
>    osdmap e723: 8 osds: 8 up, 8 in
>     pgmap v543113: 640 pgs: 634 active+clean, 6
> active+remapped+backfilling; 2222 GB data, 2239 GB used, 1295 GB / 3535
> GB avail; 8268B/s wr, 0op/s; 13086/1158268 degraded (1.130%)
>    mdsmap e63: 1/1/1 up {0=osd42=up:active}, 3 up:standby
> 

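For the six PGs that are stuck, the usual first step is to ask the cluster
what they are waiting on, something along these lines (the pg id in the
last command is a placeholder to be taken from the dump_stuck output):

  # list stuck/unclean PGs and the OSDs they map to
  ceph health detail
  ceph pg dump_stuck unclean

  # query one of the listed PGs for its recovery/backfill state
  ceph pg 2.3f query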