RE: Balance data on near full osd warning or error

Hi,

Thank you, it's rebalancing now :)



________________________________________
From: Eric Eastman [eric0e@xxxxxxx]
Sent: Wednesday, October 23, 2013 01:19
To: HURTEVENT VINCENT; ceph-users@xxxxxxxxxxxxxx
Subject: Re: Balance data on near full osd warning or error

Hello,
What I have used to rebalance my cluster is:

ceph osd reweight-by-utilization
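
A minimal sketch of the full sequence, for reference (the 110 threshold below is
only an example; if you omit it, the command falls back to its default, which if
I remember right is 120, i.e. it only touches OSDs more than 20% above the
average utilization):

  # see exactly which OSDs are full / near full
  ceph health detail

  # lower the weight of OSDs more than 10% above the average utilization
  ceph osd reweight-by-utilization 110

  # watch the data migrate and check that the warnings clear
  ceph -w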


>we're using a small Ceph cluster with 8 nodes, each with 4 osds. People are
>using it through instances and volumes in an OpenStack platform.
>
>We're facing a HEALTH_ERR with full or near full osds:
>
>  cluster 5942e110-ea2f-4bac-80f7-243fe3e35732
>   health HEALTH_ERR 1 full osd(s); 13 near full osd(s)
>   monmap e1: 3 mons at {0=192.168.73.131:6789/0,1=192.168.73.135:6789/0,2=192.168.73.140:6789/0},
>election epoch 2974, quorum 0,1,2 0,1,2
>   osdmap e4127: 32 osds: 32 up, 32 in full
>   pgmap v6055899: 10304 pgs: 10304 active+clean; 12444 GB data, 24953 GB used,
>4840 GB / 29793 GB avail
>   mdsmap e792: 1/1/1 up {0=2=up:active}, 2 up:standby
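
For context, the thresholds behind those warnings are the monitor options
mon osd nearfull ratio (default .85) and mon osd full ratio (default .95); I'm
assuming the defaults here since your ceph.conf isn't shown. With 24953 GB used
out of 29793 GB the cluster is already around 84% full on average, so any uneven
placement pushes individual OSDs over those limits. The defaults look like this
in ceph.conf:

  [mon]
      # defaults shown; raising them only buys time, rebalancing is the real fix
      mon osd nearfull ratio = .85
      mon osd full ratio = .95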

Eric
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




