Re: Balance data on near full osd warning or error

http://ceph.com/docs/master/rados/operations/placement-groups/
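In case it helps, a minimal sketch of the pg_num / pgp_num adjustment that page describes (the pool name "volumes" and the target of 2048 are assumptions for illustration; list your actual pools with "ceph osd lspools" and size pg_num for your own OSD and replica counts):

# show how many placement groups a pool currently has
ceph osd pool get volumes pg_num
ceph osd pool get volumes pgp_num

# raise them if the pool is under-split (splitting is a one-way operation)
ceph osd pool set volumes pg_num 2048
ceph osd pool set volumes pgp_num 2048

More (and evenly sized) placement groups generally spread data more uniformly across OSDs, which is what the uneven df numbers below suggest is missing.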


2013/10/22 HURTEVENT VINCENT <vincent.hurtevent@xxxxxxxxxxxxx>
Hello,

We're using a small Ceph cluster with 8 nodes, each with 4 OSDs. It is used for instances and volumes on an OpenStack platform.

We're facing a HEALTH_ERR with full and near-full OSDs:

  cluster 5942e110-ea2f-4bac-80f7-243fe3e35732
   health HEALTH_ERR 1 full osd(s); 13 near full osd(s)
   monmap e1: 3 mons at {0=192.168.73.131:6789/0,1=192.168.73.135:6789/0,2=192.168.73.140:6789/0}, election epoch 2974, quorum 0,1,2 0,1,2
   osdmap e4127: 32 osds: 32 up, 32 in full
    pgmap v6055899: 10304 pgs: 10304 active+clean; 12444 GB data, 24953 GB used, 4840 GB / 29793 GB avail
   mdsmap e792: 1/1/1 up {0=2=up:active}, 2 up:standby

Here is the df output for these OSDs:

/dev/sdc           932G    785G  147G  85% /data/ceph/osd/data/1
/dev/sdd           932G    879G   53G  95% /data/ceph/osd/data/2
/dev/sde           932G    765G  167G  83% /data/ceph/osd/data/3
/dev/sdf           932G    754G  178G  81% /data/ceph/osd/data/4
/dev/sdc           932G    799G  133G  86% /data/ceph/osd/data/6
/dev/sdd           932G    818G  114G  88% /data/ceph/osd/data/7
/dev/sde           932G    814G  118G  88% /data/ceph/osd/data/8
/dev/sdf           932G    801G  131G  86% /data/ceph/osd/data/9
/dev/sdc           932G    764G  168G  83% /data/ceph/osd/data/11
/dev/sdd           932G    840G   92G  91% /data/ceph/osd/data/12
/dev/sde           932G    699G  233G  76% /data/ceph/osd/data/13
/dev/sdf           932G    721G  211G  78% /data/ceph/osd/data/14
/dev/sdc           932G    778G  154G  84% /data/ceph/osd/data/16
/dev/sdd           932G    820G  112G  88% /data/ceph/osd/data/17
/dev/sde           932G    684G  248G  74% /data/ceph/osd/data/18
/dev/sdf           932G    763G  169G  82% /data/ceph/osd/data/19
/dev/sdc           932G    757G  175G  82% /data/ceph/osd/data/21
/dev/sdd           932G    715G  217G  77% /data/ceph/osd/data/22
/dev/sde           932G    762G  170G  82% /data/ceph/osd/data/23
/dev/sdf           932G    728G  204G  79% /data/ceph/osd/data/24
/dev/sdc           932G    841G   91G  91% /data/ceph/osd/data/26
/dev/sdd           932G    795G  137G  86% /data/ceph/osd/data/27
/dev/sde           932G    691G  241G  75% /data/ceph/osd/data/28
/dev/sdf           932G    772G  160G  83% /data/ceph/osd/data/29
/dev/sdc           932G    738G  195G  80% /data/ceph/osd/data/36
/dev/sdd           932G    803G  129G  87% /data/ceph/osd/data/37
/dev/sde           932G    783G  149G  85% /data/ceph/osd/data/38
/dev/sdf           932G    844G   88G  91% /data/ceph/osd/data/39
/dev/sdc           932G    885G   47G  96% /data/ceph/osd/data/31
/dev/sdd           932G    708G  224G  76% /data/ceph/osd/data/32
/dev/sde           932G    802G  130G  87% /data/ceph/osd/data/33
/dev/sdf           932G    862G   70G  93% /data/ceph/osd/data/34

Some OSDs are nearly full, while others look fine to me (below 80%).
We're losing GBs we thought were usable. There is no custom CRUSH map.

Is there a way to rebalance data between OSDs? As I understand the documentation, we would do this by adding new OSD(s), or by switching off the most heavily used OSDs to force the cluster to rebuild the data elsewhere.

Is there a way to do this without adding OSDs? We're using Ceph version 0.67.1.
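For reference, one approach that does not require new disks is per-OSD reweighting; a minimal sketch (the OSD id 31 and the weight 0.9 are examples only, not recommendations):

# temporarily lower the reweight value (0.0 - 1.0) of the fullest OSD
# so some of its placement groups migrate elsewhere
ceph osd reweight 31 0.9

# or let Ceph pick overloaded OSDs automatically, reweighting those
# above roughly 120% of mean utilization by default
ceph osd reweight-by-utilization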

Thank you,



--
Best regards, Фасихов Ирек Нургаязович
Mobile: +79229045757
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
