Re: cephfs toofull

Hi Christian,

            Sorry about the subject line, and thanks for your reply.

> That's incredibly small in terms of OSD numbers, how many hosts? What replication size?
    Total hosts: 5.
    Replicated size: 2

>  And also quite large in terms of OSD size, especially with this configuration.

    What is the recommended size?

> Don't build a Ceph cluster that can't survive a node or OSD failure.
> Set your near full and full ratios accordingly.
 
     We built the cluster with the default settings. Is there any other way to manage this, and what exact thresholds should we set for this amount of data? Please advise.

> And the obvious solution for your current problem is of course to re-add
> the dead OSD if possible or to add more OSDs (way more OSDs).

    I understand, but we still have free space even though it is throwing this error.

Regards
Prabu GJ

---- On Mon, 29 Aug 2016 12:30:21 +0530 Christian Balzer <chibi@xxxxxxx> wrote ----


Hello,

First of all, the subject is misleading.
It doesn't matter if you're using CephFS, the toofull status is something
that OSDs are in.
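
To see exactly which OSDs are the problem, something like the following will show the per-OSD utilisation and the near full / full warnings (standard commands on any recent release):

    ceph health detail   # lists the specific OSDs that are near full or full
    ceph osd df          # per-OSD size, raw use, weight and PG count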

On Mon, 29 Aug 2016 12:06:21 +0530 gjprabu wrote:

>
> Hi All,
>
> We are new to CephFS. We have 5 OSDs, each 3.3 TB in size.

That's incredibly small in terms of OSD numbers. How many hosts?
What replication size?
Something doesn't add up here: at 5x 3.3TB you would have 16.5TB, and with
12TB used that would imply a replication of 1...
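
For a quick sanity check you can read the actual replication and raw usage straight off the cluster, e.g.:

    ceph osd dump | grep "replicated size"   # size/min_size of every pool
    ceph df                                  # raw used vs. the data stored per pool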

And also quite large in terms of OSD size, especially with this
configuration.

> As of now around 12 TB of data has been stored. Unfortunately osd5 went down, and during remapped+backfill the error below is showing even though we have around 2 TB of free space. Kindly provide a solution to this issue.

Don't build a Ceph cluster that can't survive a node or OSD failure.
Set your near full and full ratios accordingly.
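
Off the top of my head, on a Jewel-era cluster something along these lines should do it; the values below are only examples, so double-check the documentation for your exact version:

    # runtime change of the cluster-wide ratios
    ceph pg set_nearfull_ratio 0.85
    ceph pg set_full_ratio 0.95
    # backfill_toofull is decided by the OSDs' own backfill ratio
    ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.90'

and make the corresponding mon_osd_nearfull_ratio / mon_osd_full_ratio / osd_backfill_full_ratio settings persistent in ceph.conf.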

And the obvious solution for your current problem is of course to re-add
the dead OSD if possible or to add more OSDs (way more OSDs).
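
Roughly, and assuming the failed disk itself is still usable (the OSD id below is only an example, use whatever "ceph osd tree" reports as down):

    ceph osd tree                 # identify the down/out OSD
    systemctl start ceph-osd@4    # on its host, if only the daemon died (systemd systems)
    ceph osd in 4                 # mark it back in so PGs can backfill onto it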

Christian
>
>      cluster b466e09c-f7ae-4e89-99a7-99d30eba0a13
>       health HEALTH_WARN
>              51 pgs backfill_toofull
>              57 pgs degraded
>              57 pgs stuck unclean
>              57 pgs undersized
>              recovery 1139974/17932626 objects degraded (6.357%)
>              recovery 1139974/17932626 objects misplaced (6.357%)
>              3 near full osd(s)
>       monmap e2: 3 mons at {intcfs-mon1=192.168.113.113:6789/0,intcfs-mon2=192.168.113.114:6789/0,intcfs-mon3=192.168.113.72:6789/0}
>              election epoch 10, quorum 0,1,2 intcfs-mon3,intcfs-mon1,intcfs-mon2
>        fsmap e26: 1/1/1 up {0=intcfs-osd1=up:active}, 1 up:standby
>       osdmap e2349: 5 osds: 4 up, 4 in; 57 remapped pgs
>              flags sortbitwise
>        pgmap v681178: 564 pgs, 3 pools, 5811 GB data, 8756 kobjects
>              11001 GB used, 2393 GB / 13394 GB avail
>              1139974/17932626 objects degraded (6.357%)
>              1139974/17932626 objects misplaced (6.357%)
>                   506 active+clean
>                    51 active+undersized+degraded+remapped+backfill_toofull
>                     6 active+undersized+degraded+remapped
>                     1 active+clean+scrubbing
>
> 192.168.113.113,192.168.113.114,192.168.113.72:6789:/ ceph 14T 11T 2.4T 83% /home/build/cephfsdownloads
>
> Regards
>
> Prabu GJ
>
>


--
Christian Balzer Network/Systems Engineer
chibi@xxxxxxx     Global OnLine Japan/Rakuten Communications

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
