Re: cephfs toofull


 



On Mon, 29 Aug 2016 12:51:55 +0530 gjprabu wrote:

> Hi Christian,
> 
>     Sorry for the subject, and thanks for your reply.
> 
> > That's incredibly small in terms of OSD numbers, how many hosts? What replication size? 
> 
>     Total hosts: 5.
>     Replication size: 2
>
At this replication size you need to act and replace/add OSDs NOW.
The next OSD failure will result in data loss.
 
So your RAW space is about 16TB, leaving you with 8TB of usable space. 

Which doesn't mesh with your "df", showing the ceph FS with 11TB used...
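To see where that space actually sits on a per-OSD basis, something along
these lines (commands from memory, but "ceph osd df" has been around for a
while and should be available on your version):

    ceph df
    ceph osd df tree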

> 
> >  And also quite large in terms of OSD size, especially with this configuration.
> 
>     What is the recommended size?
> 
There is no simple recommended size; it depends on the size of your
cluster (number of OSDs), network speed, etc.

At the very least, you don't want a single OSD (or, ideally, a single
host) to hold more than 10% of your capacity.
Smaller/more OSDs are better, as an individual OSD failure will have less
impact and take less time to recover from.
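To put rough numbers on that (based on the 5x 3.3TB you listed):

    5 x 3.3 TB          ~= 16.5 TB raw
    10% of 16.5 TB      ~= 1.65 TB per OSD at most
    with 3.3 TB OSDs    -> you would want at least 10 of them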

> 
> > Don't build a Ceph cluster that can't survive a node or OSD failure. 
> > Set your near full and full ratios accordingly. 
> 
>      We built the ceph cluster with the default settings. Is there any other way to manage this, or what exact thresholds should we set for the data size? Please explain.
> 

There are examples of how to set these ratios in the documentation.

In your case, with 5 OSDs on 5 hosts, a single OSD or host failure will
cost you 20% of your capacity (more, of course, if the PG distribution is
uneven).

So your mon_osd_nearfull_ratio should have been something like 0.75, and
the moment that WARN got triggered you needed to add more OSDs.
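For reference, something like this (values purely illustrative, adjust to
your cluster; the running ratios live in the PGMap, so the ceph.conf
entries only set the initial defaults):

    # ceph.conf, [global] section
    mon osd nearfull ratio = 0.75
    mon osd full ratio = 0.90

    # change them on a running cluster (syntax from memory, check the docs)
    ceph pg set_nearfull_ratio 0.75
    ceph pg set_full_ratio 0.90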

> 
> > And the obvious solution for your current problem is of course to re-add 
> > the dead OSD if possible or to add more OSDs (way more OSDs).
> 
>     I understand, but we still have free space even though it is throwing this error.
> 
You don't have enough space within the confines of your settings.

You could increase osd_backfill_full_ratio, but the next stop after that
is the full ratio, and when you reach that the cluster will stop entirely.
So don't do that; (re-)add OSDs.
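If you absolutely have to buy a little time while the new OSDs go in, the
knob looks like this (0.90 is only an example value, and it still leaves
you one small step away from the full ratio):

    ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.90'

Treat that strictly as a temporary measure, not a fix.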

Christian
> 
> Regards
> 
> Prabu GJ
> 
> ---- On Mon, 29 Aug 2016 12:30:21 +0530 Christian Balzer <chibi@xxxxxxx> wrote ----
> 
> Hello,
> 
> First of all, the subject is misleading.
> It doesn't matter if you're using CephFS, the toofull status is something
> that OSDs are in.
> 
> On Mon, 29 Aug 2016 12:06:21 +0530 gjprabu wrote:
> 
> > Hi All,
> > 
> > We are new with cephfs and we have 5 OSD and each size has 3.3TB.
> 
> That's incredibly small in terms of OSD numbers, how many hosts?
> What replication size?
> Something doesn't add up here, at 5x 3.3TB you would have 16.5TB and with
> 12TB used that would imply a replication of 1...
> 
> And also quite large in terms of OSD size, especially with this
> configuration.
> 
> > As of now data has been stored around 12 TB size, unfortunately osd5 went down and while remapped+backfill below error is showing even though we have around 2TB free spaces. Kindly provide the solution to solve this issue.
> 
> Don't build a Ceph cluster that can't survive a node or OSD failure.
> Set your near full and full ratios accordingly.
> 
> And the obvious solution for your current problem is of course to re-add
> the dead OSD if possible or to add more OSDs (way more OSDs).
> 
> Christian
> 
> > cluster b466e09c-f7ae-4e89-99a7-99d30eba0a13
> > health HEALTH_WARN
> > 51 pgs backfill_toofull
> > 57 pgs degraded
> > 57 pgs stuck unclean
> > 57 pgs undersized
> > recovery 1139974/17932626 objects degraded (6.357%)
> > recovery 1139974/17932626 objects misplaced (6.357%)
> > 3 near full osd(s)
> > monmap e2: 3 mons at {intcfs-mon1=192.168.113.113:6789/0,intcfs-mon2=192.168.113.114:6789/0,intcfs-mon3=192.168.113.72:6789/0}
> > election epoch 10, quorum 0,1,2 intcfs-mon3,intcfs-mon1,intcfs-mon2
> > fsmap e26: 1/1/1 up {0=intcfs-osd1=up:active}, 1 up:standby
> > osdmap e2349: 5 osds: 4 up, 4 in; 57 remapped pgs
> > flags sortbitwise
> > pgmap v681178: 564 pgs, 3 pools, 5811 GB data, 8756 kobjects
> > 11001 GB used, 2393 GB / 13394 GB avail
> > 1139974/17932626 objects degraded (6.357%)
> > 1139974/17932626 objects misplaced (6.357%)
> > 506 active+clean
> > 51 active+undersized+degraded+remapped+backfill_toofull
> > 6 active+undersized+degraded+remapped
> > 1 active+clean+scrubbing
> > 
> > 192.168.113.113,192.168.113.114,192.168.113.72:6789:/ ceph 14T 11T 2.4T 83% /home/build/cephfsdownloads
> > 
> > Regards
> > Prabu GJ
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




