full/near full ratio

Thanks Craig. That's exactly what I was looking for.

--Jiten

On Sep 16, 2014, at 2:42 PM, Craig Lewis <clewis at centraldesktop.com> wrote:

> 
> 
> On Fri, Sep 12, 2014 at 4:35 PM, JIten Shah <jshah2005 at me.com> wrote:
> 
> 1. If we need to modify those numbers, do we need to update the values in ceph.conf and restart every OSD, or can we run a command on the MON that will overwrite it?
> 
> That will work.  You can also update the values without a restart using:
> ceph tell mon.\* injectargs '--mon_osd_nearfull_ratio 0.85'
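> 
> Injected values don't survive a daemon restart, so to make the change permanent you'd also want it in ceph.conf (a minimal sketch; 0.85 is just an example value):
> 
> [global]
>     mon osd nearfull ratio = 0.85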
> 
> 
> You might also need to look at mon_osd_full_ratio, osd_backfill_full_ratio, osd_failsafe_full_ratio, and osd_failsafe_nearfull_ratio.
> 
> Variables that start with mon should be sent to all the monitors (ceph tell mon.\* ...); variables that start with osd should be sent to the OSDs (ceph tell osd.\* ...).
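> 
> For example (the values here are only illustrative, not recommendations):
> 
> ceph tell mon.\* injectargs '--mon_osd_full_ratio 0.95'
> ceph tell osd.\* injectargs '--osd_backfill_full_ratio 0.90'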
> 
>  
> 
> 2. What is the best way to get the OSDs working again if we reach the full ratio?  You can't delete the data because read/write is blocked.
> 
> Add more OSDs.  Preferably before they become full, but it'll work even if they're already toofull.  It may take a while though; Ceph doesn't seem to prioritize which backfills happen first, so it can take some time before it gets around to the OSDs that are toofull.
> 
> Since not everybody has nodes and disks lying around, you can also stop all of your writes and bump the nearfull and full ratios.  I've bumped them while using ceph osd reweight, when some toofull disks wanted to exchange PGs.  Keep in mind that Ceph only blocks writes once usage is greater than the full ratio, so don't set full_ratio to 0.99.  You really don't want to fill up your disks.
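> 
> Something along these lines (the OSD id and values are made up for illustration):
> 
> ceph tell mon.\* injectargs '--mon_osd_nearfull_ratio 0.90'
> ceph tell osd.\* injectargs '--osd_backfill_full_ratio 0.92'
> ceph osd reweight 12 0.8    # shift data off a hypothetical full osd.12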
> 
> If all else fails (or you get a disk down to 0 kB free), you can manually delete some PGs on disk.  This is fairly risky and prone to human error causing data loss.  You'll have to figure out the best PGs to delete, and make sure you don't delete every replica of a PG.  You'll also want to disable backfilling (ceph osd set nobackfill); otherwise Ceph will repair things right back to toofull.
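> 
> A rough sketch of that (osd.12 and the PG path are hypothetical; stop the OSD first, and double-check which replica you're removing):
> 
> ceph osd set nobackfill
> # with osd.12 stopped, remove a chosen PG directory on disk, e.g.
> #   /var/lib/ceph/osd/ceph-12/current/3.7f_head/
> # later, once usage is back under the ratios:
> ceph osd unset nobackfill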
> 
>  
> 
> 3. If we add new OSDs, will it start rebalancing the OSDs, or do I need to trigger it manually, and how?
> 
> Adding and starting the OSDs will start rebalancing.  The expected location will change as soon as you add the OSD to the crushmap.  Shortly after the OSD starts, it will begin updating to make reality match expectations.  For most people, that happens in a single step, with ceph-deploy or a Config Management tool.
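> 
> If you're doing it by hand rather than with ceph-deploy, the crushmap and startup parts look roughly like this (the id, weight, and host name are made up):
> 
> ceph osd crush add osd.12 1.0 host=node4
> service ceph start osd.12    # sysvinit; use your init system's equivalent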
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
