On Thu, Jul 25, 2013 at 7:42 PM, Greg Chavez <greg.chavez@xxxxxxxxx> wrote:
> Any idea how we tweak this? If I want to keep my ceph node root
> volume at 85% used, that's my business, man.

There are config options you can set. On the monitors they are "mon osd
full ratio" and "mon osd nearfull ratio"; on the OSDs you may (not) want
to change "osd failsafe full ratio" and "osd failsafe nearfull ratio".

However, you should be *extremely careful* modifying these values. Linux
local filesystems don't much like to get this full to begin with, and if
you fill up an OSD enough that the local FS starts failing writes, your
cluster will become extremely unhappy. The OSD works hard to avoid doing
permanent damage, but its prevention mechanisms tend to involve stopping
all work. You should also consider what happens if the cluster is that
full and you then lose a node.

Recovering from situations where clusters get past these points tends to
involve manually moving data and babysitting things for a while; the
values are as low as they are in order to provide a safety net in case
you actually do hit them.

-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
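
[Editor's note: for reference, a minimal ceph.conf sketch showing where
the options Greg names would live. The numeric values are purely
illustrative, not recommendations -- per the warning above, raising them
is risky and the shipped defaults exist as a safety net.]

```ini
; ceph.conf -- illustrative values only, NOT recommendations
[mon]
; cluster-wide thresholds enforced by the monitors
mon osd full ratio = 0.97        ; cluster stops accepting writes past this
mon osd nearfull ratio = 0.90    ; cluster raises a health warning past this

[osd]
; per-OSD last-resort limits; normally you should leave these alone
osd failsafe full ratio = 0.98
osd failsafe nearfull ratio = 0.92
```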