On Wed, Sep 5, 2012 at 9:22 AM, Tommi Virtanen <tv@xxxxxxxxxxx> wrote:
> On Tue, Sep 4, 2012 at 4:26 PM, Smart Weblications GmbH - Florian
> Wiessner <f.wiessner@xxxxxxxxxxxxxxxxxxxxx> wrote:
>> I set up a 3 node ceph cluster 0.48.1 argonaut to test ceph-fs.
>>
>> I mounted ceph via fuse, then downloaded a kernel tree and decompressed
>> it a few times, then stopped one osd (osd.1); after a while of
>> recovering, suddenly:
>>
>> tar: linux-3.5.3/drivers/media/video/zoran/zr36060.h: Kann write nicht
>> ausführen: Auf dem Gerät ist kein Speicherplatz mehr verfügbar
>> linux-3.5.3/drivers/media/video/zr364xx.c
>> tar: linux-3.5.3/drivers/media/video/zr364xx.c: Kann write nicht
>> ausführen: Auf dem Gerät ist kein Speicherplatz mehr verfügbar
>> linux-3.5.3/drivers/memory/
>>
>> [in English: "tar: <file>: Cannot write: No space left on device"]
>
> Please provide English error messages when you share things with the
> list. In this case I can figure out what the message is, but really,
> we're all pattern-matching animals, and the specific strings in
> /usr/include/asm-generic/errno.h are what we know.
>
>> no space left on device, but:
>>
>> 2012-09-04 18:46:38.242840 mon.0 [INF] pgmap v2883: 576 pgs: 512 active+clean,
>> 64 active+recovering; 1250 MB data, 14391 MB used, 844 MB / 15236 MB avail;
>> 36677/215076 degraded (17.053%)
>>
>> there is space left?
>
> Only 844 MB available, combined with the pseudo-random placement
> policies, means you are practically out of space.
>
> It looks like you had only 15 GB to begin with, and with typical
> replication that is less than 5 GB of usable space. That is dangerously
> small for any real use; Ceph currently does not cope very well with
> running out of space.

In this particular case, one of the OSDs is more than 95% full and has
been marked as full (which stops cluster IO) to prevent those
catastrophic failures from occurring. If you look at the full output of
ceph -s, you should see a warning about full and near-full OSDs.
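For anyone hitting the same thing, here is a rough way to confirm which
OSD is full. These are the commands as I would run them on an
argonaut-era cluster; the data path in the last command is only an
example and depends on how your OSDs were deployed:

  # cluster summary; the health line should show the full / near-full warning
  ceph -s

  # health status by itself (newer releases also accept "ceph health detail"
  # for a per-OSD breakdown)
  ceph health

  # disk usage on each OSD host (example mount point, adjust to your layout)
  df -h /var/lib/ceph/osd/ceph-*

If I remember correctly, the thresholds are controlled by
"mon osd full ratio" (default 0.95) and "mon osd nearfull ratio"
(default 0.85). Raising them only buys a little headroom; the real fix
is adding disk or deleting data.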