Re: ceph-fs tests

On 05.09.2012 18:52, Gregory Farnum wrote:
> On Wed, Sep 5, 2012 at 9:42 AM, Smart Weblications GmbH - Florian
> Wiessner <f.wiessner@xxxxxxxxxxxxxxxxxxxxx> wrote:
>> On 05.09.2012 18:22, Tommi Virtanen wrote:
>>> On Tue, Sep 4, 2012 at 4:26 PM, Smart Weblications GmbH - Florian
>>> Wiessner <f.wiessner@xxxxxxxxxxxxxxxxxxxxx> wrote:
>>>> I set up a 3-node Ceph cluster (0.48.1 argonaut) to test ceph-fs.
>>>>
>>>> I mounted ceph via fuse, then downloaded a kernel tree and decompressed it a few
>>>> times, then stopped one osd (osd.1). After a while of recovering, suddenly:
>>
>>>
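
For reference, the rough sequence I used was something like this (the
monitor address, mount point and tarball name below are just examples,
not my exact setup):

    # mount the filesystem with the FUSE client
    ceph-fuse -m 192.168.0.1:6789 /mnt/ceph

    # generate data by unpacking a kernel tree a few times
    cd /mnt/ceph && tar xjf ~/linux-3.5.tar.bz2

    # stop one osd and watch the cluster recover
    /etc/init.d/ceph stop osd.1
    ceph -w
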
>>> Please provide English error messages when you share things with the
>>> list. In this case I can figure out what the message is, but really,
>>> we're all pattern matching animals and the specific strings in
>>> /usr/include/asm-generic/errno.h are what we know.
>>>
>>
>> OK, will change locales.
>>
>>>> no space left on device, but:
>>>>
>>>> 2012-09-04 18:46:38.242840 mon.0 [INF] pgmap v2883: 576 pgs: 512 active+clean,
>>>> 64 active+recovering; 1250 MB data, 14391 MB used, 844 MB / 15236 MB avail;
>>>> 36677/215076 degraded (17.053%)
>>>>
>>>> so there is still space left?
>>>
>>> Only 844 MB available, given the pseudo-random placement policies,
>>> means you are practically out of space.
>>>
>>> It looks like you had only 15GB to begin with, and with typical
>>> replication, that's <5GB usable space. That is dangerously small for
>>> any real use; Ceph currently does not cope very well with running out
>>> of space.
>>>
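
For anyone following along: the thresholds at which the cluster gets
flagged as nearfull/full are configurable. A minimal ceph.conf sketch;
the values shown are, as far as I know, the documented defaults, so
check the docs for your version:

    [mon]
        ; warn once an OSD crosses this fraction of its capacity
        mon osd nearfull ratio = 0.85
        ; stop accepting writes once an OSD crosses this fraction
        mon osd full ratio = 0.95
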
>>
>> It is a test cluster running on my ThinkPad; its main purpose is to test cephfs,
>> so there is no need for real space. I added osd.1 again, and after recovery the
>> problem went away. I forced this situation to check how cephfs behaves when the
>> cluster is near-full, an osd fails, and ceph tries to recover until backfill fills
>> up the other osds so that the cluster is full.
>>
>> I observed on the client that no IO was possible anymore, so the client was
>> unusable.
>>
>> Is there a smarter way to handle this? It is bad that cephfs then stalls; it
>> would be better if it just returned that there is no space left but still allowed
>> read access... can this be tuned somewhere?
> 
> What client were you using? I believe it does allow reads while full —
> but your client can pretty easily get itself into a situation where it
> needs to perform writes in order to continue doing reads.
> 

ceph-fuse argonaut 0.48.1

ls, mount, df -h etc. all hung; I had to reboot the client...
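
In case it helps others who hit this: I believe the full threshold can
also be raised at runtime, long enough to delete some data and unwedge
the cluster. A sketch (not verified on argonaut; command names may have
changed between versions):

    # see which OSDs are near full / full
    ceph health detail

    # temporarily raise the full threshold from the default 0.95
    ceph pg set_full_ratio 0.98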

-- 

Kind regards,

Florian Wiessner

Smart Weblications GmbH
Martinsberger Str. 1
D-95119 Naila

fon.: +49 9282 9638 200
fax.: +49 9282 9638 205
24/7: +49 900 144 000 00 - 0,99 EUR/Min*
http://www.smart-weblications.de

--
Registered office: Naila
Managing director: Florian Wiessner
Commercial register: HRB 3840, Amtsgericht Hof
*from a German landline; prices from mobile networks may differ