Re: rsync kernel client cephfs mkstemp no space left on device

On 10.10.2016 at 11:33, John Spray wrote:
> On Mon, Oct 10, 2016 at 9:05 AM, Hauke Homburg <hhomburg@xxxxxxxxxxxxxx> wrote:
>> On 07.10.2016 at 17:37, Gregory Farnum wrote:
>>> On Fri, Oct 7, 2016 at 7:15 AM, Hauke Homburg <hhomburg@xxxxxxxxxxxxxx> wrote:
>>>> Hello,
>>>>
>>>> I have a Ceph cluster with 5 servers and 40 OSDs. Currently there are
>>>> 85 GB of free space on this cluster, and the rsync directory holds lots
>>>> of pictures, about 40 GB of data.
>>>>
>>>> The servers run CentOS 7 with the latest stable Ceph. The client is a
>>>> Debian 8 machine with a 4.x kernel, and the cluster is mounted via CephFS.
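>>>>
>>>> For reference, the kernel mount looks roughly like this (the monitor
>>>> address and secret file here are placeholders):
>>>>
>>>>     mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret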
>>>>
>>>> When I sync the directory I often see the message "rsync: mkstemp: No
>>>> space left on device (28)". At that point I can still touch a file in
>>>> another directory in the cluster. The directory contains ~630000 files.
>>>> Is that too many files?
>>> Yes, in recent releases CephFS limits you to 100k dentries in a single
>>> directory fragment. This *includes* the "stray" directories that files
>>> get moved into when you unlink them, and is intended to prevent issues
>>> with very large folders. It will stop being a problem once we enable
>>> automatic fragmenting (soon, hopefully).
>>> You can raise that limit with the "mds bal fragment size max" config,
>>> but you're probably better off figuring out whether you've got an
>>> over-large directory or you're deleting files faster than the cluster
>>> can keep up. There was a thread about this very recently, and John
>>> included some details about tuning if you check the archives. :)
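>>>
>>> For example, something like this raises the limit at runtime (the value
>>> here is purely illustrative, not a recommendation):
>>>
>>>     ceph tell mds.* injectargs '--mds_bal_fragment_size_max 200000'
>>>
>>> or persistently in ceph.conf on the MDS nodes:
>>>
>>>     [mds]
>>>     mds bal fragment size max = 200000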
>>> -Greg
>> Hello,
>>
>> Thanks for the answer.
>> I enabled the "mds bal frag = true" option on the cluster.
>>
>> Today I read that I have to enable this option on the client, too. With
>> a FUSE mount I can do that with the ceph binary. I use the kernel module.
>> How can I do it there?
> mds_bal_frag is only a server-side thing.  You do also need to run
> "ceph fs set <name> allow_dirfrags true", which you can do from any
> client with an admin key (but again, this is a server-side setting,
> not a client setting).
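>
> For example, assuming the filesystem is named "cephfs":
>
>     ceph fs set cephfs allow_dirfrags true --yes-i-really-mean-it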
>
> Note that directory fragmentation is not enabled by default because it
> wasn't thoroughly tested ahead of Jewel; that's why it requires a
> --yes-i-really-mean-it.
>
> John
>
>> Regards
>>
>> Hauke
>>
Hello,

Yesterday I found the correct command line to enable allow_dirfrags:

ceph mds set allow_dirfrags true --yes-i-really-mean-it

I ran that command on a client with Debian 8, kernel 4, and the Ceph
client 10.2.3 (ceph-fs-common and ceph-common installed).

Currently I am rsyncing a few TB, including a directory with more than
100k entries, into the Ceph cluster, for testing.
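
For reference, the sync is essentially this (the paths are placeholders):

    rsync -a --partial /data/pictures/ /mnt/cephfs/pictures/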

Do you know of a roadmap for directory fragmentation becoming stable?
ceph.com is currently offline.

Regards

Hauke





-- 
www.w3-creative.de

www.westchat.de


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


