Re: cephfs file size limit of 1.1TB?

On Wed, May 24, 2017 at 12:30 PM, John Spray <jspray@xxxxxxxxxx> wrote:
> On Wed, May 24, 2017 at 8:17 PM, Jake Grimmett <jog@xxxxxxxxxxxxxxxxx> wrote:
>> Hi John,
>> That's great, thank you so much for the advice.
>> Some of our users have massive files so this would have been a big block.
>>
>> Is there any particular reason for having a file size limit?
>
> Without the size limit, a user can create a file of arbitrary size
> (without necessarily writing any data to it), such that when the MDS
> came to e.g. delete it, it would have to do a ridiculously large
> number of operations to check whether any of the objects that could
> exist within that range (according to the file size) really existed.
>
> The idea is that we don't want to prevent users creating files big
> enough to hold their data, but we don't want to let them just tell the
> system "oh hey this file that I never wrote anything to is totally an
> exabyte in size, have fun enumerating the objects when you try to
> delete it lol".
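
To put rough numbers on John's point -- assuming the default 4 MiB
CephFS object size, which is an assumption about your layout -- a sparse
file claimed to be 1 EiB implies hundreds of billions of backing objects
to enumerate:

    # rough illustration only: objects implied by a claimed file size,
    # assuming the default 4 MiB object size
    FILE_SIZE=$((1 << 60))              # a claimed 1 EiB file, never written to
    OBJECT_SIZE=$((4 * 1024 * 1024))    # 4 MiB
    echo $(( FILE_SIZE / OBJECT_SIZE )) # => 274877906944 (~275 billion objects)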

There's also the bit where the MDS needs to stat all the potential
objects when it's trying to identify the "real" size of a file
after a client disappears. I think you saw that recently, John?
Obviously it could be improved by limiting the total amount of
"not-there-yet" space we allow clients to grow a file, but for now I
think we double the max_file_size whenever it grows too close to the
limit. That one's more annoying to users than the delete because it
has to happen before they can access the file at all.
-Greg

>
> 1TB is a bit conservative these days -- that limit was probably set
> circa 10 years ago and maybe we should revisit it.  As a datapoint,
> what's your largest file?
>
>> Would setting
>> max_file_size to 0 remove all limits?
>
> Nope, it would limit you to only creating empty files :-)
>
> It's a 64 bit field, so you can set it to something huge if you like.
>
> John
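
For example (the filesystem name and value below are purely
illustrative), to raise the limit to 16 TiB and confirm it took effect:

    ceph fs set cephfs max_file_size 17592186044416   # 16 TiB, in bytes
    ceph fs get cephfs | grep max_file_size
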
>
>>
>> Thanks again,
>>
>> Jake
>>
>> On 24 May 2017 19:45:52 BST, John Spray <jspray@xxxxxxxxxx> wrote:
>>>
>>> On Wed, May 24, 2017 at 7:41 PM, Brady Deetz <bdeetz@xxxxxxxxx> wrote:
>>>>
>>>>  Are there any repercussions to configuring this on an existing large fs?
>>>
>>>
>>> No.  It's just a limit that's enforced at the point of appending to
>>> files or setting their size; it doesn't affect how anything is stored.
>>>
>>> John
>>>
>>>>  On Wed, May 24, 2017 at 1:36 PM, John Spray <jspray@xxxxxxxxxx> wrote:
>>>>>
>>>>>
>>>>>  On Wed, May 24, 2017 at 7:19 PM, Jake Grimmett <jog@xxxxxxxxxxxxxxxxx>
>>>>>  wrote:
>>>>>>
>>>>>>  Dear All,
>>>>>>
>>>>>>  I've been testing out cephfs, and bumped into what appears to be an
>>>>>>  upper
>>>>>>  file size limit of ~1.1TB
>>>>>>
>>>>>>  e.g:
>>>>>>
>>>>>>  [root@cephfs1 ~]# time rsync --progress -av /ssd/isilon_melis.tar
>>>>>>  /ceph/isilon_melis.tar
>>>>>>  sending incremental file list
>>>>>>  isilon_melis.tar
>>>>>>  1099341824000  54%  237.51MB/s    1:02:05
>>>>>>  rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]:
>>>>>>  Broken pipe (32)
>>>>>>  rsync: write failed on "/ceph/isilon_melis.tar": File too large (27)
>>>>>>  rsync error: error in file IO (code 11) at receiver.c(322)
>>>>>>  [receiver=3.0.9]
>>>>>>  rsync: connection unexpectedly closed (28 bytes received so far)
>>>>>>  [sender]
>>>>>>  rsync error: error in rsync protocol data stream (code 12) at
>>>>>> io.c(605)
>>>>>>  [sender=3.0.9]
>>>>>>
>>>>>>  Firstly, is this expected?
>>>>>
>>>>>
>>>>>  CephFS has a configurable maximum file size; it's 1TB by default.
>>>>>
>>>>>  Change it with:
>>>>>    ceph fs set <fs name> max_file_size <size in bytes>
>>>>>
>>>>>  John
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>>  If not, then does anyone have any suggestions on where to start
>>>>>> digging?
>>>>>>
>>>>>>  I'm using erasure coding (4+1, 50 x 8TB drives over 5 servers), with
>>>>>>  an
>>>>>>  nvme hot pool of 4 drives (2 x replication).
>>>>>>
>>>>>>  I've tried both Kraken (release), and the latest Luminous Dev.
>>>>>>
>>>>>>  many thanks,
>>>>>>
>>>>>>  Jake
>>>
>>>
>>
>> --
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


