Re: [EXTERN] Re: cephfs max_file_size

On 5/23/23 15:58, Gregory Farnum wrote:
On Tue, May 23, 2023 at 3:28 AM Dietmar Rieder <dietmar.rieder@xxxxxxxxxxx> wrote:

Hi,

Can the CephFS "max_file_size" setting be changed at any point in the
lifetime of a CephFS?
Or is it critical for existing data if it is changed after some time? Is
there anything to consider when changing it, let's say, from 1 TB (default)
to 4 TB?
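
For reference, a minimal sketch of such a change, assuming a filesystem
named "cephfs" (the name is illustrative; the value is given in bytes):

  # show the current limit (it appears in the fs dump)
  ceph fs get cephfs | grep max_file_size

  # raise the limit from the 1 TiB default to 4 TiB
  ceph fs set cephfs max_file_size 4398046511104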

Larger files take longer to delete (the MDS has to issue a delete op
for every one of the objects that may exist), and longer to recover if
their client crashes and the MDS has to probe all the objects looking
for the actual size and mtime.
This is all throttled, so it shouldn't break anything; we just want to
avoid the situation somebody once ran into where they accidentally
created a 1-exabyte RBD on their little 3-node cluster and then had to
suffer through "deleting" it. :D
-Greg
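
To put rough numbers on that, assuming the default 4 MiB CephFS object
size: a single 4 TiB file maps to 4 TiB / 4 MiB = 2^20 = 1,048,576 RADOS
objects, each of which the MDS must delete (or probe during recovery);
the 1-exabyte case above works out to about 2^38 (~275 billion) objects.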


Thanks for your detailed explanation.

Would it also be OK if we set the max to 5 TB, create some big files (>1 TB), and then set the max back to 1 TB? Would the big files then still be available and usable?
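
In other words, something like the following, again assuming a
filesystem named "cephfs" (values in bytes):

  ceph fs set cephfs max_file_size 5497558138880   # raise to 5 TiB
  # ... create files larger than 1 TiB ...
  ceph fs set cephfs max_file_size 1099511627776   # back to the 1 TiB default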

Best
   Dietmar


