Re: cephfs max_file_size

On Tue, May 23, 2023 at 3:28 AM Dietmar Rieder
<dietmar.rieder@xxxxxxxxxxx> wrote:
>
> Hi,
>
> can the cephfs "max_file_size" setting be changed at any point in the
> lifetime of a cephfs?
> Or is it critical for existing data if it is changed after some time? Is
> there anything to consider when changing, let's say, from 1TB (default)
> to 4TB ?

Larger files take longer to delete (the MDS has to issue a delete op
on every one of the objects that may exist), and longer to recover if
their client crashes and the MDS has to probe all the objects looking
for the actual size and mtime.
This is all throttled, so it shouldn't break anything; we just want to
avoid the situation somebody ran into once where they accidentally
created a 1-exabyte RBD on their little 3-node cluster and then had to
suffer through "deleting" it. :D
-Greg
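
For reference, the limit is adjusted at runtime with `ceph fs set`. A short
sketch, assuming a filesystem named "cephfs" (substitute your own fs name)
and computing the 4 TiB value in bytes:

```shell
# 4 TiB expressed in bytes: 4 * 1024^4
BYTES=$((4 * 1024 * 1024 * 1024 * 1024))
echo "$BYTES"

# Raise the limit (needs a running cluster; "cephfs" is a placeholder name):
# ceph fs set cephfs max_file_size "$BYTES"

# Confirm the new value:
# ceph fs get cephfs | grep max_file_size
```

As a rough sense of the deletion cost Greg describes: with the default
4 MiB CephFS object size, a fully written 4 TiB file maps to on the order
of a million RADOS objects, each of which gets its own (throttled) delete op.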

>
> We are running the latest Nautilus release, BTW.
>
> Thanks in advance
>    Dietmar
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx