Re: Adjusting cephfs max_file_size

On 07/12/2018 05:37 PM, John Spray wrote:
> On Thu, Jul 12, 2018 at 7:38 AM Chengguang Xu <cgxu519@xxxxxxx> wrote:
>> Hi guys,
>>
>> Currently, we can arbitrarily change the max file size in cephfs as long
>> as the value is larger than 65536. Could we change it to only allow
>> increases? There seems to be no good reason to decrease the size.
>> What do you think?
> If we made increases irreversible, then it would be a bit "scary" for
> people to modify the setting at all -- it feels more user-friendly to
> let people change it both ways so that they can correct their mistake
> if they set it too high.
>
> While it's a bit annoying that someone can set a max_file_size that is
> actually smaller than their largest file, I think it's still useful to
> be able to limit the size of new files.  We just need to make sure
> it's clear to users that this is the maximum size for new files rather
> than the actual size of the largest file that exists.
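
(For reference, the limit is a per-filesystem setting, changed with
something like:

    ceph fs set <fs_name> max_file_size <bytes>

where the value is given in bytes.)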

One thing that concerns me is that when max_file_size is smaller than some
existing files, we probably cannot operate (read/write) on the data range
beyond the limit: the sanity check will shrink the effective read/write
size, or even return an error directly when the offset is larger than
max_file_size. So it looks like a kind of truncation, and in some cases
applications may think the file is incomplete.
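
To illustrate, the check behaves roughly like this (a simplified sketch
in C, not the actual fs/ceph code; the function name is made up):

    #include <errno.h>
    #include <stdint.h>
    #include <sys/types.h>

    /* Illustrative sketch of the sanity check described above;
     * max_file_size is the value distributed by the MDS. */
    static ssize_t clamp_io(off_t pos, size_t count, uint64_t max_file_size)
    {
            if ((uint64_t)pos >= max_file_size)
                    return -EFBIG;  /* offset beyond the limit: hard error */
            if ((uint64_t)pos + count > max_file_size)
                    count = max_file_size - (uint64_t)pos;  /* shortened I/O */
            return (ssize_t)count;
    }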

It would be better to record the largest file size and disallow setting
max_file_size smaller than that, but I'm not so sure it is worth the
complexity.
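
Concretely, it could look something like this (a hypothetical sketch;
the struct, fields, and function name are all invented):

    #include <errno.h>
    #include <stdint.h>

    /* Hypothetical sketch of the idea above: track the largest file
     * size ever seen and refuse to lower max_file_size below it. */
    struct fs_limits {
            uint64_t max_file_size;   /* configured limit */
            uint64_t largest_seen;    /* updated whenever a file grows */
    };

    static int set_max_file_size(struct fs_limits *l, uint64_t new_max)
    {
            if (new_max < l->largest_seen)
                    return -EINVAL;   /* would strand data past the new limit */
            l->max_file_size = new_max;
            return 0;
    }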

Thanks,
Chengguang