Re: Adjusting cephfs max_file_size

On 07/18/2018 04:07 PM, Yan, Zheng wrote:
On Wed, Jul 18, 2018 at 3:48 PM cgxu519 <cgxu519@xxxxxxx> wrote:
On 07/18/2018 03:38 PM, Yan, Zheng wrote:
On Wed, Jul 18, 2018 at 11:29 AM cgxu519 <cgxu519@xxxxxxx> wrote:
On 07/13/2018 03:12 AM, Patrick Donnelly wrote:
On Thu, Jul 12, 2018 at 8:13 AM, John Spray <jspray@xxxxxxxxxx> wrote:
On Thu, Jul 12, 2018 at 2:57 PM cgxu519 <cgxu519@xxxxxxx> wrote:
On 07/12/2018 05:37 PM, John Spray wrote:
On Thu, Jul 12, 2018 at 7:38 AM Chengguang Xu <cgxu519@xxxxxxx> wrote:
Hi guys,

Currently, we can arbitrarily change the max file size in cephfs as long as the new size is larger than 65536.
Could we change it to allow only increases? There seems to be no good reason to decrease the size.
What do you think?
If we made increases irreversible, then it would be a bit "scary" for
people to modify the setting at all -- it feels more user friendly to
let people change it both ways so that they can correct their mistake
if they set it too high.

While it's a bit annoying that someone can set a max_file_size that is
really smaller than their largest file, I think it's still useful to
be able to limit the size of new files.  We just need to make sure
it's clear to users that this is the maximum size for new files rather
than the actual largest file that exists.
One thing that concerns me: when max_file_size is smaller than some existing
files, we probably cannot operate (read/write) on the data beyond that range,
because the sanity check will adjust the real read/write size, or even return
an error directly when the offset is larger than max_file_size. So it looks
like a kind of truncation, and in some cases applications may think the file
is incomplete.
I haven't looked in detail, but hopefully we could fix that so that
the max_filesize() check is only enforced when exceeding the current
size of the file.  I agree that blocking overwrites in an existing big
file is a weird behaviour.
Agreed, issue here: http://tracker.ceph.com/issues/24894
Hello,

I've made some attempts to fix the issue on the kernel client side.
However, considering that both 32-bit and 64-bit clients can exist for the same cephfs,
taking the bigger value of the current file size and sb->s_maxbytes
is not always correct. Also, some operations (e.g. buffered read) go through
generic VFS code, so adjusting the validation condition there may impact others.
(I think local filesystems would be OK, but I'm not so sure about other
distributed filesystems.)

IMO, it may be better to fix the issue in cephfs rather than in the kernel client.
At the very least, we should not allow changing max_file_size to a value smaller
than the current maximum file size. At the same time, I think max_file_size
should be maintained at the filesystem level, not in the MDSMap, which affects
all filesystems in the cluster.

Please let me know if I'm missing something.

How about always setting sb->s_maxbytes to MAX_LFS_FILESIZE and doing the
check in cephfs code?
Is it OK for buffered write?
Buffered writes go through ceph_write_iter(); we can do the check after
calling generic_write_checks().

Ah, I misunderstood your meaning. I thought you meant a check in the MDS code.
That sounds feasible, so I'll give it a try. Thanks for the idea!

Thanks,
Chengguang




--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


