Reply: Re: CephFS single file size limit and performance impact


 



Maybe you can use RBD instead of CephFS to bypass the MDS? You could have your applications read from and write to RBD block devices directly.
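A minimal sketch of what direct block access could look like from an application, using the python3-rados / python3-rbd bindings (the pool name "rbd" and image name "bigdata" below are placeholders for illustration, not anything in your cluster):

import rados
import rbd

# Connect to the cluster and open the data pool directly -- no MDS involved.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')
try:
    # RBD images are thin-provisioned, so a very large image is cheap to create.
    rbd.RBD().create(ioctx, 'bigdata', 50 * 1024**4)  # 50 TiB
    image = rbd.Image(ioctx, 'bigdata')
    try:
        image.write(b'hello', 0)   # writes to disjoint offsets do not contend on a file-wide lock
        print(image.read(0, 5))
    finally:
        image.close()
finally:
    ioctx.close()
    cluster.shutdown()

Keep in mind that with plain RBD the applications must coordinate their own writes (or put their own format on top of the device), since there is no POSIX layer arbitrating access.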

________________________________
From: huxiaoyu@xxxxxxxxxxxx <huxiaoyu@xxxxxxxxxxxx>
Sent: Saturday, December 11, 2021 9:11:53 PM
To: Yan, Zheng <ukernel@xxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Subject: Re: CephFS single file size limit and performance impact

Concerning the recovery process for very large files, are there any solutions to alleviate the negative impact? Otherwise we may have to limit file size to an acceptable level ...
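If we do end up capping file size, a rough sketch of how that could be done, assuming the filesystem is named "cephfs": CephFS has a per-filesystem max_file_size setting enforced by the MDS, which could be set from a script like this.

import subprocess

# Hypothetical cap of 4 TiB -- choose whatever recovery time is tolerable.
max_file_size = 4 * 1024**4
subprocess.run(
    ['ceph', 'fs', 'set', 'cephfs', 'max_file_size', str(max_file_size)],
    check=True,
)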




huxiaoyu@xxxxxxxxxxxx

From: Yan, Zheng
Date: 2021-12-11 06:42
To: huxiaoyu@xxxxxxxxxxxx
CC: ceph-users
Subject: Re:  CephFS single file size limit and performance impact
On Sat, Dec 11, 2021 at 2:21 AM huxiaoyu@xxxxxxxxxxxx
<huxiaoyu@xxxxxxxxxxxx> wrote:
>
> Dear Ceph experts,
>
> I have a use case wherein the size of a single file may go beyond 50TB, and would like to know whether CephFS can support a single file with a size over 50TB. Furthermore, if multiple clients, say 50, want to access (read/modify) this big file, do we expect any performance issues, e.g. something like a big lock on the whole file? I wonder whether CephFS supports parallel access, i.e. multiple clients reading/writing different parts of the same big file...
>
> Comments, suggestions, experience are highly appreciated,
>

The problem is file recovery. If a client that opened the file in write
mode disconnects abnormally, the MDS needs to probe the file's objects
to recover the mtime and file size. Operations such as stat(2) hang
while the file is in recovery. For a very large file, this recovery
process may take a long time.
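To put rough numbers on it (an illustration only, assuming the default
4 MiB CephFS object size; actual layouts may differ):

file_size = 50 * 1024**4        # 50 TiB file
object_size = 4 * 1024**2       # 4 MiB default object size
objects = file_size // object_size
print(f'{objects:,} objects to probe')   # ~13.1 million objects

Even probing a few thousand objects per second, recovering a file of
that size can take on the order of an hour or more, and stat(2) on it
stalls for that whole time.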
> Kind regards,
>
> Samuel
>
>
>
> huxiaoyu@xxxxxxxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



