On Sat, Dec 11, 2021 at 9:11 PM huxiaoyu@xxxxxxxxxxxx <huxiaoyu@xxxxxxxxxxxx> wrote:
>
> Concerning the recovery process for very large files, are there any solutions to alleviate the negative impact? Otherwise we may have to limit the file size to an acceptable level ...
>

If you can afford losing the mtime update, you can modify the MDS code so that it does not scan all of the file's objects.

> ________________________________
> huxiaoyu@xxxxxxxxxxxx
>
>
> From: Yan, Zheng
> Date: 2021-12-11 06:42
> To: huxiaoyu@xxxxxxxxxxxx
> CC: ceph-users
> Subject: Re: CephFS single file size limit and performance impact
> On Sat, Dec 11, 2021 at 2:21 AM huxiaoyu@xxxxxxxxxxxx
> <huxiaoyu@xxxxxxxxxxxx> wrote:
> >
> > Dear Ceph experts,
> >
> > I have a use case in which the size of a single file may go beyond 50 TB, and I would like to know whether CephFS can support a single file larger than 50 TB. Furthermore, if multiple clients, say 50, want to access (read/modify) this big file, should we expect performance issues, e.g. something like a big lock on the whole file? I also wonder whether CephFS supports parallel access, i.e. multiple clients reading and writing different parts of the same big file...
> >
> > Comments, suggestions, and experience are highly appreciated.
> >
>
> The problem is file recovery. (If a client opens the file in write
> mode and then disconnects abnormally, the MDS needs to probe the file's
> objects to recover its mtime and file size.) Operations such as stat(2)
> hang while the file is in recovery. For a very large file, the recovery
> process may take a long time.
>
> > Kind regards,
> >
> > Samuel
> >
> > huxiaoyu@xxxxxxxxxxxx
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
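
To put the recovery cost in perspective: CephFS stripes a file across fixed-size RADOS objects, and as described above the MDS probes the file's backing objects during recovery, so the work grows linearly with file size. Here is a minimal back-of-the-envelope sketch (plain Python, not Ceph code; the 4 MiB object size is an assumed default and depends on the file's layout):

```python
import math

def objects_to_probe(file_size_bytes, object_size_bytes=4 * 1024 * 1024):
    """Upper bound on the number of RADOS objects backing a file,
    assuming a fixed object size (4 MiB assumed here)."""
    return math.ceil(file_size_bytes / object_size_bytes)

# A 50 TiB file with a 4 MiB object size:
print(objects_to_probe(50 * 2**40))  # 13107200 objects
```

With roughly 13 million objects to probe for a single 50 TiB file, it is easy to see why recovery of a very large file can block stat(2) for a long time.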