Re: What is the max size of cephfs (filesystem)

Currently the biggest HDDs are around 20 TB, so 1 exabyte of raw capacity
means a cluster of roughly 50,000 OSDs (and that is without any replication
or EC overhead).
AFAIK CERN has done tests with around 5,000 OSDs, and I don't know of any
cluster larger than CERN's.
So I am not saying it is impossible, but it is very unlikely that a single
Ceph cluster would grow to that size.
Maybe you should look at alternatives such as HDFS, which I have worked
with at more than 50,000 HDDs without problems.
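
For anyone who wants to redo the math, here is a quick back-of-the-envelope
sketch in Python (the 20 TB drive size, the 3x replication and the 8+3 EC
profile are just example numbers I picked, not Ceph limits):

    import math

    TB = 10**12            # decimal terabyte
    EB = 10**18            # decimal exabyte

    usable   = 1 * EB      # target usable CephFS capacity
    osd_size = 20 * TB     # one 20 TB HDD per OSD

    def osds_needed(usable_bytes, space_overhead):
        # raw capacity = usable capacity * redundancy overhead,
        # then divide by the per-OSD drive size and round up
        raw = usable_bytes * space_overhead
        return math.ceil(raw / osd_size)

    print("no redundancy   :", osds_needed(usable, 1))       # 50,000 OSDs
    print("3x replication  :", osds_needed(usable, 3))       # 150,000 OSDs
    print("EC 8+3 (k=8,m=3):", osds_needed(usable, 11 / 8))  # 68,750 OSDs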

On Mon, Jun 20, 2022 at 10:46 AM Arnaud M <arnaud.meauzoone@xxxxxxxxx> wrote:
>
> Hello to everyone
>
> I have looked on the internet but couldn't find an answer.
> Do you know the maximum size of a Ceph filesystem? Not the max size of a
> single file, but the limit of the whole filesystem?
>
> For example, a quick Google search for ZFS outputs:
> A ZFS file system can store up to *256 quadrillion zettabytes* (ZB).
>
> I would like to have the same answer for CephFS.
>
> And if there is a limit, where is this limit coded? Is it hard-coded or is
> it configurable?
>
> Let's say someone wants to grow a CephFS up to the exabyte scale: would it be
> completely foolish, or would the system, given enough MDS daemons, servers and
> everything else needed, be usable?
>
> Is there any other limit to a Ceph filesystem?
>
> All the best
>
> Arnaud
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


