Re: What is the max size of cephfs (filesystem)

During Cephalocon Beijing 2018 we had talks about Ceph clusters with more than 10,000 OSDs.

___________________________________

Clyso GmbH - Ceph Foundation Member

On 20.06.22 at 11:05, Anthony D'Atri wrote:

Currently the biggest HDD is 20TB.
According to news articles, HDDs up to 26TB are sampling. Mind you, they're SMR. And for many applications, having that much capacity behind a tired SATA interface is a serious bottleneck; I've seen deployments cap HDD size at 8TB because of this. But I digress...

30TB SSDs (e.g. Intel / Solidigm P5316) have been shipping for a while now.

1 exabyte means a 50,000 OSD cluster (without replication or EC).
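
(A quick back-of-the-envelope check of that figure, as a small Python sketch; the 3x replication and 4+2 EC profiles are illustrative assumptions, not from the thread:)

    def osds_needed(usable_bytes, osd_bytes, overhead=1.0):
        # overhead: 1.0 = raw, 3.0 = 3x replication, (k+m)/k for EC k+m
        return usable_bytes * overhead / osd_bytes

    EB, TB = 10**18, 10**12
    print(osds_needed(1 * EB, 20 * TB))          # 50000.0  (raw, as quoted)
    print(osds_needed(1 * EB, 20 * TB, 3.0))     # 150000.0 (3x replication)
    print(osds_needed(1 * EB, 20 * TB, 6 / 4))   # 75000.0  (EC 4+2)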
AFAIK CERN did some tests using 5,000 OSDs.
Big Bang III used 10,800! The CERN Big Bang findings have been priceless contributions to Ceph scalability.

I don't know of any clusters larger than CERN's.
The Big Bangs were all transient, I think. I would expect that there are clusters even larger than CERN's production deployment in certain organizations that don't talk about them.

So I am not saying it is impossible, but it is very unlikely to grow a single Ceph cluster to that size.
In high school I couldn't imagine using all 48KB on an Apple ][.

640KB ought to be enough for anyone (apocryphal)

When I ran Dumpling, a cluster of 450x 3TB OSDs was among the larger ones, according to Inktank at the time.

Basically, never say never.


Maybe you should search for alternatives, like HDFS, which I know of / have worked with at more than 50,000 HDDs without problems.
HDFS is a different beast FWIW.

On Mon, Jun 20, 2022 at 10:46 AM Arnaud M <arnaud.meauzoone@xxxxxxxxx> wrote:
Hello to everyone

I have looked on the internet but couldn't find an answer.
Do you know the maximum size of a Ceph filesystem? Not the max size of a
single file, but the limit of the whole filesystem?

For example, a quick search on ZFS on Google outputs:
A ZFS file system can store up to *256 quadrillion zettabytes* (ZB).
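
(That oddly-phrased figure appears to be ZFS's 128-bit address space written out; a quick sanity check in Python, assuming "quadrillion" and "zettabyte" are meant as the binary 2**50 and 2**70:)

    limit = 256 * 2**50 * 2**70   # 256 "quadrillion" ZiB, in bytes
    assert limit == 2**128        # ZFS is a 128-bit filesystem
    print(limit)                  # 340282366920938463463374607431768211456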

I would like to have the same answer for CephFS.

And if there is a limit, where is this limit coded? Is it hard-coded or is
it configurable?
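
(One CephFS limit that is configurable rather than hard-coded is the per-file cap, max_file_size, which defaults to 1 TiB. A hypothetical Python sketch of reading it by shelling out to the ceph CLI; the JSON field path is an assumption from memory, so verify against your release:)

    import json, subprocess

    # 'ceph fs get <name>' prints the filesystem's MDSMap, which carries
    # max_file_size; raise it with 'ceph fs set <name> max_file_size <bytes>'.
    out = subprocess.run(["ceph", "fs", "get", "cephfs", "--format=json"],
                         capture_output=True, text=True, check=True).stdout
    print(json.loads(out)["mdsmap"]["max_file_size"])   # assumed field path

The overall filesystem size has no analogous knob AFAIK; it is bounded by the capacity of the data pools behind it.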

Let's say someone wants to have a CephFS of up to an exabyte: would it be
completely foolish, or would the system, given enough MDS and servers and
everything needed, be usable?

Is there any other limit to a Ceph filesystem?

All the best

Arnaud
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



