Re: What is the max size of cephfs (filesystem)

On 6/20/22 10:18, Serkan Çoban wrote:
Currently the biggest HDD is 20 TB, so 1 exabyte means a 50,000-OSD
cluster (without replication or EC).
AFAIK CERN did some tests using 5,000 OSDs; I don't know of any larger
clusters than CERN's.
So I am not saying it is impossible, but it is very unlikely that you can
grow a single Ceph cluster to that size.
Maybe you should look for alternatives, like HDFS, which I know of and
have worked with at more than 50,000 HDDs without problems.
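
To spell out the back-of-envelope math from the quote (a rough sketch only; the 3x replication and 8+3 EC profiles below are illustrative assumptions, not something stated above):

    # Rough OSD-count estimate for 1 EB of usable data on 20 TB HDDs.
    # Replication factor and EC profile are illustrative assumptions.
    USABLE_BYTES = 10**18           # 1 EB (decimal)
    HDD_BYTES = 20 * 10**12         # 20 TB per OSD

    def osds_needed(overhead):
        """OSD count for a given raw-to-usable overhead factor."""
        return USABLE_BYTES * overhead / HDD_BYTES

    print(osds_needed(1.0))         # no redundancy:   50,000 OSDs
    print(osds_needed(3.0))         # 3x replication: 150,000 OSDs
    print(osds_needed(11 / 8))      # EC 8+3:          68,750 OSDs

Either way you end up well beyond any publicly reported Ceph cluster size, which is the point being made above.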

Pretty sure, though, that the Ceph developers would be very helpful in resolving scaling issues if they were to arise. Large-scale tests have been performed [1].

The question is: even if it is possible at all, should you make it that big? Do you need a single filesystem to store all your data? With all your eggs in one basket, a filesystem corruption means a very prolonged outage on your whole fs, whereas by distributing the data over different filesystems "only" a part is affected. With so many disks there is probably always some recovery going on, which might keep the cluster from ever being HEALTH_OK, accumulate PG logs on the mons, etc. But if you plan for this beforehand, it might be doable.
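
To make the blast-radius argument concrete (a toy sketch; the filesystem counts are arbitrary and only for illustration):

    # Toy illustration: splitting the same capacity over several independent
    # filesystems limits how much data a single filesystem-level incident
    # can take offline at once. The numbers are arbitrary.
    TOTAL_PB = 1000                      # roughly 1 EB, expressed in PB

    for n_filesystems in (1, 4, 10):
        affected_pb = TOTAL_PB / n_filesystems
        print(f"{n_filesystems:2d} filesystem(s) -> up to "
              f"{affected_pb:.0f} PB offline if one of them is corrupted")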

Gr. Stefan

[1]: https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/M5J32SE7OMJJYVYBRUPDWY7HNJCPI7IG/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



