Re: How many pool for cephfs




On 1/24/24 10:08, Albert Shih wrote:

99.99% because I'm a newbie with Ceph and don't clearly understand how
authorisation works with CephFS ;-)

I strongly recommend asking an experienced Ceph consultant to help you design and set up your storage cluster.

It looks like you are trying to make design decisions that will heavily influence the performance of the system.

If I say 20-30 it's because I currently have around 25 «datasets» on my
classic ZFS/NFS server, exported to various servers.

The next question is how the "consumers" would access the filesystem: via NFS or mounted directly. Even with the second option you can separate client access via CephX keys, as David already wrote.
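As a rough sketch of that second option: CephX capabilities can restrict each client to its own subtree of the filesystem, which roughly maps to the per-dataset exports of a ZFS/NFS setup. The filesystem name "cephfs", the client id "client1" and the path "/projects/client1" below are placeholders, not values from this thread:

```shell
# Create a CephX key that only grants read/write access
# to one subdirectory of the CephFS filesystem.
ceph fs authorize cephfs client.client1 /projects/client1 rw

# Export the generated keyring so the client host can use it.
ceph auth get client.client1 > /etc/ceph/ceph.client.client1.keyring

# On the client host: mount only that subtree with the kernel client.
mount -t ceph :/projects/client1 /mnt/client1 \
    -o name=client1,secretfile=/etc/ceph/client1.secret
```

Each "consumer" then sees only its own directory, and revoking access is a matter of deleting or changing that one key (`ceph auth rm client.client1`).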

OK. My Ceph cluster has two sets of servers: the first set is for
services (mgr, mon, etc.) with SSDs and doesn't currently run any OSDs (but
still has 2 unused SSDs); the second set of servers has HDDs and 2 SSDs. The
data pool will be on the second set (with HDDs). Where should I run the MDS,
and on which OSDs?

Do you intend to use the Ceph cluster only for archival storage?
How large is your second set of Ceph nodes, and how many HDDs are in each? Do you intend to use the SSDs for the OSDs' RocksDB? Where do you plan to store the metadata pools for CephFS? They should be stored on fast media.
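One common way to keep CephFS metadata on fast media is a CRUSH rule restricted to the SSD device class, assigned to the metadata pool. This is a sketch, not a recommendation for this specific cluster; the rule name, pool names and PG count are illustrative:

```shell
# Create a replicated CRUSH rule that only places data on SSD-class OSDs.
# Arguments: <rule-name> <root> <failure-domain> <device-class>
ceph osd crush rule create-replicated replicated_ssd default host ssd

# Create the CephFS pools: metadata on SSDs, data on the default (HDD) rule.
ceph osd pool create cephfs_metadata 32
ceph osd pool set cephfs_metadata crush_rule replicated_ssd
ceph osd pool create cephfs_data 128

# Create the filesystem from the two pools.
ceph fs new cephfs cephfs_metadata cephfs_data
```

The MDS daemon itself stores nothing locally (all metadata lives in the metadata pool), so it mainly needs CPU and RAM; running it on the service nodes alongside mgr/mon is a common choice.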

Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

